Dynamic protein–RNA interactions in mediating splicing catalysis
Abstract The spliceosome is assembled via sequential interactions of pre-mRNA with five small nuclear RNAs and many proteins. Recent determination of cryo-EM structures for several spliceosomal complexes has provided deep insights into interactions between spliceosomal components and structural changes of the spliceosome between steps, but information on how the proteins interact with pre-mRNA to mediate the reaction is scarce. By systematic analysis of proteins interacting with the splice sites (SSs), we have identified many previously unknown interactions of spliceosomal components with the pre-mRNA. Prp8 directly binds over the 5′SS and the branch site (BS) for the first catalytic step, and the 5′SS and 3′SS for the second step. Switching the Prp8 interaction from the BS to the 3′SS requires Slu7, which interacts dynamically with pre-mRNA first, and then interacts stably with the 3′-exon after Prp16-mediated spliceosome remodeling. Our results suggest that Prp8 plays a key role in positioning the 5′SS and 3′SS, facilitated by Slu7 through interactions with Prp8 and substrate RNA to advance exon ligation. We also provide evidence that Prp16 first docks on the intron 3′ tail, then translocates in the 3′ to 5′ direction on remodeling the spliceosome.
INTRODUCTION
Pre-mRNA splicing proceeds via a two-step transesterification reaction. The reaction is catalyzed by the spliceosome, which is assembled by sequential binding of five snRNAs and numerous protein factors to the pre-mRNA (1)(2)(3). During spliceosome assembly, U1 and U2 bind to the 5′ splice site (5′SS) and the branch site (BS), respectively, and form base pairs with the conserved splice site sequences to form the prespliceosome. Following binding of the U4/U6.U5 tri-snRNP, the spliceosome undergoes a dramatic structural rearrangement, releasing U1 and U4, and forming new base pairs between U2 and U6, and between U6 and the 5′SS, to form the activated spliceosome.
RNA base pairings play roles in the recognition of splice sites by snRNAs, and also form the framework of the catalytic center of the active spliceosome. The structure is stabilized by protein factors. While components of U1 and U2 snRNPs play roles in stabilizing the interaction of U1 and U2 with the pre-mRNA, a protein complex associated with Prp19, named the NineTeen complex (NTC), is required for stabilizing the association of U5 and U6 with the spliceosome by promoting specific interaction of U5 and U6 with the pre-mRNA during spliceosome activation (4). NTC remains stably associated with the spliceosome until completion of the reaction, and can serve as a marker for postactivation spliceosomes (5,6).
Structural changes of the spliceosome are mediated by members of the DExD/H-box RNA helicase family, which utilize energy from ATP hydrolysis to unwind RNA duplexes or to remodel ribonucleoprotein complexes (7,8). Two DExD/H-box proteins, Prp2 and Prp16, are required during the catalytic phase. After activation of the spliceosome, Prp2 promotes destabilization of the U2 component SF3a/b (9,10) to allow binding of Cwc25, which is required for the first reaction (9,11). Cwc25 becomes stably associated with the spliceosome after the reaction, and requires Prp16 for its displacement before the second reaction can take place (12). Another protein factor, Yju2, which is required for the recruitment of Cwc25 to the spliceosome, is also displaced (12,13). After the removal of Yju2 and Cwc25, Slu7 and Prp18 are required to promote the second reaction (12). Upon completion of the reaction, mature mRNA is first released from the spliceosome, catalyzed by Prp22 (14), and the spliceosome is then disassembled into its separate components. In the yeast Saccharomyces cerevisiae, disassembly of the spliceosome is mediated by the NTR protein complex, comprising Ntr1, Ntr2, Cwc23 and the DEAH-box protein Prp43 (15)(16)(17)(18).
Recent determination of cryo-EM structures for several spliceosomal complexes has revealed the arrangement of protein and RNA components on spliceosomes at different catalytic stages (19)(20)(21)(22)(23)(24)(25)(26)(27)(28)(29)(30)(31), but provided little information on how protein components mediate positioning of the 3′ splice site (3′SS) for exon ligation, since only limited pre-mRNA sequence was observed. By systematic site-specific crosslinking analysis of proteins to RNA sequences across the splice sites for spliceosomes arrested at specific stages of the splicing pathway, we were able to elucidate changes in protein–RNA interactions along the pathway. Most of the proteins binding around the 5′SS do not significantly change their interaction modes throughout the catalytic phase. Prp8 was seen to crosslink to the 5′-exon near the splice junction, and Snu114, Cwc22 and Cwc21 crosslinked to positions further upstream, at around position −20 relative to the 5′SS. Proteins crosslinked to the intron sequences include Cwc2, Ecm2 and several NTC components. By contrast, proteins that bind to the BS–3′SS region show substantial changes between steps. On the Bact complex, the U2 component Hsh155 was seen to crosslink across the BS. After Prp2-mediated remodeling of the spliceosome, Prp8 replaces Hsh155 to crosslink across the BS, but switches its interaction to the 3′SS after Prp16-mediated spliceosome remodeling. Our results show that Prp8 directly binds over the 5′SS and the BS during the first catalytic step, and over the 5′SS and 3′SS during the second step.
Step one factor Cwc25 crosslinked to the region downstream of the BS only during the first step, and step two factors Slu7 and Prp22 crosslinked to both the intron and the 3′-exon flanking the Prp8-crosslinked site. We also found that Slu7 interacts dynamically with the intron 3′ tail (i3′T) after Prp2-mediated spliceosome remodeling and throughout the catalytic phase. Such interactions might facilitate positioning of the 3′SS at the active site for exon ligation. Our results suggest that Prp8 is the key player in positioning the splice sites to promote catalysis, while step one and step two factors facilitate or stabilize the interactions of Prp8 with the splice sites to promote the reactions.
Yeast strains
The following yeast strains were used:
Splicing extracts and substrates
Splicing extracts were prepared according to Cheng et al. (32). The pre-mRNA substrates were prepared by in vitro transcription with SP6 RNA polymerase. EcoRI-linearized pSP64-88 plasmid was used as the template for preparation of the regular actin substrate. We adapted the method of Sontheimer for preparation of 4sU-labeled pre-mRNA substrates (33). DNA templates were generated by polymerase chain reaction (PCR) using pSP64-88 plasmid as a template. Primers used for PCR are listed in Supplementary Table S1. For preparation of the 5′ RNA fragment, transcription reactions were performed in 40 mM Tris-HCl (pH 7.9), 6 mM MgCl2, 2 mM spermidine, 10 mM NaCl, 10 mM DTT, 2 units/μl RNasin, 0.5 mM each of the four NTPs, 6.6 nM α-32P-UTP (3000 Ci/mmol), 60 nM DNA template and 1.9 units/μl SP6 RNA polymerase. For preparation of the 3′ fragment, transcription reactions were performed under the same conditions with the addition of 2.5 mM 4sUpG dinucleotide. The RNA fragments were all purified by electrophoresis on 5% polyacrylamide gels. The 3′ fragment was phosphorylated with 32P at the 5′-end.
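For readers reproducing the transcription mix, the final concentrations listed above translate into pipetting volumes by the standard dilution relation C1V1 = C2V2. Below is a minimal sketch of that calculation; the stock concentrations are illustrative assumptions only (the paper does not list stocks), and the helper name is hypothetical.

```python
# Hypothetical pipetting calculator for the SP6 transcription mix described above.
# Final concentrations come from the text; stock concentrations are assumed for illustration.

def mix_volume(final_conc, stock_conc, total_ul):
    """Volume (in ul) of a stock needed to reach final_conc in a total_ul reaction (C1V1 = C2V2)."""
    return final_conc / stock_conc * total_ul

# component -> (final concentration, assumed stock concentration); units match within each entry
components = {
    "Tris-HCl pH 7.9 (mM)": (40, 1000),
    "MgCl2 (mM)":           (6, 100),
    "spermidine (mM)":      (2, 100),
    "NaCl (mM)":            (10, 1000),
    "DTT (mM)":             (10, 100),
    "each NTP (mM)":        (0.5, 10),
}

total = 20.0  # ul, an assumed reaction volume
for name, (final, stock) in components.items():
    print(f"{name}: {mix_volume(final, stock, total):.2f} ul of stock")
```

The same relation applies to the enzyme and template additions once their stock activities or concentrations are known.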
Immunoprecipitation and immunodepletion
Immunoprecipitation of the spliceosome was performed as described (5). For each 10–20 μl of splicing reaction mixture, 10 μl of protein A-Sepharose (PAS) conjugated with 1.5 μl of anti-Ntc20 antibody or 5 μl of anti-Prp16 antibody was used. For precipitation of the Slu7-V5- or Cwc25-HA-associated spliceosome, 1 μl of anti-V5 antibody or 15 μl of anti-HA antibody was used, respectively. For depletion of specific proteins from 100 μl of yeast extracts, 12.5 mg of PAS was swollen in NET-2 buffer (50 mM Tris-HCl, pH 7.4, 150 mM NaCl, 0.05% NP-40) to make a bed volume of 50 μl, and this was used for conjugation with specific antibodies. For depletion of Prp16, Spp2 and Yju2, 50 μl of anti-Prp16, 200 μl of anti-Spp2 and 120 μl of anti-Yju2 antibodies were used, respectively. For co-depletion of Slu7 and Prp22, 50 μl of anti-Slu7 antiserum and 1.3 μg of purified anti-Prp22 antibody were used. Each 100 μl of extract was incubated with antibody-conjugated PAS at 4°C for 1 h, and supernatants were collected as depleted extracts.
Crosslinking analysis
Splicing reactions were carried out with 0.4 nM 4sU-labeled actin pre-mRNA in the presence of 0.4 U/μl RNasin. The reaction mixtures were spread onto a piece of parafilm overlaid on an ice-cold aluminum block and irradiated with UV365nm for 10 min at a distance of ∼2 cm using a hand-held UV lamp (Model UVGL-25, UVP Inc.). Spliceosomes were incubated with specific antibodies conjugated to protein A-Sepharose. The precipitated spliceosomes were incubated at 37°C for 30 min following addition of an equal volume of a solution containing 0.06 U/μl RNase P1 and 6× Complete, EDTA-free Protease Inhibitor Cocktail (2× P1/CPI) before analysis by SDS-PAGE. For immunoprecipitation of specific proteins, splicing reaction mixtures were treated with P1/CPI as described above, and then denatured by heating at 100°C for 1.5 min in 1% SDS (w/v), 1% Triton X-100 (v/v) and 100 mM DTT. After centrifugation to remove insoluble material, the mixtures were diluted 10-fold with a cold buffer containing 50 mM Tris-HCl (pH 7.5), 300 mM NaCl and 0.05% NP-40, and subjected to immunoprecipitation. The precipitates were treated with P1/CPI and then analyzed by SDS-PAGE.
Mapping of crosslinked sites on Prp8
Splicing was carried out in extracts prepared from ZZ-tagged Prp8 strains with a TEV cleavage site inserted at various positions (34), using actin ACAC pre-mRNA with 4sU labeled at the +8 or +37 position relative to the branch site. Following UV irradiation and P1/CPI treatment, Prp8 was pulled down with a half volume of IgG-Sepharose. The precipitates were washed three times with NET-2 buffer and once with NET-2 supplemented with 5 mM DTT, and then incubated at 18°C for 30 min upon addition of 0.4 μg TEV protease per 10 μl of beads. The precipitates were further treated with P1/CPI and washed four times with a buffer containing 50 mM Tris-HCl (pH 7.4), 300 mM NaCl, 0.1% SDS and 0.1% Triton X-100 before fractionation by 6% SDS-PAGE.
RESULTS
To understand how proteins mediate structural changes of the spliceosome through interactions with pre-mRNA, we systematically analyzed crosslinking of proteins to the pre-mRNA at various stages of the splicing cycle. Splicing was arrested at different stages by depletion of specific factors from splicing extracts in the following ways: (i) depletion of Spp2 for post-activation (Bact); (ii) depletion of Yju2 for the pre-catalytic stage (B*); (iii) depletion of Prp16 for post-first-reaction (C); (iv) adding dominant-negative Prp16 mutant D473A protein to the splicing reaction for the Prp16-associated spliceosome (C1) (12,35); (v) using 3′ splice site mutant ACAC pre-mRNA for pre-second-reaction (C*); and (vi) adding dominant-negative Prp22 mutant S635A protein to the splicing reaction for post-second-reaction (P) (36–38) (Figure 1). Actin pre-mRNAs were synthesized with a single nucleotide replaced with 4-thiouridine (4sU) at several positions around the 5′SS, BS and 3′SS. Substrates were prepared in such a way that each pre-mRNA molecule contained only one 32P, at the 5′-end of the 4sU residue, ensuring that the crosslinked proteins would be labeled with 32P after digestion of crosslinked products with RNase P1. In several transcripts, one or two downstream nucleotides were also changed to purines to increase the yields of transcripts synthesized by SP6 RNA polymerase (Supplementary Figure S1). Splicing reaction mixtures were irradiated with UV365nm and then precipitated with specific antibodies. After RNase P1 treatment, total precipitated proteins were analyzed by SDS-PAGE. To identify specific crosslinked proteins, reaction mixtures were first treated with RNase P1 following UV irradiation, and then treated with denaturant before immunoprecipitation (Figure 2A). A summary of the results is shown in Figure 2B.
Identification of proteins crosslinked to the intron 3′ tail during the first catalytic step
The Bact complex was isolated using anti-HA antibody by assembling the spliceosome in Spp2-depleted Hsh155-HA extracts. Hsh155 was seen to crosslink across the branchpoint (Figure 3A and Supplementary Figure S2A), and the pre-mRNA retention and splicing (RES) complex components Bud13, Pml1 and Snu17 crosslinked in the downstream region between positions +22 BS and +37 BS (Supplementary Figure S2B), in agreement with previous reports (39)(40)(41). A strong crosslink of an unidentified protein of around 65 kDa, extending from the +18 BS position toward the 3′SS, was also seen in the Bact complex, and persisted through the first step (see below). A major change in the crosslinking pattern was seen after Prp2-mediated remodeling of the spliceosome, due to the release of the SF3 and RES complexes. In the B* complex, isolated by anti-Ntc20 antibody precipitation of spliceosomes assembled in Yju2-depleted extracts, Prp8 replaced Hsh155 in interacting with the BS, with strong crosslinking at positions +3 BS to +12 BS (Figure 3B and Supplementary Figure S3A). Prp45 was observed to crosslink at positions +8 BS to +37 BS in the i3′T (Supplementary Figure S3B). Interestingly, step two factors Prp22 and Slu7 were also seen to crosslink to the same region as Prp45 (Supplementary Figures S3B and S3C), indicating that they can enter the spliceosome and contact pre-mRNA at a much earlier stage, i.e. before they are required for action.
After the first reaction, Prp8 continued interacting with the branch site in the absence of Prp16 (as in complex C, Figure 3C and Supplementary Figure S3D), or with Prp16 bound using the dominant-negative D473A mutant (complex named C1, Figure 3D) (35,37), except that crosslinking at the +3 BS position was extensively weakened. Prp45 and Slu7 continued interacting with the same region of the i3′T in the C and C1 complexes, but interactions of Prp22 appeared to be excluded from the branch site (Figure 3C and D). We also observed weak crosslinking of Cwc25 at the +12 BS and +18 BS positions (Supplementary Figure S3D). Cwc25 was previously shown to crosslink to the +3 BS position (42), and indeed could be detected crosslinking at the +3 BS position on longer exposure of the film. This finding indicates that the primary Cwc25-interacting site is between positions +12 BS and +18 BS. In the C1 complex, Prp16-D473A presented strong crosslinks to the region from position +18 BS toward the 3′SS (Figure 3D and Supplementary Figure S3E), which may represent the docking site of Prp16 on the i3′T to mediate the release of Yju2 and Cwc25 from the catalytic center. These data reveal an ordered arrangement of Prp8, Cwc25 and Prp16 interactions with the i3′T.
A Slu7-dependent switch of the Prp8 binding site from the BS to the 3′SS prior to exon ligation
The splicing reaction carried out with ACAC pre-mRNA is blocked at the second catalytic reaction, with Slu7/Prp18/Prp22 stably bound to the spliceosome, forming the C* complex, which represents the structure immediately before exon ligation. The C* complex was assembled in Prp22-V5 extracts and selected with anti-V5 antibody. Prp8 was found to crosslink near the 3′SS, from positions +25 BS to +37 BS (Figure 3E and Supplementary Figures S4A and S4B), indicating that Prp8 switched its interaction from the BS to the 3′SS prior to exon ligation. Strong crosslinking of Prp22 at position +8 BS and weak crosslinking of Slu7 at +12 BS were also detected (Figure 3E and Supplementary Figure S4C). All three proteins continued interacting with the same regions of the i3′T after exon ligation, as observed in the P complex selected with anti-V5 antibody from splicing reactions performed in the presence of V5-tagged Prp22 dominant-negative S635A mutant protein (Figure 3F and Supplementary Figures S4D and S4E). Unlike at previous stages, when several other proteins were also seen to crosslink to the i3′T, only Prp8, Slu7 and Prp22 were detected in the C* and P complexes, with an additional unidentified 20-kDa protein at position +25 BS in the P complex. Prp22 was previously shown to bind to the i3′T after Prp16-mediated remodeling of the spliceosome, and then to translocate to the 3′-exon after exon ligation to promote mRNA release (40,43). However, the excised intron-lariat was still protected from oligo-directed RNase H cleavage (43). Consistently, we found that Prp22, as well as Slu7, continued to interact with the i3′T after exon ligation.
To determine whether Slu7/Prp18/Prp22 are required for switching the Prp8-interacting site, we depleted Slu7 and Prp22 from the extract and isolated the spliceosome with anti-Ntc20 antibody (complex named C2). We found that Prp8 remained bound to the BS, as in the C complex (Figure 3G), indicating a requirement for Slu7/Prp18/Prp22 in directing Prp8 to the 3′SS. Depletion of Prp22 alone did not prevent crosslinking of Prp8 to the 3′SS (data not shown), suggesting that Slu7 is the major player in mediating the switch. Interestingly, in the absence of Slu7/Prp18/Prp22, strong crosslinking of Prp16 was also observed in the region from the +8 BS to +25 BS positions (Supplementary Figure S5A), as opposed to from +18 BS to +37 BS for prp16-D473A (Figure 3D), suggesting movement of Prp16 from its docking site toward the BS driven by ATP hydrolysis. Without crosslinking, Prp16 was not observed to be stably associated with the spliceosome (Supplementary Figure S5B) unless ATP was depleted prior to immunoprecipitation, indicating that the intermediates could only be captured by crosslinking. The action of Prp16 results in the removal of Cwc25 and Yju2 from the active site of the spliceosome, and possibly also destabilization of the interaction of Prp8 with the BS, either directly upon moving close to the BS, or indirectly by removing Cwc25 and Yju2 from the catalytic center.
The fact that Prp8 crosslinked to the intron toward the 3′SS in the C* and P complexes suggests that the Prp8-interacting site might extend to the 3′-exon. Indeed, Prp8, Slu7 and Prp22 all crosslinked to the 3′-exon in an array, both before and after exon ligation (Figure 4 and Supplementary Figure S6). Prp8 crosslinked to positions +44 BS and +51 BS in both the C* and P complexes, but had an additional crosslink at position +60 BS in the P complex. Prp22 had a strong crosslink at position +60 BS in the C* complex, and crosslinking was more evenly distributed from positions +51 BS to +66 BS in the P complex. Slu7 crosslinking was observed primarily at position +51 BS, with an additional weaker crosslink at +60 BS in the P complex. Together, these results show that Prp8 binds over the 3′SS flanked by Slu7 and Prp22, suggesting that Prp8 plays a central role in positioning the 3′SS during exon ligation, while Slu7/Prp18/Prp22 may promote or stabilize the interaction of Prp8 with the 3′SS.
To establish whether Prp8 interacts with the BS and the 3′SS through different domains, we mapped the crosslinking sites on Prp8 using a TEV-tagged Prp8 system (34). Splicing was carried out in extracts prepared from ZZ-tagged Prp8 strains with a TEV cleavage site inserted at various positions, using actin ACAC pre-mRNA labeled with 4sU at the +8 BS or +37 BS position. Crosslinking at positions +8 BS and +37 BS was mapped to the same region of Prp8, on the C-terminal half of the linker domain between amino acid residues 1503 and 1673 (Figure 5), which contains the 1585-loop and mutations that affect the first or second reaction (23,44,45). The +37 BS position was seen to crosslink to additional sites downstream of the linker domain, possibly in the RH domain (Figure 5C, lane 5). The 1585-loop is located near the catalytic center of the spliceosome in cryo-EM structures (21,25). Our results suggest that the 3′SS is already positioned at the catalytic center in the C* complex even though the reaction cannot proceed. Furthermore, this segment of Prp8 plays a central role in the alignment of the splice sites for catalysis, whereas Slu7/Prp18/Prp22 play an auxiliary role in the second step.
Identification of proteins crosslinked to the 5′ splice site region
Prp8 has been shown to crosslink to the 5′SS, 3′SS and BS (34,40,46–52), but at which stage of the splicing pathway these interactions occur had not been established. We examined crosslinking at the 5′SS and found that Prp8 crosslinked to the 5′-exon throughout the catalytic phase, predominantly at the −2 5′SS position (Figure 6A) (53). This strongly suggests that Prp8 plays a central role in the alignment of the 5′SS and BS in the first step, and of the 5′SS and 3′SS in the second step. Snu114, Cwc22 and Cwc21 were seen to crosslink at positions −20 5′SS and −16 5′SS on the 5′-exon throughout the catalytic phase (Supplementary Figure S7A), in agreement with their observed location near the 5′SS in the cryo-EM structures (21,26,28). At the −2 5′SS position, crosslinking of Cwc24 and Prp11 was also detected in the Bact complex (53), as was crosslinking of Yju2 in the C complex (Figure 6A and Supplementary Figure S7B), indicating exchange of specific proteins interacting with the 5′SS during the first catalytic step.
A different set of proteins was found to crosslink to the intron sequences at the 5′SS (Figure 6B). As previously reported (54), Cwc2 was seen to crosslink near the 5′SS at the +9 5′SS and +12 5′SS positions throughout the catalytic phase (Supplementary Figure S8), suggesting a role for Cwc2 in orchestrating the structure of the RNA catalytic center. Several NTC components were found to crosslink to the intron downstream of the 5′SS (Figure 6B and C, and Supplementary Figures S9 and S10), supporting a role for the NTC in stabilizing extended U6–5′SS interactions, as suggested previously (4). Interestingly, Ntc30 exhibits strong crosslinking at positions +9 5′SS and +12 5′SS only in the B* and C complexes, but also interacts with the intron over a broad sequence expanse further downstream (Figure 6B and C). Ecm2 also interacts with a wide region downstream of the 5′SS, from positions +20 5′SS to +38 5′SS, and more prominently after Prp2-mediated spliceosome remodeling (Figure 6C and Supplementary Figure S11).
A model for positioning the 3′ splice site at the catalytic center of the spliceosome
Based on our crosslinking data, the information from cryo-EM structures (21,22,(24)(25)(26)28), and previous biochemical analyses (9,10,12,13,55,56), we propose a model for the transition from the first to the second catalytic step, as shown in Figure 7. (i) In the first step, Yju2, Cwc25 and Ntc30 (Ntc30 not shown in the figure) are located at the active site, with the cavity surrounded by the Prp8 reverse transcriptase (RT), large (L) and RH domains filled by Cwc25, to promote lariat formation (44). Slu7 may interact dynamically with the i3′T (shown by weak interactions of Slu7 with the i3′T). (ii) After branching, Prp16 docks on the intron downstream of the Cwc25-interacting site, and (iii) moves in the 3′ to 5′ direction to remove Cwc25, Yju2 and Ntc30 from the active site, thereby destabilizing the interaction between the Prp8 L domain and the BS. (iv) This action triggers a conformational change in Prp8, allowing rotation of the RH domain and removal of the branch helix from the catalytic site (21,26,28,57). The i3′T can then move freely in the active site until the 3′SS is positioned in the catalytic center. (v) Slu7 becomes stably associated with the spliceosome through interactions with exon-2 and Prp8. (vi) Prp22 docks on the i3′T, but is unable to get into the active site, possibly due to hindrance by the RH domain. Consequently, crosslinking of Prp22 was observed only close to the +8 BS position of the BS. Stalled at the gateway of the cavity, Prp22 interacts with both the i3′T and exon-2. No other proteins were detected to crosslink to the RNA between the two ends of the Prp22-crosslinked sites (Figure 3E). (vii) After exon ligation, the RH domain may rotate away to enlarge the cavity and destabilize the interaction with Slu7, allowing Prp22 to enter and disrupt the interaction between mRNA and Prp8 for the release of mRNA from the spliceosome.
DISCUSSION
By systematic crosslinking analysis using 4sU-labeled pre-mRNA, we have identified protein–RNA interactions at the 5′SS and BS–3′SS regions at defined steps of the catalytic phase of splicing. Splicing was blocked at different stages by depletion of specific factors, addition of dominant-negative Prp16 or Prp22 mutant protein, or use of 3′SS mutant ACAC pre-mRNA. Accumulated splicing complexes were then isolated after UV irradiation using antibodies against specific proteins. This resulted in isolation of seven distinct splicing complexes from the catalytic phase, although whether all of them are true functional intermediates of the spliceosome cannot be concluded. Pre-mRNAs with a single 4sU labeled at 13 positions spread around the 5′SS and 13 positions around the BS–3′SS region were synthesized for these experiments. The identity of crosslinked proteins was validated by immunoprecipitation of crosslinked products with specific antibodies, but only one or two representative positions from each cluster were analyzed. Although these analyses are not quantitative, they provide an overview of how protein components interact with the splice sites and their surrounding sequences in mediating structural changes of the spliceosome during the two catalytic steps. It is worth noting that each base of the pre-mRNA crosslinked to multiple proteins, and each protein crosslinked to multiple RNA residues, suggesting that the pre-mRNA interacts with the spliceosome in a rather dynamic manner. This may explain why only limited pre-mRNA sequences were detected in cryo-EM structures. Our data revealed that most of the components that interact with the 5′SS, either with the 5′-exon or with the intron, interact with specific regions of the pre-mRNA constitutively throughout the catalytic phase, whereas substantial changes of protein–RNA interactions occur in the BS–3′SS region.
In the cryo-EM structures, the last few bases of the 5′-exon are located between the Prp8 N and L domains, which show very little conformational change throughout the catalytic phase. In agreement, we observed Prp8 crosslinked to the −2 5′SS to −16 5′SS positions on the 5′-exon throughout the catalytic phase. The 5′-exon is presumed to extend out through a channel enclosed by Snu114 and the MA3 and MIF4G domains of Cwc22, located on one surface of the spliceosome, but the RNA sequence was not seen (20)(21)(22)24,25,28). Consistently, we noted Snu114, Cwc22, Cwc21 and a protein of around 65 kDa of unknown identity crosslinked to the −16 5′SS and −20 5′SS positions on the 5′-exon. This suggests that the 5′-exon is highly dynamic in this region and can contact any of the four proteins. The 65-kDa protein was not seen in cryo-EM structures. It may bind RNA only with low affinity, and can only be detected when crosslinked to RNA. Cwc22 is an eIF4G-like protein, and its human ortholog was shown to interact with the exon-junction-complex (EJC) core component eIF4AIII for deposition of the EJC on the mRNA for nonsense-mediated mRNA decay (58)(59)(60). The position of the Cwc22 crosslinking site conforms with the EJC binding site, supporting its proposed role as an adaptor in recruiting the EJC to the mRNA. In contrast, Cwc24, Prp11 and Yju2 all contact the −2 5′SS position of the 5′-exon only at specific stages. They are also only transiently associated with the spliceosome.

Figure 6. Analysis of total crosslinked proteins at the 5′ splice site. Splicing was carried out using actin ACAC pre-mRNA with 4sU labeled at the indicated positions of the 5′-exon (A) or intron (B, C) sequences at the 5′SS in extracts depleted of Spp2 (Bact), Yju2 (B*) or Prp16 (C), or in wild-type extracts (C*). Following UV irradiation, spliceosomes were precipitated with anti-Ntc20 antibody and analyzed by 12.5% (A) or 4–20% gradient (B, C) SDS-PAGE.
Cwc24 and Prp11 were detected to crosslink at the −2 5′SS position in the Bact complex, whereas Yju2 did so in the C complex. In the cryo-EM structures of the Bact complex, Cwc24 and Prp11 were observed to interact with the first base of the intron (G1) (20). In agreement, we have detected interactions of Cwc24 with G1 and U2 of the intron by UV-crosslinking, but only weakly with the 5′-exon (53), suggesting that crosslinking using 4sU-labeled pre-mRNA is much more efficient than UV-crosslinking. Prp11 has previously been shown to crosslink to the region upstream of the BS (41), and can interact with both the 5′SS and BS in the Bact complex (20). Prp11 is displaced together with the SF3a/b and RES complexes upon Prp2 action, which presumably disrupts the interactions of SF3a/b and RES with sequences in the BS region. Structural change in the BS region is likely to impact interactions of components at the 5′SS, leading to dissociation of Cwc24.
Proteins crosslinked to the intron sequence near the 5′SS are primarily NTC and NTC-related components. No protein was detected to crosslink to the +4 5′SS position of the intron, possibly due to base pairing of the residue with U6 snRNA after spliceosome activation. Cwc2 was previously shown to contact the +15 5′SS position by UV-crosslinking (54). We observed crosslinking of Cwc2 at positions even closer to the 5′SS, i.e. at positions +9 5′SS and +12 5′SS, suggesting that Cwc2 may play a critical role in supporting the structure of the catalytic center of the spliceosome. Isy1/Ntc30 also showed strong crosslinking at positions +9 5′SS and +12 5′SS in the B* and C complexes of the first step. Consistently, the N-terminal region of Ntc30 was detected at the catalytic center of the spliceosome in the cryo-EM structure of the C complex (22,25). Although Ntc30 is not seen in any other cryo-EM structures, our results demonstrate that, like other NTC components, Ntc30 also interacts with a broad range of the intron sequence downstream of the 5′SS throughout the catalytic phase. This finding suggests that Ntc30 may interact with the spliceosome in a dynamic manner, and that its N-terminal domain is stabilized when positioned close to the active site during the first step. Syf1/Ntc90, Cef1/Ntc85, Clf1/Ntc77 and Ecm2 were found to crosslink to positions +16 5′SS to +38 5′SS of the intron, a region previously shown to interact with the Lsm-binding site of U6 snRNA in an NTC-dependent manner (4). On binding to the region downstream of the 5′SS, NTC components may promote the release of the Lsm complex from U6 and further stabilize the interaction of the pre-mRNA with the U6 3′ tail. Whether NTC components also directly interact with U6 snRNA remains to be investigated.
Crosslinking in the i3′T region also underwent a major change after Prp2-mediated remodeling of the spliceosome. Prp8 replaces Hsh155 in interacting with the BS upon the release of SF3a/b, and remains bound until Yju2 and Cwc25 are displaced after the first reaction. Prp8 then binds to the 3′SS while retaining its interaction with the 5′-exon for exon ligation. These results suggest that Prp8 plays a key role in positioning the 5′SS and the BS for lariat formation, and the 5′SS and 3′SS for exon ligation. Mapping of crosslinked sites identified a region of Prp8, between amino acid residues 1503 and 1673 in the linker domain, that crosslinked to the +8 BS position in the C1 complex and to the +37 BS position in the C* complex. This region contains the 1585-loop located at the catalytic center of the spliceosome. This finding indicates that the BS and the 3′SS are positioned at the catalytic center of the C1 and C* complexes, respectively, and it is consistent with observations based on cryo-EM structures of the spliceosome C* complex, which show that the branch helix is oriented away from the catalytic center by 60 Å and that an RNA fragment, likely containing the 3′SS and the 3′-exon sequences, is located at the catalytic center. An additional crosslinked site toward the C-terminal end of Prp8, possibly in the RH domain, was also seen for the +37 BS position, presumably arising from the population of spliceosomes whose 3′SS had not yet been positioned at the catalytic center. These results suggest that during the second step, Prp16 mediates remodeling of the spliceosome not only to remove Yju2 and Cwc25 from the active site but also to destabilize the interaction of Prp8 with the BS, allowing the 3′SS to displace the BS for interaction with Prp8.
Prp16 was seen to crosslink to the i3′T from position +18 (relative to the BS) to the 3′SS when the dominant-negative ATPase mutant D473A was used in the splicing reaction. Similarly, the Prp2 helicase mutant S378L could crosslink to the i3′T in the same region of the pre-mRNA, but requires more than 20 nt downstream of the branchpoint for crosslinking (61). Both Prp2 and Prp16 interact with the C-terminal domain of Brr2, which has been shown to interact with many spliceosomal components and might serve as a platform for recruiting proteins to the spliceosome (62). Prp2 was proposed to first interact with Brr2, and then to translocate to the i3′T of the pre-mRNA. Upon ATP hydrolysis, Prp2 can move in the 3′ to 5′ direction to displace SF3a/b (61). Conceivably, Prp16 may act in a similar way to displace Yju2 and Cwc25 from the active site. Supporting this notion, our data show that when splicing was performed in Slu7-depleted extracts, wild-type Prp16 could crosslink to a region of the i3′T much closer to the BS than could the D473A mutant, implicating ATP hydrolysis-driven movement of Prp16 from its docking site toward the BS. Moreover, the Prp16 crosslinked sites overlap with those of Prp8 near the BS. This result suggests that Prp16 may disrupt the Prp8–i3′T interaction as it moves toward the BS, which then allows the 3′SS to enter the active site to interact with Prp8. Nevertheless, without crosslinking, Prp16 was not detected in association with the spliceosome by immunoprecipitation unless ATP was depleted from the reaction mixture prior to immunoprecipitation, suggesting that Prp16 only weakly interacts with the spliceosome during cycles of ATP hydrolysis.
Both Slu7 and Prp22 interact with the i3′T near the BS at earlier stages, before their functions are required. Crosslinking was detected as early as in the B* complex, after Prp2-mediated remodeling of the spliceosome. Slu7 has previously been shown to facilitate the release of Yju2 and Cwc25 after the first reaction (55), consistent with its early association with the spliceosome. Slu7 interacts with the i3′T throughout the catalytic phase, with crosslinks also detected in the P complex. Judging from its weak but broad-range crosslinking, Slu7 might interact with the i3′T nonspecifically. In contrast, a strong crosslink of Slu7 at position +51 relative to the BS was observed in the C* complex, accompanied by the switch of Prp8 crosslinking from the BS to the 3′SS. This finding suggests that Slu7 might interact strongly with the 3′-exon after Prp16-mediated spliceosome remodeling. Slu7 was seen to interact with the N, linker and RH domains of Prp8 in cryo-EM structures of the C* complex, but not in those of the C complex (19,21,27–30), suggesting that Slu7 is not stably associated with the spliceosome before formation of the C* complex. Since Slu7 is required for switching the Prp8 crosslinking site, the interaction of Slu7 with the 3′-exon and/or with Prp8 might be important for the switch. It is conceivable that Slu7 may access the spliceosome through dynamic interactions with pre-mRNA sequences downstream of the BS to prepare for docking to the spliceosome. When the branch helix moves away from the active site, the i3′T and Slu7 can enter the active site, allowing Slu7 to be deposited on the spliceosome upon interacting with Prp8. This may facilitate the interaction of Slu7 with the 3′-exon to promote or stabilize the interaction of Prp8 with the 3′SS.
Crosslinks of Prp8 to the BS region were observed primarily at positions +8 to +12 downstream of the BS. A strong crosslink at position +3 was also seen, but only in the B* and C2 complexes, which represent the spliceosome arrested at stages after SF3a/b and Yju2/Cwc25, respectively, are displaced. The crosslinked site on Prp8 was mapped to the same region containing the 1585-loop that crosslinks to positions +8 and +37 (data not shown). Conceivably, removing SF3a/b and Yju2/Cwc25 from the active site could create an open space at the catalytic center of the spliceosome, allowing the branch helix to move more freely into the active site in the case of the B* complex, or to exit it in the case of the C2 complex. This would also enable the +3 residue to interact with Prp8. The presence of SF3a/b or Yju2/Cwc25 likely prevents the +3 residue from interacting with Prp8, either by sequestering the two elements or by restraining the conformation of the active site. Cryo-EM structures of the Bact complex indeed reveal sequestering of the branch helix from the catalytic center by Hsh155/SF3b1 (20,24). In contrast, the branch helix is located at the active site with the +3 residue positioned close to but not in contact with Prp8 in cryo-EM structures of the C complex (22,25), suggesting that the active site of the spliceosome is structurally inflexible and prohibits free movement of the pre-mRNA.
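As a compact way of keeping track of the positions discussed above, here is a small illustrative lookup table. The numeric entries restate crosslink positions given in the text; the data structure and function names are our own illustration, not part of the study:

```python
# Illustrative lookup of protein-RNA crosslink positions reported in the text.
# Keys are (protein, complex); values are nucleotide positions measured
# downstream of the branch site (BS). Entries restate the results described
# above; the structure itself is only an organizational sketch.
CROSSLINKS_BS = {
    ("Prp8", "C1"): [8],    # Prp8 linker (1585-loop region) at +8
    ("Prp8", "C*"): [37],   # same Prp8 region at +37, i.e. at the 3'SS
    ("Prp8", "B*"): [3],    # +3 crosslink seen only in B* and C2
    ("Prp8", "C2"): [3],
    ("Slu7", "C*"): [51],   # strong Slu7 crosslink at +51 (3'-exon)
}

def positions(protein, complex_name):
    """Return reported crosslink positions (nt downstream of the BS)."""
    return CROSSLINKS_BS.get((protein, complex_name), [])
```

A query such as `positions("Prp8", "C*")` then returns the +37 site that marks the switch of Prp8 from the BS to the 3′SS.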
Although cryo-EM structures of the spliceosome reveal detailed arrangements and interactions of spliceosomal components, very little of the pre-mRNA sequence is observable, owing to the dynamic character of the pre-mRNA in these structures. How spliceosomal components interact with the pre-mRNA is thus not well elucidated, except at the conserved splice site sequences. By site-specific crosslinking analysis, we were able to visualize interactions of protein components with the pre-mRNA in the 5′SS, 3′SS and BS regions at various stages of the catalytic phase, even under conditions in which the interactions are more dynamic, such as the interactions of Slu7, Prp22 and Prp16 with the i3′T. It is particularly valuable to reveal the interactions of complexes in open conformational states (63), such as the B* and C2 complexes, whose dynamic nature at the active site would make structural determination difficult.
"year": 2018,
"sha1": "60b4714ab3862151992f537b462fd157d0d8973e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1093/nar/gky1089",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60b4714ab3862151992f537b462fd157d0d8973e",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
On the Vacuum Propagation of Gravitational Waves
We show that, for any local, causal quantum field theory which couples covariantly to gravity, and which admits Minkowski spacetime vacuum(a) invariant under the inhomogeneous proper orthochronous Lorentz group, plane gravitational waves propagating in such Minkowski vacuum(a) do not dissipate energy or momentum via quantum field theoretic effects.
INTRODUCTION
Gravitational waves propagate in empty space in general relativity. One basic class of such vacuum solutions is the plane waves in Minkowski spacetime. They describe, for example, the propagation of a gravitational wave, emitted by a bounded source, in a region far from its source 1. The metric is given by an exact solution (1) to the vacuum Einstein equation in which h_ij(u) is a symmetric, traceless, and otherwise arbitrary (d − 2)-by-(d − 2) matrix-valued smooth function of u. This describes a plane gravitational wave propagating along the light-like direction v, with h_ij(u) specifying the space-dependent profile for the (d − 2)(d − 1)/2 − 1 polarizations of the wave in d-dimensional spacetime 2. The independence of h_ij(u) from the coordinate v shows explicitly that, among other things, no dissipation occurs. For h_ij(u) with finite support, the metric describes a plane gravitational wave in Minkowski spacetime of finite duration.
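For reference, the plane-wave metric (1) in Brinkman coordinates takes the standard form (our reconstruction; overall sign and normalization conventions are assumptions):

```latex
% Plane-wave metric in Brinkman coordinates (reconstruction; conventions assumed)
\[
  ds^2 \;=\; 2\,du\,dv \;+\; h_{ij}(u)\,x^i x^j\,du^2 \;+\; \delta_{ij}\,dx^i dx^j,
  \qquad i,j = 1,\dots,d-2 .
\]
% The vacuum Einstein equation then reduces to the tracelessness condition
\[
  R_{uu} \;\propto\; \delta^{ij} h_{ij}(u) \;=\; 0 .
\]
```

This form makes the statements in the text immediate: h_ij(u) is traceless by the vacuum equation, and its v-independence is manifest.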
As classical solutions to the equations of motion, these plane wave spacetimes are robust under a large class of deformations of the gravitational dynamics. Any local correction to the Hilbert action obtained by adding higher powers of the Riemann tensor and covariant derivatives leaves the solutions intact. This follows from the fact that any scalar or second-rank tensor field constructed locally from the Riemann tensor and the covariant derivative necessarily vanishes on these backgrounds 3. Hence any higher-curvature and higher-derivative correction to the Hilbert action and to the Einstein equation does not materialize.
* Electronic address: xliu@perimeterinstitute.ca
1. For results of recent attempts at experimental detection, see [1].
2. For another coordinate system that makes explicit the symmetry of the plane wave front, see e.g. [2].
3. To jump ahead a little bit, we note that this by itself does not imply that ⟨vac, in|T^ren_μν(x)|vac, in⟩ vanishes; that quantity is, by definition, sensitive to the global properties of the background spacetime. A priori, non-vanishing second-rank tensor fields can be constructed from the Riemann tensor via non-local expressions.

In quantum theory, the vacuum is no longer empty. As ripples of spacetime curvature travel through the vacuum, the zero-point fluctuations of the quantum field in the background are generically amplified to higher-energy excitations. To calculate these effects, one needs to diagonalize the full time-dependent Hamiltonian for the quantum field at every instant of time, and to expand the in-vacuum state in the basis of the out-states. This calculation is in general difficult to carry out, except when the field theory is free. In the latter case, diagonalizing the full Hamiltonian boils down to solving free field equations in the time-dependent background, and the occupation number of the in-vacuum in each out-Fock-space state is determined by the Bogolubov coefficients (see e.g. [3]).
The metric (1) has a covariantly constant global null Killing vector field, which generates the translation along the plane wave (the v-direction). Had it instead generated an evolution across the wave (along u, for example), one could immediately conclude that no particles would be produced by the gravitational wave in any field theory, because one could choose to work with a (light-cone-)time-independent Hamiltonian by slicing the geometry properly. Nevertheless, in the case of free quantum fields [4], the presence of this along-the-wave Killing field is enough to forbid mixing between positive- and negative-frequency modes under evolution along the u-direction across the wave (while still allowing positive- and negative-frequency modes to mix among themselves, respectively). This establishes that plane gravitational waves do not dissipate energy or momentum by exciting the vacuum of a free quantum field. This calculation does not apply to interacting theories. In particular, it does not apply to the real world, since the relevant low-energy physics is governed by the nonlinear interacting theory of photons, in which the leading interaction comes from integrating out the electron [5]. One may wonder whether, taking into account this interacting nature of the QED (and Standard Model) vacuum, the gravitational wave could dissipate energy and momentum by producing extremely soft photons as it propagates 4. Were this to happen, attenuation of extremely high-frequency gravitational waves might accumulate over cosmic distance scales to a significant level 5.
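The leading correction referred to here is presumably the Euler–Heisenberg interaction; its textbook form (quoted from the standard result, which we assume is what [5] contains) is:

```latex
% Euler--Heisenberg effective interaction from integrating out the electron
% (standard result; reconstruction)
\[
  \mathcal{L}_{\mathrm{EH}} \;=\; \frac{2\alpha^2}{45\, m_e^4}
  \left[ \left(F_{\mu\nu}F^{\mu\nu}\right)^2
       + \tfrac{7}{4}\left(F_{\mu\nu}\tilde F^{\mu\nu}\right)^2 \right].
\]
```

The quartic field-strength terms are what make the low-energy photon theory nonlinear and interacting.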
We analyze the physical vacuum expectation value of the renormalized energy-momentum-stress tensor, ⟨vac, in|T^ren_μν(x)|vac, in⟩, in the plane gravitational wave spacetime in general quantum field theories which couple to gravity covariantly and which admit Minkowski space vacuum(a). This expectation value is sensitive to the global properties of the spacetime, and in principle can be computed from a regulated ⟨vac, in|T_μν(x)|vac, in⟩ by subtracting all cut-off dependence through local counterterms 6. Instead of pursuing this direct approach case by case, we determine the form of the finite piece ⟨vac, in|T^ren_μν(x)|vac, in⟩ for a very general class of field theories by exploiting the symmetries of the plane wave spacetime. We find that ⟨vac, in|T^ren_μν(x)|vac, in⟩ vanishes identically in the metric (1). This conclusion holds regardless of the nature and strength of the interaction 7 and the phase of the vacuum, and is independent of the dimension of the spacetime. It shows that gravitational waves far from their source propagate without dissipation via any quantum field theoretic effects.
GRAVITATIONAL PLANE WAVE SPACETIME
5. Dimensional analysis suggests that such effects, even if present, are completely insignificant in the frequency ranges of LIGO and LISA.
6. In addition to counter-terms present in flat space, new terms will be generated in curved backgrounds involving the geometric quantities (the Riemann tensor and covariant derivatives). Counter-terms that involve only the geometric quantities vanish in the plane wave background; those that involve both the geometric quantities and the other fields do not. In the case of the interacting photon theory coupled to gravity, for example, one such term that may be generated and, if generated, does not vanish, is ∫ d^d x √g R_{αβγδ} F^{αβ} F^{γδ}, with a cut-off-dependent coefficient.
7. The interaction, for example, does not need to preserve P, T, C separately in Minkowski spacetime.

The spacetime defined by the metric (1) is geodesically complete, and contains no closed time-like or light-like curves. It admits 2d − 3 Killing vector fields for generic choice of h_ij(u), although only one of them is manifest in the Brinkman coordinates (1). The 2(d − 2) non-manifest Killing vector fields all take the form of a transverse translation combined with an x-dependent translation along v, where (b₁(u), ..., b_{d−2}(u)) is a solution of a second-order ODE sourced by h_ij(u).
The ODE has 2(d − 2) independent solutions, which give rise to the same number of additional independent Killing vector fields. The commutator of the Killing fields associated with two solutions b_i(u) and b̃_j(u) is proportional to their Wronskian, which is u-independent. In a suitable basis, they generate the Heisenberg algebra with central element Z. The Killing vector fields in (6) preserve each u = const hypersurface, and generate on each such hypersurface the d − 1 translations and the d − 2 x-linearly-dependent translations along v. For any given Killing vector field, the actions on the constant-u hypersurfaces are u-dependent. To help characterize the algebraic aspect of this dependence, we introduce one further vector field, H. This is not a Killing field: it generates evolution along u, and would be upgraded into the light-cone Hamiltonian if we quantized field theories in the plane wave background.
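In standard notation, the objects just described can be written as follows (a reconstruction; signs and normalizations are assumptions):

```latex
% Non-manifest Killing fields of the plane wave and their defining ODE
\[
  X_b \;=\; b_i(u)\,\partial_i \;+\; b_i'(u)\,x^i\,\partial_v,
  \qquad b_i''(u) \;=\; h_{ij}(u)\,b_j(u),
\]
% their commutator, controlled by the u-independent Wronskian,
\[
  [X_b, X_{\tilde b}] \;=\; W(b,\tilde b)\,Z, \qquad
  W(b,\tilde b) \;=\; b_i\,\tilde b_i' - b_i'\,\tilde b_i, \qquad
  Z \;=\; \partial_v ,
\]
% and the (non-Killing) generator of evolution along u,
\[
  H \;=\; \partial_u .
\]
```

Note that W′ = b_i h_ij b̃_j − h_ij b_j b̃_i = 0 by the symmetry of h_ij, so the Wronskian is indeed u-independent, as required for the Heisenberg-algebra structure.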
In the basis (6), the commutators with H can be worked out explicitly, and Z remains central. Observe that the algebra does not close unless h_ij(u) is constant. This is as expected, since H does not generate an isometry unless the metric does not depend on u.
We are interested in plane gravitational waves of finite duration that propagate in otherwise flat spacetime, so we demand that h_ij(u) vanish outside [−T, T]. In the regions |u| > T, where (1) reduces to flat space, the Killing fields generate a subgroup of the Poincaré subgroup that preserves the null hyperplanes {u = u₀}. The latter is the same subgroup that preserves the vector field ∂/∂v, and is generated by the translation along v, the translations and rotations of the x^i among themselves, and d − 2 additional vector fields, which are linear combinations of boosts and rotations. All the translations and the d − 2 boost-rotations extend to the whole plane wave spacetime, while the rotations among the x^i themselves do not for generic h_ij(u). The translations and boost-rotations account for the total of 2d − 3 global Killing vector fields.
PLANE WAVES AS ROBUST CLASSICAL SOLUTIONS IN EFFECTIVE FIELD THEORIES
The Einstein–Hilbert action is an effective action for gravity. Various higher-dimensional operators may be added, and are presumably indeed present; their effects are small until the spacetime curvature approaches the mass scale that suppresses these operators, at which point the applicability of the effective theory itself starts to break down. Two natural questions come to mind: (1) what modifications may these corrections bring to the gravitational wave solutions? (2) how strong does the gravitational wave need to be to invalidate the application of the effective theory itself? In normal scenarios, the cut-off scale for the gravity effective action is assumed to be not too far below the Planck scale or the string scale.
The answer to the first question is well known (see, e.g., [6,7]): higher-curvature and higher-derivative corrections to the Einstein–Hilbert action, involving only quantities derived from the metric itself, do not modify the geometry (1).
The reason for this, as already mentioned in the introduction, is that any scalar and non-trivial second-rank tensor field that can be constructed based on the metric, the curvature, and the covariant derivative, necessarily vanish in the background (1). Hence both the corrections to the action, and the corrections to the Einstein equation, vanish for the plane wave spacetime.
To see how this geometric property comes about, we compute the Riemann tensor and its covariant derivatives in the Brinkman coordinate basis. The only nonvanishing components of the Riemann tensor are R_{uiuj} and those related to it by symmetry. Further inspection reveals that ∇_α∇_β...∇_γ R_{μνρσ} vanishes unless every index is either u or one of the d − 2 transverse indices i, and the total number of i-indices is less than or equal to 2. In fact, by inspecting the basic operations in the construction of these higher-rank tensor fields, a simple "sum rule" can be shown to hold: for any nonvanishing component, the total number of i-indices plus its degree as a homogeneous polynomial in the variables {x^i, i = 1, ..., d − 2} always equals 2. Technicalities aside, the upshot for now is that any component of the above tensors carrying a v-index vanishes. Note also that the inverse metric component g^{uν} is nonvanishing only for ν = v, and that h_ij(u) is traceless. It then follows by inspection that no nonvanishing scalar or second-rank tensor field (other than the metric itself) can be constructed: there are too many lower u-indices, and no lower v-index with which to contract them.
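With the Brinkman form of the metric, the standard computation gives (a reconstruction; the overall sign depends on curvature conventions):

```latex
% Only nonvanishing Riemann components (up to index symmetries; conventions assumed)
\[
  R_{uiuj} \;=\; -\,h_{ij}(u), \qquad i,j = 1,\dots,d-2 ,
\]
% consistent with the "sum rule": two transverse indices, degree zero in x^i.
```

This component has two i-indices and no x-dependence, so it saturates the sum rule quoted in the text; every covariant derivative adds u-indices only.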
Since the contributions of all higher-dimensional operators are parametrically smaller than the leading contribution from the Einstein equation (indeed, in this case, they vanish), the application of the effective theory is, by definition, valid regardless of how strong the gravitational wave is. This might seem a little confusing at first glance, because the solutions (1) allow h_ij(u), the tidal force, to be arbitrarily large and arbitrarily fast-varying.
There is no paradox here. The point is that h_ij(u) and its variations are frame-dependent; and, around every point in the plane wave geometry, one can always boost to a free-falling frame in which all components of the tidal force and its gradients are smaller than 1 in any specified mass unit. So they can all be made parametrically small compared to the mass scales suppressing the corrections in the action. This follows from the non-compactness of the Lorentz group and the fact above that no nonvanishing Lorentz scalar can be constructed locally from the Riemann tensor and its derivatives. To gain intuition for this, we now see it directly from the metric (1).
Along the locus {x^i = 0, i = 1, ..., d − 2}, the Brinkman coordinate basis is already a Lorentz frame at each point. A boost along the propagation direction, rescaling u and v oppositely (as a linear transformation in the tangent space at a given point), leaves the metric invariant but rescales every nonvanishing component of ∇_{α₁}...∇_{α_k} R_{μνρσ} by λ^{k+2}, by the previously mentioned sum rule. Hence, given any finite number of such tensors, we can always choose λ appropriately to make all of their components arbitrarily small. This finds the proper Lorentz frames point-wise along {x^i = 0}. Remember that the spacetime has a large isometry group, which acts on each hypersurface {u = u₀} transitively, and some of whose elements act as translations. So for any point P in the spacetime, there is always an isometry that brings P to a point Q in {x^i = 0, i = 1, ..., d − 2}. The pullback to P from Q of the appropriate Lorentz frame at Q gives the sought-after frame at P that makes all components of the tidal force and its gradients small.
The boost we performed to scale down the tidal force corresponds to speeding up in the direction in which the wave propagates. This elongates the duration of the plane wave and lowers its frequency of variation, both of which are frame-dependent scales. Furthermore, as shown above, no frame-independent scale exists at all that a local observer can define in this spacetime. This implies that the solutions (1) are valid for arbitrarily strong waves. On the other hand, as is well known, the field-theoretic description does break down, but only once we start asking questions about local physics on length scales approaching the cut-off scale.
To recapitulate: as long as we restrict ourselves to length scales above the cut-off, not only is the geometry (1) always valid as a classical solution to the effective action, but the application of the effective action itself is always valid in solving for these classical solutions.
⟨vac, in|T^ren_μν(x)|vac, in⟩ IN GENERALLY COVARIANT QUANTUM FIELD THEORIES

We ignored the presence of other fields in the last section by setting them to their values in a classical vacuum, which we assume to be a configuration of Minkowski space with a Poincaré-invariant profile for all the fields present. This is consistent at the classical level 9.
Quantum mechanically, local observables in the vacuum are no longer sharply peaked at any particular values. The consequences are severalfold. First, zero-point motions give rise to cut-off-dependent contributions to the effective Lagrangian density. These include, generically, a constant piece, acting effectively like a cosmological constant, and various other field- and curvature-dependent terms. Unless one works in a UV-finite theory like string theory, one can only determine the coefficients of these interactions by measurements. We will make no statements about these coefficients, except that we restrict ourselves to theories that admit (proper orthochronous) Poincaré-invariant Minkowski space vacuum(a). This implies, in particular, that the total cosmological constant vanishes, and that only operators which transform as (pseudo-)scalars may acquire vacuum expectation values.
Suppose now that we have experimentally determined all the couplings of the interactions in the effective theory and computed the ground state wave function of the quantum fields in the Minkowski vacuum. We ask the question: what happens to the quantum field vacuum as a train of plane gravitational waves passes by? As mentioned in the introduction, one a priori expects that particles would be excited, although a calculation specific to free fields suggests otherwise.
So we consider quantizing a general field theory 10 in the background (1), for which h_ij(u) has finite support [−T, T]. We specify the initial condition that at {u = u₀ < −T} the theory lives in the in-vacuum |vac, in⟩. To fully specify the system, we also need to impose boundary conditions along {u > u₀, v → −∞}.

9. That the other fields do not backreact on spacetime is clear, because T_μν vanishes in the classical vacuum. That the gravitational wave does not disturb the fields away from their vacuum values requires some qualification: we assume that covariant couplings to gravity linear in the fields vanish. This is automatically satisfied in the plane wave spacetime for particles with spin less than or equal to 3/2. In the presence of higher-spin particles, we would have to impose this extra assumption.
10. We exclude gravitons from this field theory, because the energy-momentum tensor of gravitons, constructed from h_μν, is not a tensor of the Lorentz group.
We require that the boundary condition at v → −∞ preserve the isometry of the spacetime. This implies, in particular, that no field and/or particle comes in from v → −∞ except the gravitational wave itself. We then analyze the conditions that ⟨vac, in|T^ren_μν(x)|vac, in⟩ must satisfy. Remember that in the Minkowski space portion of the spacetime (1), the isometries are part of the Poincaré group, which, by assumption, leaves |vac, in⟩ invariant. Combined with the boundary condition we imposed, this implies that ⟨vac, in|T^ren_μν(x)|vac, in⟩ (for convenience, we will denote this quantity by T_μν) is an invariant tensor under the isometry group. That is, T_μν is invariant under any transformation x → y = f(x) that satisfies (f*g)_μν = g_μν. That this holds not only in the before-wave region but everywhere requires some explanation. Let Û[f] be the operator that realizes the isometry transformation f in the quantum field theory. This operator is u-dependent, and its action on each constant-u hypersurface, which the isometry f preserves, is determined by the generating Killing vector field V_f, which, in turn, is determined by (3) or equivalently by (8). The transformed and original expectation values are related by the fact that T̂^ren_μν(x) is an operator that transforms as a tensor; the arguments of T̂ in this relation (points x and y) share the same value of the u-coordinate.
We claim that, for all values of u, Û[f](u) leaves the in-vacuum invariant. This is clear if |u| ≥ T, in which case it represents an element of the Poincaré group that preserves the null hyperplanes u = constant. It might seem less clear if |u| < T, but it is also true. The point is that, on each constant-u hypersurface, the Killing vector field V_f that generates f can be expanded in the basis of vector fields {∂_v, ∂_i, x^j ∂_v, i, j = 1, ..., d − 2} restricted to the same hypersurface. The corresponding operator Ô[V_f](u) can thus be expanded in terms of the u-independent operators associated with these basis fields. Since we know from the flat before-wave region that all the latter annihilate the in-vacuum, Ô[V_f](u) must do so as well. Hence Û[f](u) leaves the in-vacuum invariant for all values of u. Now we sandwich (12) between ⟨vac, in| and |vac, in⟩, simplify the right-hand side via (13), and produce (11). It follows from (11) that T_μν is annihilated by the Lie derivative along any V in the algebra (6). Writing out this equation explicitly, we find that for V = Z, ∂_v T_μν = 0 (15), and that for V one of the X's, a further set of conditions follows, in which, as before, i, j = 1, ..., d − 2 and repeated indices are summed over regardless of their vertical positions. Since the 2d − 4 solutions (b₁(u), ..., b_{d−2}(u)) that define the X-type isometries constitute a complete basis of the solutions to (4), the functions multiplying b_i(u) and b′_i(u) in (16) must vanish separately. Working out their consequences, we find strong restrictions on which coordinate(s) each component of T_μν is allowed to depend on; for example, T_uv can depend only on u. These all follow from solving the isometry constraints (14).
It is now clear that the invariance of ⟨vac, in|T^ren_μν(x)|vac, in⟩ under the full isometry group is very constraining: the expectation values of a d-dimensional second-rank symmetric tensor, that is, d(d + 1)/2 functions of d variables each, reduce to a single unknown function T_uv of a single variable u! This is certainly only possible because we started in the in-vacuum and imposed proper boundary conditions; any excitations in the initial state or in the incoming wave from v → −∞ would spoil this property.
To proceed further, we need some dynamical equation, which is in general hard to write down. There is a simple one: the covariant conservation of the energy-momentum-stress tensor, ∇^μ T_μν = 0. This condition is necessary for general covariance to be preserved at the quantum mechanical level [8]. Applying the results in (17), it simplifies considerably, and further application of (17) immediately shows that (19) gives a nontrivial constraint only when ν is u. Hence T_uv is a constant. What we have shown is that, up to an overall constant, there is precisely one covariantly constant symmetric second-rank tensor field in the background (1) that is invariant under the full isometry (6). Of course, the metric tensor itself satisfies these conditions; hence T_μν = constant × g_μν. Since we started from a Minkowski vacuum in which T_μν ≡ 0 for u < −T, this constant must vanish 11.
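The constraint chain can be summarized as follows (a reconstruction in standard notation; the component-by-component form is our assumption about the dropped displays):

```latex
% Isometry constraints: only T_{uv} survives, depending on u alone
\[
  \partial_v\, T_{\mu\nu} = 0, \qquad
  T_{vv} = T_{vi} = T_{ij} = T_{ui} = T_{uu} = 0, \qquad
  T_{uv} = T_{uv}(u) .
\]
% Covariant conservation then gives a nontrivial condition only for \nu = u:
\[
  \nabla^{\mu} T_{\mu u} \;=\; \frac{d\,T_{uv}}{du} \;=\; 0
  \;\;\Longrightarrow\;\; T_{\mu\nu} = \mathrm{const}\times g_{\mu\nu}
  \;\;\Longrightarrow\;\; T_{\mu\nu} \equiv 0 ,
\]
% the last step using T_{\mu\nu}=0 in the flat region u < -T.
```

The vanishing of the flat-region boundary value is what fixes the constant to zero.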
We showed that ⟨vac, in|T^ren_μν(x)|vac, in⟩ ≡ 0 in the gravitational plane wave background. What does this mean? Had the in-vacuum evolved into a (locally discernible) excited out-state in the future flat region, this quantity would have been non-vanishing. Hence, for any local observer after the wave, the field appears to remain in a vacuum state. Put another way, in any finite (however large) region of space, the energy and momentum dissipated into the quantum field in that region by the gravitational wave vanish exactly.
On the other hand, the gravitational significance of T^ren_μν is not at all clear at the conceptual level. Plausible statements have been made in the literature suggesting that it be fed back into the Einstein equation to further correct the background metric in some sort of semiclassical approximation, but none has been made precise. Time-dependent backgrounds in string theory will hopefully be understood well enough to clarify its physical significance in the future. Nevertheless, we note that, since the field-theoretic aspects of the computation of T^ren_μν are well-defined, the result T^ren_μν = 0 should be taken seriously. It may also be reassuring to note that, incidentally, this result nullifies further concerns of backreaction on the metric at the semi-classical level.
REMARKS
When solving the free-field wave equation in the plane gravitational wave spacetime [4], one finds that a monochromatic positive-frequency solution in the before-wave region evolves into a superposition of positive-frequency solutions after the wave passes by 12. That is, the creation operators do not mix with annihilation operators, but they do mix with themselves. In a free field theory, for which the physical ground state is the same as the Fock space ground state, one concludes that the field stays in the vacuum, undisturbed. After interaction is turned on, one expects the physical ground state to spread out in the Fock space. So it may appear that mixing among the positive-frequency solutions themselves would generically lead to volume-extensive particle production. We find the contrary. The point has to do with the vacuum structure in light cone quantization. Remember that we sliced the geometry by constant-u hypersurfaces, which are light-like. The rules of light cone quantization for general field theories are not entirely clear, but it is generically expected that the physical vacuum, modulo the zero-mode problem, is the same as the Fock space vacuum [11], as a result of the positivity of the longitudinal light cone momentum. This vacuum, furthermore, is not affected when the light cone Hamiltonian becomes (light cone) time-dependent, again modulo problems associated with the zero modes. In a simple case like the λφ⁴ theory, the light cone quantization in the plane wave spacetime can be carried out and explicitly shows that no particle production arises. On the other hand, our argument in the previous section holds for general field theories: it does not depend on any specifics of light cone quantization, but is consistent with expectations derived from it.

11. We thank K. Krasnov for pointing out the reference [9], which showed that the cosmological constant, if zero, is not renormalized by pure graviton loops up to two loops. If such a result fails to hold at higher loops and/or after coupling to matter, T_μν = constant × g_μν by itself means that no dissipation of energy and momentum into the matter sector occurs.
12. To see this, one needs to transform the equations (3.1)–(3.3) of [4], which are given in the Rosen coordinates associated with the before-wave region, into the global Brinkman coordinates. One should also note that the singularities of the mode solutions do not represent a fundamental obstruction to quantizing the field theory; they disappear when wave packets are considered that have finite support in directions transverse to the propagation of the wave. On the other hand, this does indicate formation of singularities when two infinite plane waves are collided [10].
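Schematically, in standard Bogoliubov notation (not taken from the paper's own equations), the mode mixing just described reads:

```latex
% Out-modes expanded in in-modes:
\[
  a^{\mathrm{out}}_k \;=\; \sum_j \left( \alpha_{kj}\, a^{\mathrm{in}}_j
      + \beta_{kj}\, a^{\mathrm{in}\,\dagger}_j \right),
\]
% with particle production in the in-vacuum controlled by the beta coefficients:
\[
  \langle \mathrm{vac,in} |\, N^{\mathrm{out}}_k \,| \mathrm{vac,in} \rangle
  \;=\; \sum_j |\beta_{kj}|^2 .
\]
```

For the plane wave, β_kj = 0 while α_kj need not be diagonal: positive-frequency modes mix only among themselves, so the in-vacuum contains no out-particles.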
The message to take away is that the propagation of gravitational waves in stable (proper orthochronous) Poincaré-invariant Minkowski spacetime vacua is robustly characterized by the classical solutions of general relativity. Vacuum fluctuations of field-theoretic origin, regardless of their properties, do not modify this behavior.
Exceptions, however, may arise if Lorentz invariance is spontaneously broken in a Minkowski vacuum. One such class of examples was constructed, at low energies, in [12]. After coupling to gravity, the goldstone field π(x) of these theories violates the equivalence principle and allows sources to anti-gravitate; π(x) also develops a Jeans-like instability at large distances around the flat background. It may be interesting to study the propagation of gravitational waves in this class of Lorentz-violating vacua, both at the classical and at the quantum level.
For practical purposes, it is perhaps important to study the propagation of a gravitational wave in a gas of particles (see e.g. [13][14][15] for some earlier results).
Aging, Cancer and Immunity
Cancers are frequently diagnosed in the elderly. Immunosenescence, the gradual deterioration of the immune system brought on by natural age advancement, lies at the crossroads of aging, immunity and the increasing frequency and severity of cancer. Monoclonal antibodies targeting the immune checkpoint molecules CTLA-4, PD-1 or PD-L1 are promising anticancer therapeutics in multiple cancer subtypes, generating remarkable and long-lasting clinical responses. These immune checkpoint blockers (ICBs) have already obtained approval for the treatment of patients with metastatic melanoma, advanced/refractory non-small cell lung cancer and renal cell cancer. ICBs can not only enhance immune responses against cancer cells but can also lead to inflammatory side effects called immune-related adverse events (irAEs). As none or only a small number of older patients were enrolled in most ICB studies, it remains difficult to confirm the impact of ICBs on the elderly. The clinical specificities of older patients (co-medications, comorbidities and reduced functional reserve) and immunosenescence may affect the efficacy and tolerance of ICBs in this population. However, results from meta-analyses of ICB efficacy are very encouraging, suggesting that older patients will benefit from the ICB revolution in oncology without increased toxicity.
Introduction
It is well established that the occurrence and development of many diseases, including cancers, are associated with aging. In recent years, an increasing number of researchers have reached a consensus that immune factors play increasingly important roles in physical degeneration and its pathologic changes, and that these factors may be vital targets for assessment and treatment in aged patients with tumors. To further the understanding of geriatric oncology, we provide here a brief overview of the relationship between aging, cancer and immunity, together with recent evidence on the immune management of aged patients with tumors.
Hypothesized and proven links between aging and cancer
Aging is characterized by a progressive loss of physiological integrity, leading to impaired function. This deterioration is the primary risk factor for major human pathologies, including cancer, cardiovascular disorders, neurodegenerative diseases and diabetes (1,2). Increasing evidence has revealed that the incidence of cancer rises with aging, which can be attributed to a multitude of age-associated changes, including dysregulation of the immune system (3). Advanced age is an important risk factor for cancer and is associated with poor prognosis (4). Approximately half of all malignancies are diagnosed in patients older than 65 years.
Cancer and aging can be regarded as two different manifestations of the same underlying process, namely the accumulation of cellular damage (1). Several genetic or pharmacological manipulations are capable of modulating the effects of both cancer and aging. For example, systemic downregulation of the insulin-like growth factor 1 (IGF-1) signaling pathway by overexpression of the PTEN tumor suppressor could increase longevity, delay aging, and confer protection against cancer in mice (4,5). Similarly, reduced expression of the c-Myc oncogene could provide the elderly with resistance to several age-associated pathologies, such as osteoporosis, cardiac fibrosis and immunosenescence, and thereby increase their life expectancy (5).
Age-associated changes in cell-mediated immunity
Aging is a complex process that deeply affects the immune system. The decline of the immune system with age is reflected in the increased susceptibility to infectious diseases, poorer response to vaccination, increased prevalence of cancer, autoimmune and other chronic diseases.
The immune system is a complex system in which a multitude of different cells throughout the organism interact with each other, either directly or through a variety of soluble mediators, to achieve a thorough defense of the organism against foreign attacks while maintaining control of correct cell proliferation within the body. The mechanisms of the immune response have been divided into an innate and an adaptive component. The innate response comprises both the anatomical and biochemical barriers and the unspecific cellular response mediated mainly by monocytes, natural killer cells and dendritic cells. The adaptive response provides an antigen-specific response mediated by T and B lymphocytes. Both parts of the immune response are affected by the aging process.
Immunosenescence
Immunosenescence is the term given to age-associated impairments of the immune system at both the cellular and serological levels, affecting the process of generating specific responses to foreign and self-antigens. Three major theories may explain immunosenescence, known as the autoimmunity, immunodeficiency and immunodysregulation theories (6).
The autoimmune theory
With increasing age, the ability of the immune system to differentiate between invaders and normal tissues diminishes, and immune cells begin to attack normal body tissues. Arthritis (7) and autoimmune thyroid disease (8) are typical examples.
The immune deficiency theory
As a person ages, the immune system is no longer able to defend the body from foreign invaders and detrimental changes result.
The immune dysregulation theory
With aging, multiple changes occur in the immune system, disrupting the regulation between the multiple components of the immune process and leading to the progressive destruction of the body's cells.
Immunosenescence is a complex process that affects the immune system as a whole and is reflected in the organism's reduced capability to respond adequately to pathogens. There is no single impairment to blame; instead, it is a multilevel dysfunction that affects individuals to different extents. As a result, elderly people have increased susceptibility to infections (9), decreased responses to vaccination (10) and poorer responses to known and new antigens. Additionally, aged individuals tend to present a chronic low-grade inflammatory state that has been implicated in the pathogenesis of many age-related diseases (atherosclerosis, Alzheimer's disease, osteoporosis, diabetes) (11)(12)(13).
Generally, the increased prevalence of cancer has been associated with an age-related impairment of the immune surveillance function (14,15).
Hypothesized and proven cellular and molecular mechanisms for aging, cancer, and immunity
The relationship between the immune system and human cancer is dynamic and complex (16). The immune system plays a dual role in cancer development. It can not only suppress tumor growth by destroying cancer cells and inhibiting their outgrowth but also promote tumor progression either by selecting for tumor cells that are more fit to survive in an immunocompetent host or by establishing conditions within the tumor microenvironment that facilitate tumor outgrowth (17). Individual human tumors harbor a multitude of somatic gene mutations and epigenetically dysregulated genes, the products of which are potentially recognizable as foreign antigens (18). The immune system, as one of the first lines of defense, must recognize danger signals and respond accordingly (19). Immune escape and immunotolerance are considered as the main mechanism to be linked to cancer development (19)(20)(21).
Targeted immunotherapy as a potential treatment for cancer has made significant strides over the past decade based on the concept of underlying principles of tumor biology and immunology (22,23).
Cancer immunotherapy comprises a variety of treatment approaches, including antitumor monoclonal antibodies, cancer vaccines, adoptive transfer of ex vivo activated T and natural killer cells, and administration of antibodies or recombinant proteins that either co-stimulate immune cells or block immune inhibitory pathways, as with immune checkpoint blockers (ICBs) (16,24,25).
Monoclonal antibodies
Monoclonal antibodies (mAbs) have had a major impact on the practice of clinical oncology. The majority of mAbs approved for clinical use contain a human immunoglobulin (Ig) G1 heavy chain (16). Although much of the antitumor effect of mAbs results from the cytotoxic effects of the drugs, it is likely that the immune response also plays a role (26). The immune response, and in particular antibody-dependent cell-mediated cytotoxicity (ADCC), has been proved to be a major mechanism of action via which mAbs exert their therapeutic effects. Studies in vitro, animal models, and correlative clinical investigations indicate that the interaction between mAb and Fc receptor (FcR) contributes to the clinical antitumor activity of rituximab (26). Patients with lymphoma and a polymorphism encoding high-affinity FcR (more specifically, FcγRIII) have a better response rate to single-agent rituximab than do patients with low-affinity FcR (27)(28)(29). Cancers growing in mice lacking activating FcR fail to respond to anticancer mAbs, including rituximab and trastuzumab (30). Trastuzumab can alter human epidermal growth factor receptor 2 signaling; its ability to mediate ADCC likely also contributes significantly to its antitumor activity (31). This also applies to other mAbs that target antigens on the surface of cancer cells, such as other epidermal growth factor receptor family members.
Adoptive cell transfer
Adoptive cell transfer (ACT) is a form of immunotherapy in which antitumor T cells are manipulated ex vivo and then infused into the patient. One of the examples of ACT was bone marrow transplantation (BMT) for hematologic malignancies (32,33). In the 1980s, with the discovery that human T cells isolated from peripheral blood, tumor-draining lymph nodes, or tumor tissue could manifest selective antitumor reactivity in vitro, the cancer immunotherapy field undertook to develop specifically targeted ACT protocols.
Melanoma tumor-infiltrating lymphocytes (TILs) are a rich source of tumor-specific CD4+ and CD8+ T cells relative to other malignancies (34). Autologous unfractionated TILs expanded in vitro and infused into patients with metastatic melanoma, in conjunction with systemic IL-2, have mediated objective responses in 34%-50% of patients (35,36). Combined with more intense chemoradiotherapy preconditioning regimens, objective clinical response rates of 49%-72% were observed in patients with melanoma receiving highly selected TILs (37).
Chimeric antigen receptors (CARs) were engineered and used to overcome limitations of intracellular antigen processing imposed by ACT with conventional T cells. CARs are single-chain constructs composed of an Ig variable domain (extracellular) fused to a T cell receptor (TCR) constant domain; when introduced into T cells, they combine the antigen-recognition properties of antibodies with T-cell lytic functions, broadening the spectrum of tumor antigen recognition (38). Encouraging early clinical results with second-generation anti-CD19 CARs have been observed in patients with lymphoma (39,40). However, the high affinity for target cells conferred by the Ig component of CARs, combined with amplified nonphysiologic T-cell signaling in second- and third-generation constructs, has been associated with serious adverse events (41). Reducing on-target toxicities while maintaining antitumor efficacy is an important goal of current investigations.
Vaccine
Long-standing interest in cancer vaccines comes from the tremendous successes of prophylactic vaccines for infectious diseases and is based on immunobiology demonstrating the capacity of T cells to recognize target antigens in the form of peptides complexed to surface MHC molecules. Because immunogenic peptides can be derived from proteins in every cellular compartment, essentially any protein has the potential to be recognized by T cells as a tumor-specific or tumor-selective antigen. Successful vaccination marshals multiple immune effector arms, including CD4+ and CD8+ T cells, to generate a potent antitumor response (42).
Despite anecdotal reports and promising phase I and II clinical trial results with cancer vaccines evaluated since the 1960s, a string of failures in randomized clinical trials has bred significant skepticism as to the ultimate clinical value of therapeutic cancer vaccines (43)(44)(45). However, in the past few years, a number of important successes with cancer vaccines have dramatically altered the perception of their potential value.
The first successful randomized phase III cancer vaccine trial used a putative dendritic cell (DC) vaccine-sipuleucel-T-to treat patients with advanced hormone-resistant prostate cancer (46). These vaccines are based on the concept that optimal T-cell activation requires antigen processing and presentation by a specialized cell-the DC-with the capacity to concomitantly deliver strong co-stimulatory signals in the form of membrane ligands and secreted cytokines.
Recently, two positive randomized cancer vaccine trials were reported. A melanoma vaccine consisting of a modified gp100 peptide plus systemic IL-2 was compared with systemic IL-2 alone in patients with advanced melanoma (47), yielding a statistically higher objective response rate (ORR) in the vaccine plus IL-2 arm, improved progression-free survival (PFS), and improved overall survival (OS) (P=0.06). Of note, the same peptide vaccine, when combined with anti-cytotoxic T-lymphocyte antigen 4 (CTLA-4), demonstrated no improvement in patients with advanced melanomas relative to anti-CTLA-4 alone (48), underscoring the importance of context when evaluating vaccines as components of combinatorial therapies. Another trial, comparing a poxvirus prostate-specific antigen prime/boost vaccine regimen plus GM-CSF versus non-antigen-expressing viruses in patients with advanced prostate cancer, demonstrated a significant (8 months) OS benefit for the vaccine arm but no effect on PFS or ORR (49).
With the relatively recent realization that cancer exerts an immune-tolerizing influence in the host, new trends in immunotherapy have focused on methods to interrupt tolerogenic pathways and reactivate endogenous immunity against unique as well as shared tumor antigens.
Immune checkpoint inhibitors
ICBs that release inhibitory constraints on T-cell responses have significantly enhanced antitumor immunity (50,51). CTLA-4 (also known as CD152), an inhibitory receptor with the ligands CD80 and CD86 that acts as a global immune checkpoint by down-modulating the initial stages of T-cell activation, was the first clinically validated checkpoint pathway target (25,51,52). Table 1 summarizes the ICBs approved in the clinic. CTLA-4 is a coinhibitory receptor whose natural function is to down-modulate immunity at the appropriate time, avoiding collateral damage to normal tissue. Although there is no tumor specificity in the expression of B7-1 or B7-2, potent antitumor properties of CTLA-4-blocking mAbs were nonetheless observed in preclinical models and then validated in the clinic (53). Two anti-CTLA-4 blocking mAbs, ipilimumab (Bristol-Myers Squibb, Princeton, NJ) and tremelimumab (Pfizer, New York, NY), demonstrated similar properties in patients with advanced solid tumors in early-phase clinical trials, mediating objective response rates of 10% to 15% in patients with metastatic melanoma and RCC (54)(55)(56). Ipilimumab (Yervoy; Bristol-Myers Squibb) was recently approved as first-line therapy for patients with metastatic melanoma, based on phase III trials in which this drug, administered alone or in combination with a gp100 peptide vaccine or with dacarbazine, demonstrated superior OS and PFS compared with vaccine alone (48) or dacarbazine alone (57), respectively. Approximately 20% of patients in both studies achieved long-term survival benefit; this exceeded the reported ORRs of 10% to 15%, suggesting that, as with other immunotherapies, ipilimumab may induce a state of equilibrium between the immune system and cancer, resulting in prolonged disease stabilization but not regression in some patients. Programmed cell death-1 (PD-1, also known as CD279) is another inhibitory receptor.
Three anti-PD-1 mAbs are currently in the clinic for cancer therapy: MDX-1106/BMS-936558 (Medarex, Princeton, NJ; Bristol-Myers Squibb), CT-011 (CureTech, Yavne, Israel) and MK-3475 (Merck, Whitehouse Station, NJ). A first-in-human phase I trial of intermittent dosing showed durable objective responses in 3 of 39 patients with treatment-refractory metastatic solid tumors (melanoma, RCC, and colorectal cancer), and clinical responses correlated with pretreatment expression of B7-H1/PD-L1 in the tumor (24). An ongoing trial administering MDX-1106 biweekly has shown preliminary evidence of durable objective tumor responses in approximately one third of patients with advanced melanoma and RCC. Of interest, objective tumor responses to MDX-1106 have also occurred in patients with refractory non-small cell lung cancer, highlighting activity against a tumor previously considered nonimmunogenic. A blocking antibody against PD-L1, the major ligand for PD-1 (MDX-1105/BMS-936559), is also in a phase I clinical trial in patients with advanced solid tumors, including melanoma, RCC, and non-small cell lung cancer.
Overall, the exciting revolution of ICBs development in oncology arouses great expectations in cancer patients.
Hypothesized and proven links among aging, immunity, and cancer
Cancer is primarily a disease of older adults (4,58,59). Monoclonal antibodies targeting the immune checkpoint molecules CTLA-4, PD-1 or PD-L1 are emerging as promising anticancer therapeutics in multiple cancer subtypes, with improved efficacy and better safety profiles compared with traditional cytotoxic drugs (60). ICBs have already obtained approval for the treatment of patients with metastatic melanoma, advanced/refractory non-small cell lung cancer and renal cell cancer. While there are no trials specific to the elderly, ICB treatment of older patients presents a unique challenge: comorbidities and age-related impairment of the immune system might affect the efficacy and tolerance of ICBs. The current literature does not allow us to draw definitive conclusions regarding the role of ICBs in older adults.
In 2016, a meta-analysis of randomized ICB trials studied the efficacy of ICBs in older patients compared with young adults (61). A total of 5265 patients (ICBs: 2925; controls: 2340) were included in the analysis, from three ipilimumab trials, one tremelimumab trial, four nivolumab trials and one pembrolizumab trial. The underlying malignancies were melanoma (5 trials), non-small cell lung cancer (2 trials), prostate cancer (1 trial) and renal cell carcinoma (1 trial). Eight trials used 65 years and one trial used 70 years as the age cut-off for subgroup analyses. A total of 4725 patients from eight trials were included in the analysis of hazard ratios (HRs) for OS. The patients were dichotomized into younger and older groups with an age cut-off of 65-70 years. For younger patients, the pooled HR for OS showed a significant difference between ICBs and controls (HR, 0.75; 95% CI, 0.68-0.82; P < 0.001). For older patients, ICBs also significantly improved OS (HR, 0.73; 95% CI, 0.62-0.87; P < 0.001) in comparison with controls. There was no statistically significant difference between the younger and older subgroups in the pooled HRs for OS (P = 0.96).
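Pooled HRs of this kind are typically obtained by fixed-effect inverse-variance weighting on the log-HR scale. A minimal sketch in Python (the per-trial numbers below are hypothetical illustrations, not data from the cited meta-analysis):

```python
import math

def pooled_hazard_ratio(hrs, cis, z=1.96):
    """Fixed-effect inverse-variance pooling of hazard ratios.

    Each study's standard error on the log scale is recovered from its
    95% CI as se = (ln(upper) - ln(lower)) / (2 * z), and studies are
    weighted by 1/se^2.  Returns (pooled HR, lower CI, upper CI).
    """
    weights, log_hrs = [], []
    for hr, (lo, hi) in zip(hrs, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        weights.append(1.0 / se ** 2)
        log_hrs.append(math.log(hr))
    pooled_log = sum(w * lh for w, lh in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))

# Two hypothetical per-trial HRs for OS with their 95% CIs:
hr, lo, hi = pooled_hazard_ratio([0.70, 0.80], [(0.60, 0.82), (0.65, 0.98)])
```

The pooled estimate lands between the individual HRs, closer to the more precise (narrower-CI) trial, which is the defining behavior of inverse-variance weighting.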
ICBs may be responsible for specific toxicities called "immune-related adverse events" (irAEs) (62)(63)(64)(65). These irAEs are related to the infiltration of normal tissues by activated T cells responsible for autoimmunity. Fortunately, most serious immune-related adverse events are individually rare (<1%). Immune-related side effects may be more challenging in older patients due to reduced functional reserve and age-associated comorbidities. Moreover, immunosenescence could affect the efficacy and/or toxicity of ICBs (66). Paradoxically, immunosenescence is also coupled with higher concentrations of inflammatory cytokines, a state called "inflammaging". Finally, older patients are known to have a higher prevalence of autoantibodies (67)(68)(69), and one can expect that ICBs may reveal subclinical autoimmune diseases.
Using ipilimumab in elderly melanoma patients, Sileni reported that patients over 70 years old presented irAEs with a frequency similar to that of the overall population (70). Despite speculation about the specificities of older adult immunity, the current safety data appear similar to those of the population at large.
Across the different approved ICBs, no overall differences in safety have been reported in elderly patients (≥65 years old) and no dose adjustment is recommended (60,71). The currently approved ICBs have not been evaluated in patients with severe renal or hepatic impairment. Nevertheless, no dose adjustment is recommended for patients with mild or moderate renal impairment (i.e. ≥30 ml/min creatinine clearance) or mild hepatic impairment (i.e. total bilirubin above the upper limit of normal up to 1.5 times that limit).
As older patients with cancer are often taking medications for other comorbidities, it is important to note that the currently approved ICB monoclonal antibodies are not metabolized by cytochrome P450 enzymes; therefore, enzymatic competition is not expected. The use of corticosteroids may hypothetically interfere with ICB efficacy, and it is recommended to avoid them at baseline. Patients treated with anticoagulants or anti-aggregants must be carefully monitored in case of colitis symptoms (risk of gastrointestinal hemorrhage) or autoimmune thrombocytopenia.
In older adults, tolerance of irAEs should be carefully monitored, as associated comorbidities may decompensate more easily. Moreover, the use of some symptomatic treatments (such as antihistamines for pruritus) or corticosteroids may expose older patients to iatrogenic events such as worsening of diabetes, mental status disturbance, hypertension and delirium.
Overall, ICBs such as anti-CTLA-4 and anti-PD-1/PD-L1 are already part of the approved treatments for patients with advanced melanoma (72), non-small cell lung cancer and RCC (73). As most ICB studies have involved a low number of older patients, it remains difficult to confirm the impact of these new therapeutics in the elderly. One could expect that the clinical specificities of older patients (comorbidities, co-medications, reduced functional reserve) and immunosenescence may affect ICB efficacy and tolerance in this population. However, preliminary data on ICBs in the literature are very encouraging and suggest that older adults will benefit from the ICB revolution in oncology without increased toxicity.
Rheological and Flocculation Analysis of Microfibrillated Cellulose Suspension Using Optical Coherence Tomography
Featured Application: Optical coherence tomography brings excellent new possibilities for improving and extending existing rheological measurement devices and methods.

Abstract: A sub-micron resolution optical coherence tomography device was used together with a pipe rheometer to analyze the rheology and flocculation dynamics of a 0.5% microfibrillated cellulose (MFC) suspension. The bulk behavior of the MFC suspension showed typical shear thinning (power-law) behavior. This was reflected in a monotonously decreasing floc size when the shear stress exceeded the yield stress of the suspension. The quantitative viscous behavior of the MFC suspension changed abruptly at a wall shear stress of 10 Pa, which was reflected in a simultaneous abrupt drop of the floc size. The flocs were strongly elongated at low shear stresses. At the highest shear stresses, the flocs were almost spherical, indicating a good level of fluidization of the suspension.
Introduction
Microfibrillated cellulose (MFC) is a material of high interest due to its sustainability and biodegradability, and unique properties such as mechanical robustness, barrier properties, large surface area, and lightness [1,2]. Over the past decade, there has been explosive growth in MFC research, including improved MFC production technologies, surface functionalization, characterization techniques, composites processing, self-assembly, optical properties, and barrier properties. The applications of MFC are already numerous, including supercapacitors, transparent flexible electronics, batteries, barrier/separation membranes, and antimicrobial films [3].
A frequently noted issue in the processing of MFC suspensions is their complex rheological behavior. MFC suspensions tend to form a strong gel, which shows, for example, yield stress, shear thinning, hysteresis, and thixotropy already at low mass concentrations. Rheological information is critical in the design and operation of, for example, pumping, mixing, storage, and extrusion processes. MFC is also commonly used as a rheology modifier, for example in cements, inks, drilling fluids and cosmetics. Thus, the bulk rheology of MFC suspensions has been a popular subject of discussion [4][5][6].
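The combination of yield stress and shear thinning described above is often summarized with a Herschel-Bulkley model, τ = τ_y + K·γ̇^n with n < 1. A short illustrative sketch (the parameter values are invented for illustration, not fitted to MFC data):

```python
def herschel_bulkley_stress(shear_rate, tau_y, K, n):
    # Shear stress of a Herschel-Bulkley fluid: tau = tau_y + K * gamma^n.
    # tau_y is the yield stress; n < 1 gives shear thinning.
    return tau_y + K * shear_rate ** n

def apparent_viscosity(shear_rate, tau_y, K, n):
    # Apparent viscosity eta = tau / gamma; for a yield-stress,
    # shear-thinning fluid it decreases monotonically with shear rate.
    return herschel_bulkley_stress(shear_rate, tau_y, K, n) / shear_rate

# Illustrative parameters: yield stress 2 Pa, consistency 1 Pa*s^n, n = 0.4
etas = [apparent_viscosity(g, 2.0, 1.0, 0.4) for g in (0.1, 1.0, 10.0, 100.0)]
```

Evaluating `etas` shows the apparent viscosity falling steeply with shear rate, the qualitative signature of the gel-like MFC behavior discussed above.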
Due to a high aspect ratio of fibres and strong interfibrillar forces, MFC fibres flocculate easily and form a highly entangled network already at relatively low concentrations. Thus, in addition to rheology, the flocculation tendency of MFC fibres has been of interest [7][8][9]. As summarized in [9], at the macroscale, that is, at the rheometer scale, changes in the floc structure of MFC correlate with a change in the shear stress. At low shear stress, flocs are attached to each other. When shear stress is increased, the floc structure starts to yield via flocs separating from each other. At high shear stresses, fibrils flow in individual, detached flocs, the size of which is inversely proportional to the shear stress.
Previously the flocculation of MFC has been studied in a transparent cylindrical rheometer geometry [7][8][9]. Here, we analyze the flocculation of a MFC suspension in a pipe flow, which is a more realistic geometry for many practical applications. The flocculation measurements were performed when preparing our recently published article Ref. [10]. In that paper, we used a combination of pipe flow, pressure loss measurement, and a high-speed, sub-micron resolution Doppler optical coherence tomography (DOCT) device for investigating the rheology of 0.5% MFC suspension. DOCT was used in measuring the stationary velocity profiles of the MFC flow in the near-wall region of a straight tube. In addition to analysing the bulk rheology (yield stress and viscous behavior) using the concept of velocity profiling rheometry, the wall/depletion layer dynamics of the suspension was studied there in detail. The novel flocculation analysis presented in this paper is based on the structural information obtained simultaneously with the velocity information. The results in [10] that are relevant to this study are briefly presented.
Materials and Methods
The microfibrillated cellulose sample was prepared from never-dried bleached kraft birch pulp via grinding three times in a supermasscolloider (Masuko Sangyo Co. Ltd., Kawaguchi, Saitama-pref., Japan). Prior to grinding, the pulp was changed to its sodium form and washed with deionized water, to obtain an electrical conductivity less than 10 µS/cm, according to a procedure introduced by [11]. The dry matter content after grinding was 2 wt %. For the rheological experiments, MFC samples were diluted with deionized water to a mass concentration of 0.5 wt %. An image of the MFC fibers is shown in Figure 1.
Optical coherence tomography (OCT) is a well-established technique introduced in 1991 [12]. It is a light-based imaging method, which enables non-contact, micron-scale spatial resolution measurement of a scattering material. OCT uses interference of a low coherence light to record depth-dependent reflectivity profile (A-scan). By lateral scanning, 2D cross-sectional and 3D volumetric images can be generated. In addition to structural imaging, velocity information of the moving structures can be retrieved simultaneously by utilizing the Doppler effect principle. This imaging mode is often referred to as Doppler OCT, or DOCT [13,14].
The DOCT method enables direct measurement of the flow velocity profiles of turbid and opaque fluids with a high spatial resolution and high sampling rates [15][16][17]. Furthermore, DOCT appears capable of very accurate measurement of velocity profile very close to a channel wall [18]. Due to the high sampling rate (which varies from tens to some hundreds of kHz), DOCT can be utilized not only on laminar, but also on turbulent flows [19]. When DOCT is combined with pressure loss (e.g., pipe flow) or shear stress (e.g., rotational rheometers) measurements, velocity profiling (i.e., calculation of local viscosities of the studied fluid) becomes possible. Recently, it has been shown that DOCT is a great tool to be used in rheological measurements [20][21][22] and well suited to study the complex rheology of MFC suspensions [9,10,23].
The experimental setup is shown in Figure 2. The measurement unit consisted of an optical grade glass pipe with an inner diameter of D = 8.6 mm. A container filled with the MFC suspension was connected to the pipe with a rubber hose and attached to a compressed air source via a pressure regulator. The suspension flow was controlled with both a manual valve after the glass pipe and the set overpressure in the container. The flow was in all cases both laminar and fully developed. The pressure gradient ∇P in the pipe was acquired with a differential pressure sensor (2051, Emerson Electric, St. Louis, MO, USA); the probes were located at distances of L1 = 52D and L3 = 150D from the pipe inlet. The pressure gradient was used to calculate the wall shear stress τ_w = D∇P/4. For each flow rate, 50,000 DOCT A-scans were acquired at a distance of L2 = 110D from the pipe inlet. A laboratory-built spectral domain DOCT device was used; a detailed device description can be found in refs. [24,25]. This device has an axial resolution of 0.9 µm in water. The submicron resolution was achieved by combining a custom OCT spectrometer (designed for the spectral region of 400-800 nm) with an ultra-broadband supercontinuum laser source (SuperK Extreme EXB-1, NKT Photonics, Birkerød, Denmark). The maximum scanning depth of the device is 365 µm in water, and the maximum scanning rate is 123 kHz. The accuracy of the OCT setup has been verified in microfluidic flow conditions in ref. [25], where the volumetric flow rate determined from the DOCT velocity profile deviated by 6% from the set volumetric flow rate. The pipe rheometer configuration and the DOCT measurements are presented in more detail in ref. [10].
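As a sketch of this bookkeeping, the wall shear stress can be computed from a measured pressure drop between two taps; the 1.6 kPa pressure drop below is an illustrative number, not a measured value:

```python
# Wall shear stress from a pressure drop in fully developed pipe flow,
# tau_w = D * grad(P) / 4. The 1.6 kPa pressure drop is illustrative.
D = 8.6e-3                    # inner pipe diameter [m]
L1, L3 = 52 * D, 150 * D      # pressure tap positions from the pipe inlet [m]

def wall_shear_stress(delta_p, tap_distance):
    """tau_w = D * grad(P) / 4 with grad(P) = delta_p / tap_distance."""
    return D * (delta_p / tap_distance) / 4.0

tau_w = wall_shear_stress(1.6e3, L3 - L1)   # Pa
print(f"tau_w = {tau_w:.2f} Pa")
```

With the yield stress of 3.4 Pa quoted later in the text, such a reading would correspond to a yielded, shear-thinning flow.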
(Figure 2 caption: The pressure measurement taps are located at L1 = 52D and L3 = 150D from the pipe inlet; the OCT is located at L2 = 110D from the pipe inlet. A computer-controlled scale is used for the mass flow rate measurements.)
Figure 3 shows an example of the measured velocity and amplitude signals. Velocity data could be obtained, in the best case, up to 200 µm from the pipe wall, excluding the immediate vicinity of the wall. The closest point relative to the pipe wall having reliable velocity data was estimated to be 2 µm, below which the profiles were affected by the signal originating from the wall.
Results and Discussion
The velocity profiles obtained could be characterized in the following way. For the lowest flow rates, when the wall shear stress was below the yield stress τ_y = 3.4 Pa of the MFC suspension [10], the velocity profile corresponded to a pure plug flow in the whole pipe, excluding a yielding marginal wall layer a few microns in thickness. When the wall shear stress exceeded the yield stress, the velocity profiles consisted of three distinctive parts (see Figure 3). In the outer (bulk) region, at distances greater than 20 µm, the shear rate was small. In the region of 2-20 µm the velocity profile was rather steep and approached zero towards the wall. In the immediate vicinity of the wall (distance 0-2 µm), the velocity dropped abruptly to zero. The most natural explanation for the observed behavior of the velocity profile was the development of a consistency profile in the pipe, caused by wall depletion [26], when the distance from the wall was smaller than 20 µm.
The measured velocity profiles were fitted by an empirical formula in which y is the distance from the wall and γ̇_w^a, u_s^a, u_s, and λ_w are free parameters. Parameter γ̇_w^a is the apparent shear rate at the wall, u_s^a and u_s are the apparent slip velocities, and λ_w is the characteristic thickness of the (apparent) slip layer. The local viscosity of the suspension could then be calculated from μ(y) = τ(y)/γ̇(y), where y is the distance from the pipe wall, τ(y) = τ_w(D − 2y)/D is the local shear stress, and γ̇(y) = du/dy is the local shear rate. Outside the wall depletion layer (y > 20 µm), but still close to the wall (y < 200 µm), the shear rate and shear stress are approximately γ̇(y) ≈ γ̇_w^a and τ(y) ≈ τ_w, respectively.

Figure 4 shows the viscosity μ ≈ τ_w/γ̇_w^a of the MFC suspension in the bulk (y > 20 µm). Above its yield stress, the well-known shear thinning (power-law) behaviour of MFC suspensions is evident in Figure 4 (5-15 Pa). This behaviour is also typical for fibre suspensions and is believed to be due to adhesive contacts between the fibres that are broken by the shear forces when the shear rate increases [27,28]. As a result, the (floc) structure of the MFC suspension changes [29,30]. Notice that close to and below the yield stress the viscosity values are large, and there are strong fluctuations in their values. The large values are due to γ̇_w^a being close to zero when the suspension is non-yielded. The strong variation in the viscosity values is due to continuous break-up and recovery of the local network structure in the flow. Such fluctuations may be caused by small variations in shear history and a non-homogeneous floc structure of the sample suspension. This effect could be minimized by using longer measurement times. Figure 4 shows that there is a narrow transition region between two power laws, which takes place when the shear stress is ca. 10 Pa. To our knowledge, this kind of viscosity behavior has not been reported earlier for MFC suspensions. Such an abrupt change in the rheological behavior of the MFC suspension is likely to be related to a sudden structural change in the suspension, for example, in fibre orientation [31] and/or flocculation [28].
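The viscosity reconstruction just described can be sketched as follows; the near-wall velocity profile here is a synthetic stand-in for the DOCT data, and the value of τ_w is illustrative:

```python
import numpy as np

# mu(y) = tau(y) / gamma_dot(y), with tau(y) = tau_w * (D - 2y) / D and
# gamma_dot(y) = du/dy. The velocity profile u(y) below is synthetic.
D = 8.6e-3                                  # pipe diameter [m]
tau_w = 8.0                                 # wall shear stress [Pa], illustrative
y = np.linspace(2e-6, 200e-6, 100)          # distance from the wall [m]
u = 2.0e-3 * (y / 200e-6) ** 0.5            # synthetic velocity profile [m/s]

tau = tau_w * (D - 2 * y) / D               # local shear stress [Pa]
gamma_dot = np.gradient(u, y)               # local shear rate du/dy [1/s]
mu = tau / gamma_dot                        # local viscosity [Pa s]
```

Because y remains far smaller than D over the probed 200 µm, τ(y) stays within about 5% of τ_w, which is why the text approximates τ(y) ≈ τ_w near the wall.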
For the floc size analysis, 50 images consisting of 1000 successive A-scans were analysed for the amplitude signals. The analysed area was 100 µm long in the radial direction and started 20 µm from the wall to avoid the effect of the wall depletion layer on the results. The uneven OCT amplitude profile (see Figure 3) was eliminated from the images by scaling individual amplitude A-scans with an averaged A-scan amplitude profile. This correction removes all stationary intensity variations from the original A-scans, and the remaining intensity variations are due to temporal differences in the local suspension properties (see Figure 5). A dominant factor causing most of these variations is the local concentration of the suspension. Due to different flow rates and scanning frequencies, the size of the analysed area varied in the axial flow direction between 60 µm and 1.1 mm (pixel sizes in the axial flow direction thus varied between 0.06 µm and 1.1 µm). In order to make the analysis of different flow rates and their axial and radial floc sizes commensurate, all images were resized to a pixel size of 1.1 µm, using MATLAB's (ver. 9.1.0.441655 R2016b, MathWorks, Natick, MA, USA) imresize routine, which performs a bicubic interpolation.
The floc size analysis was performed using the method presented in [8]. The OCT structural images were thresholded separately using the median of intensity. The (length-weighted) distribution of floc dimensions, in both radial and axial directions, was then computed for every image in the sequence as the run-length distributions (see Figure 6). The floc size distributions were log-normal, which is typical for fibers [32,33] and many other flocculating particles [34,35]. The determined distributions were finally averaged to obtain the length-weighted average floc sizes.
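A minimal sketch of the thresholding and run-length steps, using a synthetic random image in place of the corrected OCT amplitude data:

```python
import numpy as np

def run_lengths(binary_row):
    """Lengths of consecutive True runs in a 1-D boolean array."""
    padded = np.concatenate(([0], binary_row.astype(np.int8), [0]))
    edges = np.diff(padded)
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return ends - starts

def length_weighted_mean(runs):
    """Length-weighted mean of a run-length sample: sum(L^2) / sum(L)."""
    runs = runs.astype(float)
    return (runs ** 2).sum() / runs.sum()

# Synthetic stand-in for a corrected OCT amplitude image:
# rows = radial direction, columns = axial (flow) direction.
rng = np.random.default_rng(0)
image = rng.normal(size=(100, 1000))
mask = image > np.median(image)             # threshold at the median intensity

axial_runs = np.concatenate([run_lengths(row) for row in mask])
radial_runs = np.concatenate([run_lengths(col) for col in mask.T])
print(length_weighted_mean(axial_runs), length_weighted_mean(radial_runs))
```

Multiplying the pixel-unit results by the common 1.1 µm pixel size would give floc dimensions in micrometers.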
Figure 7 shows the axial and radial floc size as a function of wall shear stress. Both floc sizes are seen to remain approximately constant below the yield stress of τ_y = 3.4 Pa. Above the yield stress, both floc sizes decrease monotonically and become approximately equal at the highest wall shear stress of 15.2 Pa. Furthermore, the radial and axial floc size distributions are almost identical at the highest wall shear stress (see Figure 6). These observations indicate that the suspension is finally well fluidized. There appears to be an abrupt drop (see the arrow in Figure 7) in both radial and axial floc sizes at τ_w = 10 Pa, after which the floc size remains almost constant up to τ_w = 12 Pa. This sudden structural change coincides with the sudden drop in the viscosity of the suspension (see Figure 4).
The number of data points is, however, rather limited, and additional experiments are needed to verify this interesting behavior. Figure 8 shows the floc aspect ratio α (axial size/radial size) as a function of the wall shear stress. The floc aspect ratio varies considerably at the smallest wall shear stresses, but appears to stay approximately constant, α = 1.6, at least up to a wall shear stress of 6 Pa. The elongation of the MFC flocs (from spheres into prolates) has probably already occurred during the constriction of the flow from the plastic container into the rubber hose [36]. Furthermore, the data in Figure 8 suggest that when the wall shear stress exceeds 6 Pa, the floc aspect ratio starts to decrease monotonically, reaching approximately α ≈ 1 at the highest shear stress. Unfortunately, there are no measurement points in the wall shear stress range of 6-9 Pa, and thus the precise onset of this phenomenon remains unclear. As pointed out in [8], geometry gap dimensions and material may affect flocculation. Especially with small gaps, wall depletion may hinder the breakdown of the flocs, and the floc sizes can reach the dimensions of the gap.
Probably due to the rheometer geometry and the imaging setup, the floc size for an identical MFC suspension varied between 0.1-1 mm in [9], i.e., on a completely different length scale than in the present study. Also, the dynamic behavior of flocculation was qualitatively very different from our results for the pipe flow. During the rheometer measurements, the floc size usually increased until the yield stress was reached, above which the floc size decreased with increasing shear stress.
While the number of flocculation studies with MFC is rather low, the dynamics of flocculation and floc rupture of wood fibers and other materials have been studied extensively [37][38][39][40][41]. The observations on the flocculation and breaking mechanisms have varied a lot, but the relation between the floc size L and shear stress τ has typically been a power law, L = Gτ^(−β). As we see from Figure 7, this is also the case here for the radial floc size L_r, with β = 0.32. Depending on the floc breaking mechanism and inertial range, one can find in the literature various values for β [38,42]. Similar values (β ~ 0.3) have been reported, for example, in [43,44]. The dynamics of the axial floc size L_a is more complicated; the slope for L_a is up to 6 Pa approximately the same as for L_r, but above 6 Pa it is twice as large. It is probable that the elongational forces due to the constriction conditions in the outflow from the container not only stretch, but also break flocs when the wall shear stress exceeds 6 Pa [45,46].
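The power-law relation L = Gτ^(−β) can be fitted by linear regression in log-log space; the data below are synthetic, generated with the reported radial exponent β = 0.32 and an arbitrary prefactor G:

```python
import numpy as np

# Synthetic floc-size data generated with the reported radial exponent
# beta = 0.32; G is an arbitrary prefactor. On noiseless data the log-log
# regression recovers the input exponent.
beta_true, G = 0.32, 40.0
tau = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 15.0])   # wall shear stress [Pa]
L = G * tau ** (-beta_true)                          # floc size [um]

slope, intercept = np.polyfit(np.log(tau), np.log(L), 1)
beta_fit, G_fit = -slope, np.exp(intercept)
print(beta_fit, G_fit)
```

On real, noisy data the same regression gives a least-squares estimate of β, and restricting the fit range (e.g., below or above 6 Pa) would expose the slope change reported for the axial floc size.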
Conclusions
In this work, a sub-micron resolution optical coherence tomography device was used together with a pipe rheometer to analyze the rheology and flocculation dynamics of a 0.5% MFC suspension. The bulk behavior of the studied MFC suspension showed typical shear thinning (power-law) behavior in the interior part of the tube. This was reflected in a monotonically decreasing floc size when the shear stress exceeded the yield stress of the suspension. Here, the radial floc size followed a power law, while the dynamics of the axial floc size was more complicated.
The quantitative viscous behavior of the MFC suspension changed abruptly at the wall shear stress of 10 Pa, which is likely due to a sudden structural change of the suspension. Indeed, there appeared to be a simultaneous abrupt drop in both the radial and the axial floc sizes. The number of data points is, however, rather limited, and additional experiments are needed to verify this flocculation behavior.
The benefit of performing the rheological experiments in a real process geometry (pipe flow) was confirmed by comparing the flocculation results with earlier rheometer studies. While the flocculation behavior was consistent in the current study, the restricted geometries used in earlier studies have likely contributed to the observed intricate flocculation behavior.
As yet, there has been a lack of experimental techniques that would allow direct measurement of flows and internal structures of complex, opaque fluids especially in the immediate vicinity of the wall. OCT provides a remedy for this long-standing grievance by bringing excellent new possibilities for improving and extending the capabilities of existing rheological measurement devices and methods.
"year": 2018,
"sha1": "921e88ab99fd6ce104993cd1fbcbf3619094b659",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/8/5/755/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "7277e2c3a7a3eb5d0c7a5405d05b6a610f508add",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Measurement of the Two-Jet Differential Cross Section in proton-antiproton Collisions at sqrt{s} = 1800 GeV
A measurement is presented of the two-jet differential cross section, d^3\sigma/dE_T d\eta_1 d\eta_2, at center of mass energy sqrt{s} = 1800 GeV in proton-antiproton collisions. The results are based on an integrated luminosity of 86 pb^-1 collected during 1994-1995 by the CDF collaboration at the Fermilab Tevatron collider. The differential cross section is measured as a function of the transverse energy, E_T, of a jet in the pseudorapidity region 0.1<|eta_1|<0.7 for four different pseudorapidity bins of a second jet restricted to 0.1<|\eta_2|<3.0. The results are compared with next-to-leading order QCD calculations determined using the CTEQ4 and MRST sets of parton distribution functions. None of the sets examined in this analysis provides a good description of the data.
PACS numbers: 13.85.Rm, 12.38.Qk

Jet production in proton-antiproton collisions results predominantly from hard interactions between two initial state partons. Theoretical developments in both perturbative next-to-leading order (NLO) and parton shower Monte Carlo calculations permit calculation of many QCD jet processes with theoretical uncertainties small enough to allow detailed comparison with measured distributions [1]. In this paper, we present a measurement of the dijet differential cross section that provides more precise information about the initial state partons than has been probed by previous CDF measurements of inclusive jet transverse energy [2], total transverse energy [3], and dijet mass [4]. All previous measurements showed an excess of events at high jet energies when compared to the QCD prediction based on standard sets of parton distribution functions (PDFs). One explanation for this excess is a larger than expected number of high momentum partons, particularly gluons, in the proton [5,6]. While those measurements provide cross sections averaged over a wide range of their variable, in this analysis we reduce the region over which averages are taken by measuring the cross section for four separate ranges. This provides more detailed information about the cross section shape. Previous measurements of the dijet differential cross section have been performed by the CDF [7] and DØ [8] collaborations with smaller data samples. The present measurement places new constraints on the parton distributions of the proton.
Jet production rates are usually expressed in terms of the transverse energy, E_T, and pseudorapidity, η, of the jets, where η is related to the polar angle θ relative to the proton beam line by η ≡ −ln[tan(θ/2)]. At leading order in QCD, the proton, p, and antiproton, p̄, momentum fractions, x_1 and x_2, carried by the two colliding partons can be expressed as

x_1 = (E_T/√s)(e^{η_1} + e^{η_2}), x_2 = (E_T/√s)(e^{−η_1} + e^{−η_2}).

Here η_1 and η_2 are the pseudorapidities of the two jets, √s is the center of mass energy of the colliding hadrons, and E_T is the transverse energy of the leading jet.
For a fixed E_T and η_1, one can probe higher x values by selecting events in which the second jet has a larger η_2 value. For a given x we have four measurements at what are effectively different values of Q², the square of the four-momentum transferred in the interaction. The four distributions in this analysis allow us to measure the cross section on a surface in the x-Q² phase space whose shape is sensitive to the predictions of different PDFs.
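A small sketch of this kinematics, assuming the standard leading-order relation for a massless, E_T-balanced dijet system, x_{1,2} = (E_T/√s)(e^{±η_1} + e^{±η_2}); the jet values below are illustrative:

```python
import math

def parton_x(et, eta1, eta2, sqrt_s=1800.0):
    """Leading-order momentum fractions for a massless, E_T-balanced dijet:
    x1 = (E_T / sqrt(s)) * (exp(+eta1) + exp(+eta2))
    x2 = (E_T / sqrt(s)) * (exp(-eta1) + exp(-eta2))
    """
    x1 = et / sqrt_s * (math.exp(eta1) + math.exp(eta2))
    x2 = et / sqrt_s * (math.exp(-eta1) + math.exp(-eta2))
    return x1, x2

# A central-central topology versus a central-forward one at the same E_T:
# moving the second jet to eta2 = 2.5 probes a much larger x1.
print(parton_x(100.0, 0.4, 0.4))
print(parton_x(100.0, 0.4, 2.5))
```

This is exactly the handle described in the text: fixing E_T and η_1 while letting η_2 grow pushes one momentum fraction toward high x.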
The constraint on the parton distributions at high x comes mainly from prompt photon production in pp or pA collisions from the WA70 [9] and E706 [10] experiments and from inclusive jet data from the Tevatron [2]. These data do not constrain the parton distributions very well at high x. The higher statistics of this measurement, together with the multiple cross section measurements at different Q² for approximately the same x, provide a precise set of data which can be used to determine improved sets of PDFs. The current measurement, based on data with an integrated luminosity of 86 pb⁻¹ from 1.8 TeV pp̄ collisions taken during the 1994-1995 Fermilab Tevatron collider run, covers the range 0.05 […] The CDF detector is described in detail in [11]. In this analysis we utilize the cen- […] The event vertex is resolved to within 1 mm along the z axis, using time projection chambers surrounding the beam pipe.
A cone algorithm with cone radius R ≡ √((∆φ)² + (∆η)²) = 0.7 is used to identify jets [12]. Transverse energy is defined as E_T = E sin θ, where E is the scalar sum of energy deposited in the calorimeter towers within the cone and θ is the angle formed by the event vertex, the beam direction, and the cone center. Our data sample consists of events collected by on-line identification of at least one jet with transverse energy above trigger thresholds of 20, 50, 70, and 100 GeV at integrated luminosities of 0.091, 2.2, 11, and 86 pb⁻¹, respectively. The bin widths in E_T were chosen to be larger than the measurement resolution on E_T and to ensure sufficient statistics in the bins.
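The cone criterion can be sketched as follows; the tower list, cone center, and the φ-wrapping helper are illustrative, not CDF code:

```python
import math

R = 0.7   # cone radius in (eta, phi) space

def in_cone(eta, phi, eta_c, phi_c, radius=R):
    """True if a tower at (eta, phi) lies inside a cone centered at
    (eta_c, phi_c): sqrt(deta^2 + dphi^2) < radius, with phi wrapped."""
    dphi = (phi - phi_c + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta - eta_c, dphi) < radius

# Hypothetical towers as (eta, phi, E_T): the jet E_T is the scalar sum of
# the transverse energies of the towers inside the cone.
towers = [(0.3, 1.0, 25.0), (0.5, 1.4, 10.0), (1.2, 1.0, 7.0)]
et_jet = sum(et for eta, phi, et in towers if in_cone(eta, phi, 0.4, 1.1))
print(et_jet)   # the first two towers fall inside the cone, the third does not
```

A real cone algorithm also iterates the cone center toward the E_T-weighted centroid and resolves overlapping cones, which this sketch omits.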
In this analysis we use events with at least two jets of E_T > 10 GeV of uncorrected energy. We consider events in which the E_T-weighted centroid of at least one of the two highest E_T jets is in the range 0.1 < |η| < 0.7. This "leading" jet is required to deposit more than 40 GeV E_T, prior to corrections, in the central calorimeter.

In addition, the centroid of the second leading jet is required to be in the region 0.1 < |η| < 3.0, and the primary event vertex must be located within ±60 cm of the nominal interaction point. Poorly measured events and background from cosmic rays, beam halo, and detector noise are removed by requiring that the total energy recorded by the detector be less than 2000 GeV and E_T^miss/√(ΣE_T) < 6 √GeV, where E_T^miss is the missing transverse energy and ΣE_T is the scalar sum of the total transverse energy.
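The cleaning cuts above can be expressed as a simple event filter; the field names and the toy event record are hypothetical, while the thresholds are the ones quoted in the text:

```python
import math

def passes_cleaning(event):
    """Apply the vertex, total-energy, and missing-E_T significance cuts.
    Field names are illustrative; thresholds are those quoted in the text."""
    if abs(event["z_vertex_cm"]) >= 60.0:       # vertex within +/- 60 cm
        return False
    if event["total_energy_gev"] >= 2000.0:     # reject mismeasured events
        return False
    # missing-E_T significance: MET / sqrt(sum E_T) < 6 sqrt(GeV)
    return event["met_gev"] / math.sqrt(event["sum_et_gev"]) < 6.0

event = {"z_vertex_cm": 12.0, "total_energy_gev": 450.0,
         "met_gev": 15.0, "sum_et_gev": 180.0}
print(passes_cleaning(event))
```

The missing-E_T significance cut is what rejects cosmic-ray and beam-halo backgrounds, which deposit energy without a balancing jet.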
In this analysis, we evaluate the E_T spectrum of the leading jet for the following four η bins of the second leading jet in the event: 0.1 < |η_2| < 0.7, 0.7 < |η_2| < 1.4, 1.4 < |η_2| < 2.1, and 2.1 < |η_2| < 3.0. The η_2 ranges were chosen to place regions of reduced response (due to gaps between detectors) within single bins while at the same time maintaining a sufficient number of events in the bins. Both jets are included in the distribution for the 0.1 < |η_2| < 0.7 bin if each satisfies the requirement 0.1 < |η| < 0.7 and E_T > 40 GeV.
Since the calorimetric response varies as a function of η, we determine the trigger response separately for each η 2 bin. The trigger efficiency was measured using overlapping E T regions for the different trigger thresholds. For the 20 GeV trigger threshold, for which no lower E T trigger was available, the second jet in the event was used to determine the trigger efficiency. For the four trigger thresholds, the trigger efficiency was found to be greater than 90% for jets of E T greater than 40, 82, 105, and 130 GeV.
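The overlap method for the trigger efficiency can be sketched as follows; all event counts below are synthetic:

```python
import numpy as np

# Overlap method: the efficiency of a higher trigger threshold is estimated,
# bin by bin in jet E_T, as the fraction of events collected by a fully
# efficient lower-threshold trigger that also fire the higher threshold.
et_bins = np.array([60, 70, 80, 90, 100, 110])    # bin edges [GeV]
n_low = np.array([5000, 3000, 1800, 1100, 700])   # events from the low trigger
n_both = np.array([1500, 2400, 1750, 1095, 700])  # ...that also fire the high one

efficiency = n_both / n_low
plateau = et_bins[:-1][efficiency > 0.90]
print(efficiency, plateau)
```

Picking the lowest bin where the efficiency exceeds a chosen plateau criterion (here 90%) gives the E_T above which the higher trigger can be used without a large efficiency correction.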
The measured jet E T must be corrected for calorimeter non-linearity and loss of energy in the gaps between calorimeters. In addition, the measured jet E T spectrum must be corrected for the smearing effect caused by the resolution in the measured jet E T . We simultaneously correct all these effects with the procedure used in our previous measurement of the inclusive jet E T spectrum [2]. The corrected cross sections, for the central η bin and each η 2 bin, are listed in Tables 1 and 2 and plotted in Figure 1.
The systematic error on the measurement of the jet cross section is dominated by the uncertainty in the measurement of the jet E T magnified by the steep slope of the E T spectrum. Although the same sources of uncertainty contribute to the cross section of each E T bin, the uncertainty depends on the local slope of the E T spectrum. The systematic uncertainties were evaluated as in References [2] and [13].
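The amplification of the E_T uncertainty by the steeply falling spectrum can be quantified with a simple power-law model: if dσ/dE_T ∝ E_T^(−n) locally, a fractional energy-scale shift δ changes the measured cross section by (1 + δ)^n − 1 ≈ nδ. A minimal sketch, with an illustrative slope value not taken from the paper:

```python
def cross_section_shift(local_slope_n, et_scale_shift):
    """Fractional change of a falling power-law spectrum
    (dsigma/dE_T ~ E_T^-n) under a fractional shift of the E_T scale:
    (1 + delta)^n - 1, approximately n * delta for small delta."""
    return (1.0 + et_scale_shift) ** local_slope_n - 1.0

# Illustrative: a 1% E_T-scale uncertainty on a spectrum with local
# slope n = 6 translates into a ~6% cross-section uncertainty.
uncertainty = cross_section_shift(6, 0.01)
```

This is why the text notes that the uncertainty depends on the local slope of the E_T spectrum: the same energy-scale error costs more where the spectrum falls faster.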
The uncertainties include: charged hadron response at high p T (h pt); calorimeter response to low-p T hadrons (l pt); ± 1 % on the jet energy of the absolute calibration of the calorimeter (esc); jet fragmentation functions used in the simulation (frag); ± 30% on the underlying event energy in the jet cone (uevt); detector response to electrons and photons (e/ph); and modeling of the detector jet energy resolution (cres). The resolution on the measured η causes events to migrate between adjacent bins. In the highest η bin, the gap between the plug and forward calorimeters results in decreased η resolution and has the effect that more events migrate out of the bin than into it. To compensate for this effect, we have applied an E T -dependent correction which is less than 8% in all bins. The effect was studied by breaking it into two components, the resolution on the measured η (η res) and a systematic shift.

Table 2: The measured dijet differential cross sections for 1.4 < |η 2 | < 2.1 and 2.1 < |η 2 | < 3.0. The differential cross section is given for the average E T of the bin. The statistical and systematic errors are shown as a percentage of the central value.

In Figure 2, the difference between the fully corrected two-jet differential cross section and the predicted cross section is divided by the predicted cross section and plotted as a function of the leading jet E T for the four η ranges of the second jet.
The theory predictions were calculated using the NLO calculation of the JETRAD program [14] with the PDFs indicated. The calculations use a renormalization scale and a jet-merging parameter R sep, where R sep is a measure of the maximum separation between the cones of two jets that are merged into one. The error bars represent the statistical errors, while the shaded bands represent one standard deviation of the systematic error, which is correlated for all the different E T values. The data are compared to the predicted cross section obtained using the PDF set CTEQ4M [5].
The solid curve shows the expected results when using CTEQ4HJ [5], and the dashed curves show the results when using the PDF set MRST [15]. The MRST set of PDFs is based on a wide range of deep inelastic scattering data and has an improved treatment of heavy flavors and prompt photon production compared with previous MRST sets. The main constraint on the gluon at high x comes from prompt photon production from the WA70 [9] and E706 [10] data. The set MRST(g↑) was derived assuming that there is no initial state partonic transverse momentum (< k T > = 0); this does not lead to a good fit for the prompt photon data from the E706 experiment. The set labelled MRST(g↓) was derived by allowing non-zero < k T > while maintaining reasonable agreement with the WA70 data. The MRST(g↓) set has < k T > = 0.64 GeV. These two sets represent the extreme values of < k T > that yield reasonable agreement with the data used in the fit. The set labelled MRST represents the preferred set from the global analysis and has < k T > = 0.4 GeV.
The covariance matrix for the dijet cross section is C ij = δ ij σ i (stat) σ j (stat) + Σ k σ i (sys k ) σ j (sys k ), where σ i (stat) is the statistical uncertainty in bin i and σ i (sys k ) is the systematic uncertainty, k, on bin i. The sum is over the 13 sources of systematic errors listed above, and the matrix indices run over all the E T bins in each of the four η bins.
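The covariance construction described here, a diagonal statistical term plus systematic sources treated as fully correlated across bins, can be sketched numerically. A minimal illustration with toy numbers (the 2×2 inversion in `chi2` is kept explicit only for simplicity of the sketch):

```python
def covariance(stat, sys):
    """Covariance matrix (list of lists): diagonal statistical term plus
    systematic sources, each taken as 100% correlated across bins.

    stat : per-bin statistical uncertainties, length n
    sys  : list of per-source uncertainty lists, each of length n
    """
    n = len(stat)
    cov = [[0.0] * n for _ in range(n)]
    for i in range(n):
        cov[i][i] += stat[i] ** 2
        for j in range(n):
            cov[i][j] += sum(src[i] * src[j] for src in sys)
    return cov

def chi2(data, theory, cov):
    """chi^2 = d^T C^{-1} d with d = data - theory, for the 2-bin toy
    case (explicit 2x2 inverse)."""
    d = [x - y for x, y in zip(data, theory)]
    (a, b), (c, e) = cov
    det = a * e - b * c
    inv = [[e / det, -b / det], [-c / det, a / det]]
    return sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
```

Because a correlated systematic pulls all bins coherently, the off-diagonal terms reduce the χ² penalty for an overall shift relative to the naive uncorrelated sum.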
We calculate the χ 2 for comparisons of the data with predictions obtained from different PDFs. The fit to the data has 51 degrees of freedom.
In summary, we have measured the differential cross section for dijet production in pp̄ collisions with one jet restricted to the pseudorapidity region 0.1 < |η 1 | < 0.7 for four different pseudorapidity bins of a second jet restricted within 0.1 < |η 2 | < 3.0. By allowing the pseudorapidity of the second jet to vary through 0.1 < |η| < 3.0, we are able to map out the cross section over the available kinematic phase space and provide a differential cross section that more tightly constrains the parton distributions of the proton than the measurements previously reported by us. The measurement provides more precise information about the parton distributions of the proton in the high x region, an area which is not well constrained, and will provide useful input to QCD global fits. The resulting improved sets of PDFs will help to further enhance our knowledge of the structure functions of the proton.
Allelic Dropout Is a Common Phenomenon That Reduces the Diagnostic Yield of PCR-Based Sequencing of Targeted Gene Panels
Primary cardiomyopathies (CMPs) are monogenic but multi-allelic disorders with dozens of genes involved in pathogenesis. The implementation of next-generation sequencing (NGS) approaches has resulted in more time- and cost-efficient DNA diagnostics of cardiomyopathies. However, the diagnostic yield of genetic testing for each subtype of CMP fails to exceed 60%. The aim of this study was to demonstrate that allelic dropout (ADO) is a common phenomenon that reduces the diagnostic yield in primary cardiomyopathy genetic testing based on targeted gene panels assayed on the Ion Torrent platform. We performed mutational screening with three custom targeted gene panels based on sets of oligoprimers designed automatically using AmpliSeq Designer® containing 1049 primer pairs for 37 genes with a total length of 153 kb. DNA samples from 232 patients were screened with at least one of these targeted gene panels. We detected six ADO events in both IonTorrent PGM (three cases) and capillary Sanger sequencing (three cases) data, identifying ADO-causing variants in all cases. All ADO events occurred due to common or rare single nucleotide variants (SNVs) in the oligoprimer binding sites and were detected because of the presence of “marker” SNVs in the target DNA fragment. We ultimately identified that PCR-based NGS involves a risk of ADO that necessitates the use of Sanger sequencing to validate NGS results. We assume that oligoprimer design without ADO data affects the amplification efficiency of up to 0.77% of amplicons.
INTRODUCTION
In recent years, the study of the genetic causes of monogenic diseases has evolved from a basic science research area into widely accepted clinical testing protocols with substantial impacts on diagnostics and clinical decision-making (Ackerman et al., 2011). Primary cardiomyopathies (CMP) are monogenic but multi-allelic disorders with dozens of genes involved in pathogenesis (Hershberger et al., 2018). The prevalence of clinically expressed hypertrophic cardiomyopathy (HCM) and of HCM gene carriers has been greatly underestimated and could be as high as 1:200 (Semsarian et al., 2015).
The implementation of NGS approaches has resulted in more time- and cost-efficient DNA diagnostics of cardiomyopathies. However, the diagnostic yield of genetic testing for each subtype of CMP fails to exceed 60% (Hershberger et al., 2018). Negative results obtained by genetic testing do not rule out the presence of genetic disease because our knowledge about the molecular pathogenesis of disease is still evolving. Moreover, the technical limitations of all known techniques of DNA/RNA analysis and variant interpretation contribute to incomplete results. Alternative sequencing approaches such as capillary Sanger sequencing are used to confirm the genetic variants found by NGS methods and increase the reliability of the DNA test results (Baudhuin et al., 2015).
Allelic dropout (ADO) is a common phenomenon that reduces the efficiency of PCR-based targeted sequencing. It was first described in 1991 as a "partial amplification failure," causing a potential source of misdiagnosis for both dominant and recessive diseases (Navidi and Arnheim, 1991). The practical importance of the ADO phenomenon was originally shown in 1997 by Lissens and Sermon in a case of preimplantation genetic diagnosis of cystic fibrosis, wherein the heterozygous ΔF508 mutation in the CFTR gene was not detected in 25% of mutant blastomeres (Lissens and Sermon, 1997). The ADO phenomenon involves selective allele amplification during the polymerase chain reaction (PCR) thermocycling process. The presence of single nucleotide variants (SNVs) in the forward and/or reverse oligoprimer binding sites may lead to complete or partial failure of amplification of one allele, which then "drops out" during the PCR process. In such cases, SNVs causing ADO are usually located closer to the 3′ end of the oligoprimer binding site (Martins et al., 2011).
Bi-directional capillary Sanger sequencing and high-throughput semiconductor sequencing approaches are routinely used for cross-validation of genetic findings (Baudhuin et al., 2015; Di Resta and Ferrari, 2018). Both approaches are PCR-based, share similar limitations, and may be negatively impacted by ADO. However, the incidence of ADO events in these PCR-based diagnostic assays remains unknown.
The aim of this study was to demonstrate that ADO is a common phenomenon influencing the diagnostic yield of targeted gene panel testing of primary CMPs on the Ion Torrent platform with follow-up verification by Sanger sequencing.
MATERIALS AND METHODS
We performed genetic testing on DNA samples from 232 patients diagnosed with inherited cardiomyopathies in clinical centres. This study was performed in accordance with the 1964 Helsinki declaration and its later amendments and was approved by the local ethics committee. Written informed consent was obtained from all individual participants included in the study. DNA samples were extracted from venous blood using the Quick-DNA Miniprep Plus Kit (Zymo Research Corp., Irvine, CA, USA) according to the manufacturer's instructions.
Mutational screening was performed using three custom targeted gene panels with two sets of oligoprimers designed automatically using Ion AmpliSeq Designer® (Thermo Fisher Scientific, Waltham, MA, USA), containing 1,049 primer pairs for 37 genes with a total length of 153 kb. More detailed characteristics of each targeted gene panel are presented in Supplementary Table 1. The manufacturer grouped the primers for each panel into two pools. Library preparation was performed using the Ion AmpliSeq™ Library Kit 2.0 according to the manufacturer's instructions (Thermo Fisher Scientific). Sequencing was performed on Ion 314™ and Ion 316™ chips using high-throughput semiconductor sequencing on an Ion PGM™ System according to the manufacturer's instructions (Thermo Fisher Scientific). The average number of reads per amplicon was 192; 94.7% of target bases were covered by at least 20 reads and 79.51% by at least 100 reads. Data from the Ion PGM™ System were processed with the CoverageAnalysis and VariantCaller plugins available within the licensed Torrent Suite Software 5.6.0 and Ion Reporter Software (Thermo Fisher Scientific). NGS sequencing reads were visualized using the Integrative Genomics Viewer (IGV) tool (Robinson et al., 2011) with hg19 as the reference genome. All DNA samples were screened with at least one of the targeted gene panels mentioned.
Rare genetic variants detected by NGS were verified via bi-directional capillary Sanger sequencing on an ABI 3730XL DNA Analyzer according to the manufacturer's instructions (Thermo Fisher Scientific). Alternative pairs of oligoprimers flanking the coding and adjacent intronic regions of the 37 genes were designed for PCR using open-source PerlPrimer (Marshall, 2004). The PCR protocol and annealing temperature of the primers were determined experimentally. The results of direct Sanger sequencing were visualized using Chromas 2 software (Technelysium Pty Ltd, South Brisbane, Australia).
All archival direct Sanger sequencing chromatograms were involved in the study to track the ADO phenomenon. Genetic variants found by NGS were visually compared with Sanger sequencing chromatograms, noting the possible loss of heterozygosity or underrepresentation of alternative alleles. To reveal the cause of ADO, forward-and reverse-primer binding sites were analysed using the Genome Aggregation Database (gnomAD) (Karczewski et al., 2020). In order to exclude only one allele amplification, all amplicons with noted or suspected ADO cases were re-sequenced with alternative non-overlapping oligoprimer pairs.
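The primer-binding-site screen described above, checking gnomAD for variants falling under each oligoprimer, can be sketched as a simple interval-overlap check. This is an illustrative helper with hypothetical inputs, not the pipeline actually used in the study:

```python
def primer_risk_variants(primer_start, primer_end, three_prime_end, variants,
                         maf_cutoff=0.0):
    """Flag known variants inside a primer binding site and report each
    one's distance to the primer's 3' terminus (closer = higher ADO risk).

    variants        : iterable of (genomic_position, maf) tuples, e.g.
                      parsed from a gnomAD export (hypothetical input).
    three_prime_end : genomic coordinate of the primer's 3' terminus.
    """
    hits = []
    for pos, maf in variants:
        if primer_start <= pos <= primer_end and maf >= maf_cutoff:
            hits.append((pos, maf, abs(three_prime_end - pos)))
    # Variants nearest the 3' end are the most likely ADO culprits.
    return sorted(hits, key=lambda h: h[2])
```

Raising `maf_cutoff` restricts the report to common SNVs, the class that caused the capillary-sequencing ADO events in this study; leaving it at zero also catches the rare variants responsible for the NGS events.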
All genetic variants newly detected in this study were registered in public database ClinVar (https://www.ncbi.nlm. nih.gov/clinvar/). List of variants with accession numbers is summarized in Supplementary Table 2.
RESULTS
We performed mutational screening on 232 DNA samples from patients diagnosed with different types of inherited CMPs. The DNA samples were screened with at least one of the three targeted gene panels.
We found that three ADO cases occurred during sequencing on the IonTorrent platform and three occurred during capillary Sanger sequencing. In the targeted gene panels, ADO led to underrepresentation/loss of marker variants in NGS reads ( Table 1). The Sanger sequencing chromatograms revealed a dropout of the allele due to the loss of heterozygosity of the already detected ("marker") SNV (Table 1). Control capillary resequencing using additional alternative oligoprimers confirmed the true allelic status.
We identified the cause of ADO in all six cases ( Table 1). In the targeted genes, ADO was caused by rare or unique genetic variants in the oligoprimer binding sites; in capillary Sanger sequencing, all ADO cases occurred due to common SNVs.
We found that the ADO phenomenon may lead not only to the complete loss of an allele but also to the underrepresentation of the "marker" variant in NGS reads. For example, the heterozygous rare missense variant c.641G>A (p.R214Q) in the SCN1B gene was detected in only 1 of 2 overlapping amplicons by targeted IonTorrent sequencing and was represented in 5% of all reads (Figure 1A). This caused the loss of two missense variants, c.744C>A (p.S248R) and c.749G>C (p.R250T), and only reads with the wild-type allele were displayed. The presence of all three linked heterozygous missense variants, c.641G>A (p.R214Q), c.744C>A (p.S248R), and c.749G>C (p.R250T), in the SCN1B gene in the DNA sample was confirmed by control Sanger sequencing (Figure 1B).
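The underrepresentation described here can be caught by inspecting the variant allele fraction (VAF) of heterozygous calls: a heterozygous SNV far below the expected ~50% of reads (5% in the SCN1B example) is a red flag for selective allele amplification. A minimal sketch, where the 25% threshold is an illustrative choice and not a value from the study:

```python
def allele_fraction(alt_reads, total_reads):
    """Variant allele fraction: fraction of reads carrying the ALT allele."""
    return alt_reads / total_reads if total_reads else 0.0

def flag_possible_ado(alt_reads, total_reads, lower=0.25):
    """Flag heterozygous calls whose VAF sits suspiciously below the
    ~0.5 expected for a true heterozygote (but above zero)."""
    vaf = allele_fraction(alt_reads, total_reads)
    return 0 < vaf < lower
```

Such flagged calls are exactly the ones at risk of being filtered out automatically during NGS data processing, as noted in the Discussion.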
To reproduce this case of ADO in a single (i.e., nonmultiplexed) PCR, we performed a single control PCR with two oligoprimers designed by AmpliSeq and flanking genomic region chr19:35524839-35525003 (the corresponding target region of the SCN1B gene). This amplicon was sequenced separately by capillary Sanger sequencing and the loss of the allele containing the c.641G>A (p.R214Q) variant was reproduced ( Figure 1C).
Cross-validation of DNA diagnostic results using an alternative sequencing approach (capillary Sanger sequencing was performed first as a basic method) allowed us to identify a case of allelic dropout in the SCN5A gene in an Iranian family with Brugada syndrome (Figure 2A). Heterozygous missense variant c.4516C>T (p.P1506S) in exon 26 of the SCN5A gene was found by capillary Sanger sequencing in DNA samples of the proband ( Figure 2B) and proband's brother. However, further cascade familial screening revealed this missense variant in the proband's nephew in the hemizygous state ( Figure 2C). Theoretically, the hemizygous state of this c.4516C>T variant in the II-4 family member may involve alternative explanations such as consanguinity in the family or de novo deletion of the SCN5A gene in the maternal allele, but it did not fit the clinical phenotype and family history. Dropout of the wild-type allele was the most reliable explanation.
Manual analysis of oligoprimer binding sites using gnomAD revealed the presence of a common genetic variant c.4542+89C>T [total minor allele frequency (MAF) 0.098] in the 3 ′ -end of the R-primer which caused ADO. This heterozygous SNV was detected in DNA samples of patient II-4 and his mother by PCR-RFLP analysis using an additional pair of oligoprimers flanking this region. Absence of genetic variant c.4516C>T (p.P1506S) in the mother's DNA sample was confirmed by capillary Sanger sequencing with two independent oligoprimers and PCR-RFLP analysis. Control re-sequencing on the IonTorrent platform showed that family member II-4 is a carrier of heterozygous c.4516C>T (p.P1506S) ( Figure 2D).
In one case, we detected ADO when comparing results from two consecutive targeted gene panel sequencing assays with overlapping gene spectra and different oligoprimers encompassing the target regions of the LDB3 gene. A variant of unknown significance, c.1051A>G (p.T351A), in the LDB3 gene in sample ARVD19 was detected only by panel I ("Genes encoding desmosomal and associated proteins") (Supplementary Figure 1A) but not by panel II ("Genes encoding sarcomeric and associated proteins") (Supplementary Figure 1B). This SNV was also confirmed by control capillary sequencing (Supplementary Figure 1C).
We also found that ADO may occur not only due to SNVs located near the 3′ end of oligoprimer binding sites, but also due to SNVs close to the 5′ end. The deep intronic variant c.2300-195A>G in the PKP2 gene was located close to the 5′ end and led to ADO in nine samples studied (Table 1).
We found that ADO is an inconsistent process, even in the same DNA sample. For example, three consecutive PCR capillary sequencing runs with sample ARVD16 yielded one positive result (the heterozygous variant c.2091A>G was detected) and two negative results (complete loss of the c.2091A>G variant). This ADO event was caused by the intronic variant c.1904-49T>A located at the 3′ end of the primer-binding site.
All amplicons with identified ADO events were carefully resequenced using alternative oligoprimer pairs and selective allele amplification was confirmed in all cases ( Table 1).
DISCUSSION
Currently, ADO is a known limitation of PCR-based molecular diagnostic approaches. Through different mechanisms, a single allele amplifies exclusively or predominantly, leading to overrepresentation of homozygosity (Wang et al., 2012).
The automatization of all molecular diagnostic procedures, from primer design to variant detection, calling, and interpretation, has increased the number of samples tested simultaneously; awareness of ADO events should increase accordingly. The Clinical Laboratory Standards Institute (CLSI) Guidelines recommend that assay development and quality control should include measures aimed at both detecting allelic dropout and minimizing its occurrence (CLSI, 2012).
In this study, six ADO events were identified across both PCR-based sequencing platforms, and the cause was revealed in every case. On the IonTorrent platform, ADO events were caused by rare or unique genetic variants in the oligoprimer binding sites; on the capillary sequencing platform, they were caused by common SNVs in the oligoprimer binding sites (Table 1). As a result of such selective amplification, we observed partial hemizygosity or underrepresentation of the heterozygous genetic variants in the NGS results. Some of these underrepresented variants may be filtered automatically during NGS data processing. Cross-validation of the genetic findings revealed by one sequencing platform with an alternative approach is a powerful method to decrease the rate of false-positive results in genetic testing. However, there is no universally accepted method to decrease, let alone effectively detect, partial hemizygosity due to allelic dropout. It seems that resequencing of the region of interest with two independent oligoprimer pairs remains the "gold standard" of DNA diagnostics.
There are two types of causes of the ADO phenomenon described in the literature (Wang et al., 2012): (1) "sample-specific" causes due to the quality of the DNA sample or a low DNA concentration. Such ADO cases are found in forensic diagnostics, where only fragmented or degraded DNA is available, as well as in preimplantation diagnostics, where genotyping is performed on DNA extracted from one blastomere; (2) "locus-specific" causes due to the characteristics of the locus under investigation. The presence of single nucleotide polymorphisms (SNPs) in the binding sites of the forward and/or reverse primers disturbs the specificity of the complementary interaction between the oligonucleotide and the target DNA sequence, preventing oligoprimer hybridization and elongation of the amplicon.
All ADO events revealed in this study involved locus-specific causes due to the characteristics of individual loci in normal concentrations of DNA.
In cases of locus-specific allelic dropout, the causal SNV in the oligoprimer binding site is usually located close to the 3′ end of the oligoprimer. This was initially reported in a study by Martins et al. (2011). The authors used the rs2247836 variant (MAF = 0.403 in the European population and 0.323 in the African population) in intron 4 of the PAH gene to evaluate the probability of ADO depending on SNV location in oligoprimer binding sites. Four alternative variants of forward oligoprimers were designed containing the SNV in the 3rd, 5th, and 7th positions from the 3′ end of the oligoprimer. Sanger sequencing was performed for patients carrying the heterozygous genetic variant rs2247836 and the mutation p.Arg158Gln in exon 5 of the PAH gene. Loss of heterozygosity was detected for all positions of the ADO-causing SNV. The authors recommended careful consideration during primer design of rare/common SNVs within 7 nucleotides of the 3′ end of oligoprimers. We found that the presence of SNVs close to the 5′ end of oligoprimers may also cause ADO events. Convincing data suggest that any polymorphic position within the oligoprimer sequence potentially reduces the accuracy of DNA diagnostics (Martins et al., 2011).
Data from 30,769 reported genotypes for eight mutations involved in four diseases show that, on average, allele dropout/drop-in potentially leading to misdiagnosis occurred in 0.44% of genotype results (Blais et al., 2015). We re-analysed the oligoprimer binding sites containing SNVs within overlapping amplicons and found additional amplicons that may be missed due to ADO events. The presence of SNVs in these fragments was exhibited in the NGS reads as underrepresented variants and/or was revealed in isolated reads (Table 2).
We hypothesize that the risk of ADO would increase with the number of target genes and the overall panel size, because this increases the number of oligoprimer pairs required for coverage. Potential causes of ADO (SNVs) were found in 4 of 521 amplicons in panel 1 (0.77%). An increasing number of studies discuss the importance of ADO in DNA diagnostic procedures (Tester et al., 2006; Coulet et al., 2010; Medlock et al., 2012; Rossetti et al., 2012; Lam and Mak, 2013; Shmukler et al., 2013; Rhees et al., 2014; Blais et al., 2015; Proost, 2016). This phenomenon was detected in oncogenetics (BRCA1/2 testing), inborn metabolic disease genotyping (FAH testing), hematology research, etc. (Lam and Mak, 2013; Shmukler et al., 2013; Jeong et al., 2019). We surmise that the actual number of ADO events remains unknown and may significantly exceed the events actually detected. The risk of negative impacts of possible polymorphic sites in automatically generated primer sequences on sequencing results remains high and depends on the number of overlapping amplicons. It seems that underrepresentation of genetic variants in NGS reads does not depend on the read depth: Jeong et al. demonstrated a reproducible ADO phenomenon across three consecutive resequencing runs on the Ion S5, with read depths from 1,985 to 8,608 (Jeong et al., 2019).
Allelic dropout can lead to underdetection of rare variants within missing amplicons, increasing the false-negative rate in a diagnostic setting, and can cause mistaken assignment of heterozygous genotypes as homozygotes, with underestimation of the observed heterozygosity in population studies. A simple strategy to rescue a dropped allele is repeated genotyping with non-overlapping pairs of oligoprimers, but in daily practice replicate genotyping is costly. Increasing the tiling density of amplicons would also be helpful, but it requires more oligos for design and synthesis and noticeably increases the assay price. Another way could be to improve automatic primer design tools with continuously updated dbSNP and gnomAD data to avoid inclusion of SNPs in the primer sequences. Analysis of the secondary structure of primer and template sequences would also be important for the design algorithm (Lam and Mak, 2013). Regular updates on SNV distribution and prevalence in the human genome and improvements in primer design algorithms would greatly improve the diagnostic yield of molecular genetic testing.
In conclusion, PCR-based sequencing technologies such as next-generation sequencing and Sanger sequencing are widely used in clinical practice. Despite their high throughput and constantly improving efficiency, limitations remain for the application of these technologies.
All PCR-based methods involve the risk that ADO will decrease the diagnostic yield of genetic testing because of undetectable, potentially pathogenic variants. We demonstrate that ADO is a common phenomenon in both NGS and Sanger sequencing results.
Theoretically, ADO may affect up to 0.77% of amplicons. It seems that the actual rate of ADO may be even higher and is dependent on the number of oligoprimer pairs. Specific software that incorporates updates on the distribution of SNVs to avoid ADO resulting from automatic oligoprimer design would substantially increase the accuracy of molecular research.
Oligoprimer sequences are available upon request.
DATA AVAILABILITY STATEMENT
The datasets for this article are not publicly available due to concerns regarding participant/patient anonymity. Requests to access the datasets should be directed to the corresponding author. The ClinVar reference numbers for the variants discussed in the article are present in the article text and/or Supplementary Material.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the local Ethics Committee of Petrovsky National Research Center of Surgery (Moscow, Russia). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
AS, AB, and SS performed the wet-lab genetic investigation. AS performed the data analysis and drafted the manuscript. EZ managed the project, edited, and gave final approval of the manuscript. All authors read, discussed, and approved the manuscript as submitted.
FUNDING
This work was supported by Russian Science Foundation Research grant № 16-15-10421.
Tumor microenvironment evaluation promotes precise checkpoint immunotherapy of advanced gastric cancer
Background Durable efficacy of immune checkpoint blockade (ICB) occurred in a small number of patients with metastatic gastric cancer (mGC), and the determinant biomarker of response to ICB remains unclear. Methods We developed an open-source TMEscore R package to quantify the tumor microenvironment (TME) to aid in addressing this dilemma. Two advanced gastric cancer cohorts (RNAseq, N=45 and NanoString, N=48) and other advanced cancers (N=534) treated with ICB were leveraged to investigate the predictive value of TMEscore. Simultaneously, multi-omics data from The Cancer Genome Atlas of Stomach Adenocarcinoma (TCGA-STAD) and the Asian Cancer Research Group (ACRG) were interrogated for underlying mechanisms. Results The predictive capacity of TMEscore was corroborated in a cohort of patients with mGC treated with pembrolizumab in a prospective phase 2 clinical trial (NCT02589496, N=45, area under the curve (AUC)=0.891). Notably, TMEscore, which has a larger AUC than programmed death-ligand 1 combined positive score, tumor mutation burden, microsatellite instability, and Epstein-Barr virus, was also validated in the multicenter advanced gastric cancer cohort using NanoString technology (N=48, AUC=0.877). Exploration of the intrinsic mechanisms of TMEscore with TCGA and ACRG multi-omics data identified TME-pertinent mechanisms including mutations, metabolism pathways, and epigenetic features. Conclusions The current study highlighted the promising predictive value of TMEscore for patients with mGC. Exploration of TME in multi-omics gastric cancer data may provide the impetus for precision immunotherapy.
BACKGROUND
Clinical trials of immune checkpoint blockade (ICB) antibodies, such as anti-programmed cell death protein 1 (PD-1) and anti-programmed death-ligand 1 (PD-L1), showed manageable toxicity and antitumor activity in patients with advanced gastric cancer (GC) in the ATTRACTION-2 and KEYNOTE-059 trials. 1 2 However, different studies of ICB treatment revealed a highly variable objective response rate, ranging from 10% to 26% in patients with GC. 1 3 4 Hence, identifying precise biomarkers to discriminate potential responders to immune therapies remains an urgent priority.
Biomarkers predictive of ICB response are under active investigation. Currently, the PD-L1 combined positive score (CPS), microsatellite instability-high (MSI-H) status, and tumor mutation burden (TMB) are widely recognized as promising biomarkers suggesting greater efficacy of ICB, despite some limitations. 5 6 Immunohistochemistry (IHC)-based PD-L1 CPS is the most widely adopted but remains controversial owing to the heterogeneity of PD-L1 expression, unstandardized detection processes, and varying positivity criteria. 7 Moreover, ATTRACTION-2 suggested that the survival benefit with nivolumab in GC was independent of PD-L1 positivity (<1% vs ≥1%), indicating that PD-L1 positivity might miss some responders. 1 Patients with high TMB have a higher chance of mobilizing a host immune reaction, and thus of responding to ICB, but TMB faces several measurement hurdles. [8][9][10] Likewise, MSI-H leads to the accumulation of somatic mutations but is rarely detected in patients with GC. 11 12 The common ground of these biomarkers is their focus on the inherent characteristics of tumor cells and their neglect of interactions with tumor microenvironment (TME) components, 13 which may partly explain the unsatisfactory results of GC clinical trials exploring predictive biomarkers for ICB.
The TME, comprising various immune cells, stromal cells, and extracellular components, profoundly affects tumorigenesis, progression, and therapeutic resistance. [14][15][16][17] Increasing evidence indicates the involvement of the TME in the antitumor process, which can facilitate prediction of ICB response. 15 18 Research reveals that a fraction of cancer-associated fibroblasts (CAFs), myeloid-derived suppressor cells, and macrophages can hijack ICB immunotherapy. 6 17 19 Additionally, TME stromal signals of the epithelial-mesenchymal transition (EMT)-related gene signature and transforming growth factor-beta (TGF-β) 6 20 restrain antitumor immunity and the response to ICB. However, ways to integrate these parameters remain underexplored, hindering the optimization of selection strategies for potential ICB responders. Obstacles include inaccurate combinations of these parameters and uncertain interactions among these signatures.
Investigating multi-omics data of 1524 patients with GC, we previously established a methodology termed TMEscore 15 to evaluate the immune cell infiltration pattern. The TMEscore is promising in determining responsiveness to ICB in melanoma and metastatic urothelial cancer. To improve it, we optimized the TMEscore evaluation and verified its clinical utility in advanced gastric cancer using NanoString technology. 18 21 22 We incorporated our TME-evaluation methodology into an open-source R package, TMEscore, to predict tumor immunogenicity and ICB sensitivity from bulk transcriptomic data. To understand TMEscore-related tumor-intrinsic characteristics and antitumor immunity, we comprehensively analyzed genomic characteristics, molecular subtypes, and metabolic and methylation features. The genomic and molecular biomarkers of response and resistance to ICB that we identified demonstrate the complex host-tumor interplay in treatment response.
METHODS
Human gastric cancer specimens and NanoString gene expression analysis
Formalin-fixed paraffin-embedded or fresh-frozen tumor tissue from multiple clinical centers was collected retrospectively at baseline before checkpoint immunotherapy. Tumor responses were evaluated according to RECIST V.1.1 criteria. Tumor specimens derived from patients with mGC (up to 90 days from treatment start) were processed as previously described by Ayers et al. 21 Of 70 specimens from five clinical centers (Nanfang Hospital of Southern Medical University, Sun Yat-sen University Cancer Center, Guangdong Provincial Hospital of Chinese Medicine, The Sixth Affiliated Hospital of Sun Yat-sen University and The First Affiliated Hospital of Sun Yat-sen University), 48 were of sufficiently high quality for RNA evaluation. A minimum of approximately 80 ng of total RNA was used to measure the expression of 51 TMEscore genes, comprising 25 TME signature A genes, 19 TME signature B genes and seven checkpoint-related genes (eg, PD-L1, LAG3, PDCD1LG2, CTLA4, TIGIT, TIM3 and PDCD1), and 10 housekeeping genes (ACTB, ABCF1, B2M, G6PD, GAPDH, GUSB, PGK1, RPLPO, TFRC and TUBB) using the nCounter platform (NanoString Technologies; Seattle, Washington, USA). 22 Data were normalized using the housekeeping genes.
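The housekeeping-gene normalization mentioned above can be sketched as follows. The exact scheme used in the study is not specified, so this assumes the common nCounter practice of scaling each sample by the geometric mean of its housekeeping-gene counts; sample names, gene names, and counts are purely illustrative:

```python
import math

def normalize_counts(counts, housekeeping):
    """Scale each sample's raw counts by the geometric mean of its
    housekeeping-gene counts (a common NanoString normalization;
    illustrative only, not the study's documented pipeline)."""
    norm = {}
    for sample, genes in counts.items():
        hk = [genes[g] for g in housekeeping if g in genes]
        geo_mean = math.exp(sum(math.log(c) for c in hk) / len(hk))
        norm[sample] = {g: c / geo_mean for g, c in genes.items()}
    return norm

# Toy example: two samples, two housekeeping genes (ACTB, GAPDH).
counts = {
    "S1": {"PDCD1": 200.0, "ACTB": 1000.0, "GAPDH": 4000.0},
    "S2": {"PDCD1": 100.0, "ACTB": 500.0,  "GAPDH": 2000.0},
}
norm = normalize_counts(counts, ["ACTB", "GAPDH"])
# After normalization both samples have the same PDCD1 level (0.1),
# since S2's library is exactly half the depth of S1's.
```

Dividing by the geometric mean (rather than the arithmetic mean) damps the influence of any single outlier housekeeping gene.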
Gastric cancer specimens derived from clinical trial
A prospective, open-label, single-arm, phase 2 trial (NCT02589496) of advanced gastric cancer was conducted at Samsung Medical Center. The immune checkpoint inhibitor pembrolizumab (200 mg) was administered as a 30 min intravenous infusion every 3 weeks until documented disease progression, unacceptable toxicity, or up to 24 months. Tumor responses were evaluated every two cycles according to RECIST V.1.1 criteria. Toxicities were graded based on the National Cancer Institute Common Terminology Criteria for Adverse Events V.4.0. Tumor sample collection, eligibility criteria, PD-L1 IHC, MSI status determination, Epstein-Barr virus (EBV) in situ hybridization, tissue genomic analysis, and the RNA sequencing pipeline of this cohort were detailed in our previous research. 5

Other patient cohorts used in this study
Patient cohorts used in this study are summarized in online supplemental table S1. Seven genomic and transcriptomic data sets were downloaded and analyzed to determine the predictive capacity of the TMEscore and compare it with its counterparts: patients with metastatic urothelial cancer treated with an anti-PD-L1 agent (NCT02951767), 6 patients with metastatic melanoma and non-small-cell lung cancer treated with MAGE-3 agent-based immunotherapy (NCT00706238), 23 patients with advanced melanoma treated with a PD-1 blocker, 24 patients with advanced melanoma treated with various types of immunotherapy from The Cancer Genome Atlas of Skin Cutaneous Melanoma (TCGA-SKCM) cohort, 25 patients with melanoma treated with anti-CTLA-4 (cytotoxic T-lymphocyte-associated protein 4) or PD-1 (programmed cell death protein 1) antibody, 26 and a mouse model treated with anti-CTLA-4. 27
TMEscore evaluation, immune cell deconvolution and signature score estimation
For the gene expression matrix (normalized by RMA, TPM, FPKM or housekeeping genes), the expression of each gene in a signature was standardized so that its mean expression was 0 and its SD was 1 across samples. Then, principal component analysis (PCA) was performed, and principal component 1 (PC1) was extracted to serve as the gene signature score. This approach has the advantage of focusing the score on the set with the largest block of well-correlated (or anti-correlated) genes, while down-weighting contributions from genes that do not track with other set members. 6 15 As our previous study 15 indicated, the TMEscore of each patient was estimated by the formula TMEscore = ∑PC1i − ∑PC1j, where i indexes the signature scores of clusters whose Cox coefficient is positive, and j indexes those whose Cox coefficient is negative. The analytic code and package used to perform the TMEscore estimation are provided for non-commercial use at GitHub: https://github.com/DongqiangZeng0808/TMEscore. To characterize the metabolism, immune microenvironment and activation of other prevalent gene signatures in each tumor sample, multiple algorithms were applied to determine pathway activity using the IOBR package (https://github.com/IOBR/IOBR). 28 ImmuneScore, StromalScore, and tumor purity were assessed computationally in RNA-seq data using the ESTIMATE algorithm, 29 which uses gene expression signatures to infer the fraction of stromal and immune cells in tumor samples. Other computational algorithms and tools used to estimate the microenvironment are detailed in the online supplemental methods.
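The PC1-based signature score and the TMEscore formula described above can be sketched as below. This is a minimal pure-Python illustration, not the published TMEscore R package (which may differ in details such as eigenvector orientation and per-cluster weighting); the gene names and values are toy data:

```python
def pc1_signature_score(expr, genes, iters=200):
    """PC1-based signature score: z-score each gene across samples, then
    project samples onto the leading eigenvector (found here by power
    iteration) of the gene-gene covariance matrix.
    `expr` maps gene -> list of expression values across samples."""
    X = []
    for g in genes:                      # z-score each gene across samples
        v = expr[g]
        n = len(v)
        mu = sum(v) / n
        sd = (sum((x - mu) ** 2 for x in v) / n) ** 0.5
        X.append([(x - mu) / sd for x in v])
    m, n = len(X), len(X[0])
    # gene-gene covariance matrix
    C = [[sum(X[i][k] * X[j][k] for k in range(n)) / (n - 1)
          for j in range(m)] for i in range(m)]
    w = [1.0] * m                        # power iteration for leading eigenvector
    for _ in range(iters):
        w = [sum(C[i][j] * w[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    if sum(w) < 0:                       # eigenvector sign is arbitrary; fix it
        w = [-x for x in w]
    return [sum(w[i] * X[i][k] for i in range(m)) for k in range(n)]

def tme_score(expr, sig_a, sig_b):
    """TMEscore = PC1 score of the favorable signature (A) minus that of
    the unfavorable signature (B), per TMEscore = sum(PC1_i) - sum(PC1_j)."""
    a = pc1_signature_score(expr, sig_a)
    b = pc1_signature_score(expr, sig_b)
    return [x - y for x, y in zip(a, b)]

# Toy data: g1/g2 rise across three samples (signature A), g3 falls (signature B),
# so the combined TMEscore increases monotonically across the samples.
scores = tme_score({"g1": [1, 2, 3], "g2": [2, 4, 6], "g3": [3, 2, 1]},
                   ["g1", "g2"], ["g3"])
```

Because PC1 captures the dominant shared axis of the gene set, a gene that does not co-vary with the rest of the signature receives a small loading and contributes little to the score.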
Differential gene expression analysis
All differential gene expression analyses were conducted using the DESeq2 package. 30 Differential gene expression analysis was performed using a generalized linear model with the Wald statistical test, under DESeq2's assumption that the underlying gene expression count data follow a negative binomial distribution. Differentially expressed genes (DEGs) were considered for further analysis with a q value <0.05. The adjusted p value for multiple testing was calculated using the Benjamini-Hochberg correction. 31

Identification of TMEscore-relevant mutations and mutational signatures
The mutation MAF files were downloaded with TCGAbiolinks, 32 and the mutation status and mutation burden were inferred from the MAF files. The Mann-Whitney U test was adopted to define the significance of binary variables (wild type or mutated). We applied the Benjamini-Hochberg method to convert the p values to adjusted p values. 31 The mutational signature analysis was performed using the deconstructSigs package 33 in R, which selects combinations of known mutational signatures 34 that account for the observed mutational profile in each sample.
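The Benjamini-Hochberg correction used throughout the analyses can be sketched in a few lines; the p values below are illustrative only:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p values (q values):
    q_(i) = min over j >= i of (p_(j) * m / j), computed on the sorted
    p values and mapped back to the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices sorted by p
    q = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):          # walk from largest p to smallest
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        q[i] = prev
    return q

qs = benjamini_hochberg([0.01, 0.04, 0.03, 0.005])
# Sorted p values 0.005, 0.01, 0.03, 0.04 give raw q's 0.02, 0.02, 0.04, 0.04;
# mapped back to input order: [0.02, 0.04, 0.04, 0.02].
```

The backward minimum enforces monotonicity: a smaller p value can never end up with a larger q value than a bigger one.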
Functional and pathway enrichment analysis
Gene annotation enrichment analysis was performed with the R package clusterProfiler. 35 Enrichment p values were based on 1000 permutations and subsequently adjusted for multiple testing using the Benjamini-Hochberg procedure to control the false discovery rate (FDR). 31 Gene Ontology (GO) and KEGG terms were identified with a strict cut-off of p<0.01 and an FDR of less than 0.05. We also identified pathways that were upregulated and downregulated among groups by running a gene set enrichment analysis (GSEA) 36 of the adjusted expression data for all transcripts.
Single-sample gene-set enrichment analysis of tumor processes
To characterize the tumor processes and pathway activation status in each tumor sample, the ssGSEA algorithm 37 was applied to determine pathway activity using GO, 38 KEGG 39 and HALLMARK gene sets derived from MSigDB (V.6.2). 40 Other prevalent gene signature scores with respect to the TME, tumor-intrinsic pathways, and metabolism were calculated for each sample using the PCA algorithm in the IOBR package. 28

Differentially methylated probes analysis
Methylation data (β values of the Illumina Infinium HumanMethylation450 platform) of The Cancer Genome Atlas of Stomach Adenocarcinoma (TCGA-STAD) patients were obtained through TCGAbiolinks. 32 The β value reported by the 450K Illumina platform for each probe was set as the methylation level measurement for the targeted CpG site. Methylation data quality control, normalization, and filtering of redundant probes were conducted using the ChAMP pipeline. Differentially methylated probes (DMPs) were detected by the 'champ.DMP' function of the ChAMP package. 41 DMPs were considered for further analysis with a q value <0.05. The adjusted p value for multiple testing was calculated using the Benjamini-Hochberg correction. 31
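The single-sample enrichment idea used above can be illustrated with a simplified sketch in the spirit of ssGSEA (Barbie et al). This is not the exact ssGSEA implementation (which weights by absolute expression rank and normalizes across gene sets); it is a toy version showing the core mechanic of integrating the difference between in-set and out-of-set cumulative distributions over a ranked gene list:

```python
def ssgsea_score(expression, gene_set, alpha=0.25):
    """Simplified single-sample enrichment score for one sample.
    Genes are ranked by expression (high to low); the score integrates
    the difference between the rank-weighted in-set ECDF and the uniform
    out-of-set ECDF. Positive = set genes concentrated at the top."""
    ranked = sorted(expression, key=lambda g: expression[g], reverse=True)
    n, in_set = len(ranked), set(gene_set)
    # rank weights r^alpha for in-set genes, 0 for the rest
    weights = [(i + 1) ** alpha if g in in_set else 0.0
               for i, g in enumerate(ranked)]
    total_w = sum(weights)
    miss_step = 1.0 / (n - len(in_set & set(ranked)))
    es, hit_cdf, miss_cdf = 0.0, 0.0, 0.0
    for i, g in enumerate(ranked):
        if g in in_set:
            hit_cdf += weights[i] / total_w
        else:
            miss_cdf += miss_step
        es += hit_cdf - miss_cdf
    return es

# Toy sample: genes "a","b" highly expressed, "c","d" lowly expressed.
es_up = ssgsea_score({"a": 5, "b": 4, "c": 1, "d": 0}, {"a", "b"})    # > 0
es_down = ssgsea_score({"a": 5, "b": 4, "c": 1, "d": 0}, {"c", "d"})  # < 0
```

A gene set whose members cluster at the top of the ranking accumulates hit mass early and yields a positive score; a set at the bottom yields a negative one.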
Statistical analysis
The normality of the variables was tested by the Shapiro-Wilk normality test. For comparisons of two groups, statistical significance for normally distributed variables was estimated by an unpaired Student's t-test, and non-normally distributed variables were analyzed by the Mann-Whitney U test. For comparisons of more than two groups, the Kruskal-Wallis test and one-way analysis of variance were used as non-parametric and parametric methods, respectively. Correlation coefficients were computed by Spearman and distance correlation analyses. The χ² test and two-sided Fisher's exact test were used to analyze contingency tables. The cut-off value for each data set was determined from the association between survival outcome and signature score in each separate data set using the survminer package. The Kaplan-Meier method was used to generate survival curves for the subgroups in each data set, and the log-rank (Mantel-Cox) test was used to determine whether they differed significantly. HRs for univariate analyses were calculated using the univariate Cox proportional hazards regression model. The sensitivity and specificity of signature scores were depicted by the receiver operating characteristic (ROC) curve and quantified by the area under the ROC curve using the pROC package. 42 The 'roc.test' function of the pROC package was used to compare the area under the curve (AUC) or partial AUC of two correlated or uncorrelated ROC curves. All statistical analyses were conducted using R V.3.6.3 (https://www.r-project.org/), and p values were two-sided. P values of less than 0.05 were considered statistically significant.
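The AUC values reported throughout are equivalent to the Mann-Whitney U statistic normalized by the number of responder/non-responder pairs, which a short sketch makes explicit; the scores and labels below are illustrative, not study data:

```python
def roc_auc(scores, labels):
    """AUC of the ROC curve via its rank-statistic identity:
    AUC = P(score of a random responder > score of a random non-responder),
    counting ties as 1/2. Equivalent to the normalized Mann-Whitney U."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: responders (label 1) tend to have higher TMEscore-like values.
# One responder (0.4) is outranked by two non-responders, so 4 of the
# 2 x 3 = 6 pairs are correctly ordered: AUC = 4/6.
auc = roc_auc([0.9, 0.4, 0.8, 0.2, 0.6], [1, 1, 0, 0, 0])
```

This pairwise reading is why AUC is insensitive to monotone rescaling of the score: only the ordering of responders versus non-responders matters.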
TMEscore predicts ICB response of gastric cancer
To optimize the TME assessment for more efficient clinical translation, feature engineering (see online supplemental methods) was conducted in six ICB data sets (online supplemental table S1), reducing the TMEscore 15 signature genes from 244 to 44. As previous research suggested, 15 genes negatively associated with ICB response were enriched in the immune exclusion phenotype (EMT/TGF-β pathway), whereas immune-relevant genes were positively associated with treatment efficacy (figure 1A and online supplemental figure S1A). In several GC cohorts (online supplemental table S1), we found a consistent and close association between the 44-gene TMEscore and the prior 244-gene TMEscore (online supplemental figure S1B). Notably, the TMEscore served as a prognostic biomarker in an immunotherapy meta-cohort (GSE78220, 24 IMvigor210, 6 GSE93157, 43 Snyder et al 44 and TCGA-SKCM 25 ) (figure 1B: TMEscore, p=0.0001; online supplemental figure S1C: TMEscoreA, p<0.0001; and online supplemental figure S1D: TMEscoreB, p=0.0396), and as a predictive biomarker of ICB response in several independent cohorts (online supplemental figure S1E-L and online supplemental table S2). The AUCs in eight independent data sets indicated that the predictive value of the simplified TMEscore (44 genes) was enhanced after dimension reduction (online supplemental figure S1E-L). In the advanced GC cohort receiving anti-PD-1 immunotherapy, 5 the TMEscore yielded the highest AUC (AUC=0.891), surpassing other prevalent biomarkers, including MSI status, TMB, CPS and EBV infection (AUC=0.708, 0.672, 0.817, and 0.708, respectively) (figure 1C and online supplemental table S3), as well as several transcriptome-based predictive counterparts, comprising the gene expression profile score (GEPs), 18 ImmuneScore, 29 CD8+ T effector score, and pan-fibroblast TGF-β response signature (Pan-F-TBRs) 6 (figure 1D).
We further measured the expression of TMEscore genes in the tumor microenvironment, using the NanoString nCounter platform 22 and RNA isolated from tumor tissue obtained at baseline from 48 patients with advanced gastric cancer from multiple centers before ICB (table 1 and online supplemental table S4). The TMEscore achieved an overall accuracy of AUC=0.877, higher than other prevalent gene signature predictors, 6 18 21 capturing almost all true responders (figure 1E,F). Consistent with our previous study, 15 regressive tumors (complete response (CR)/partial response (PR)) exhibited markedly higher TMEscoreA than stable and progressive tumors (progressive disease (PD)/stable disease (SD)), and TMEscoreB was negatively associated with treatment efficacy in advanced GC (figure 1F; p values for TMEscore, TMEscoreA and TMEscoreB were 6.1×10−6, 0.047 and 0.00046, respectively), implicating stromal activation as a critical mechanism of resistance to ICB. 6 15 TMEscoreB (stromal-relevant) genes were the more precise biomarker and were significantly associated with treatment resistance, while TMEscoreA (immune-relevant) genes were highly expressed in a few non-responders (SD/PD) (figure 1G,H).
TMEscore predicts efficacy of checkpoint immunotherapy alone or in combination with chemotherapy or an angiogenesis inhibitor
To provide a precise map for understanding TMEscore performance in the context of mono- and combinational immunotherapy, we further explored the NanoString results of the 48-patient gastric cancer cohort. The expression of PD-L1 was predominantly enriched in the responsive subset (CR/PR) relative to the progressive counterparts (figure 2A-C and online supplemental table S5). Intriguingly, PD-L2 and TIM3 were significantly higher in non-responsive tumors, suggesting that upregulation of other corresponding or bypass checkpoint pathways may contribute to resistance to PD-1 blockade (figure 2B-D and online supplemental table S5), which reportedly induces stromal activation and T-cell exclusion. 6 Additionally, SYNPO was reported to be upregulated during CAF activation, 45 a critical mechanism of ICB resistance.
The clinical benefit of ICB monotherapy for advanced gastric cancer is limited, and recent clinical trials such as CheckMate-649 have demonstrated that combinations of ICBs with chemotherapy, anti-vascular targeted therapy or other molecular targeted therapies significantly improve treatment outcomes. 46 47 Consequently, there is a pressing need for biomarkers for patient selection for combined anti-PD-1 immunotherapy and chemotherapy. Among the multicenter GC data, 19 patients received ICB monotherapy, and 29 patients were treated with ICBs combined with chemotherapy or other inhibitors (table 1). We systematically evaluated the aforementioned biomarkers in both the ICB monotherapy and combination treatment settings. The majority of ICB-relevant genes and immune-relevant signatures were positively related to a favorable mono-immunotherapy response, corroborating former discoveries (figure 2E,F and online supplemental figure S2A,B), whereas their predictive efficacy declined significantly in the combination therapy subset, especially for signatures related to immune activation (figure 2G,H). However, the TMEscore still harbored robust predictive capacity in both settings (figure 2G,H), possibly attributable to the essential influence exerted by stromal activation during synergic treatment (online supplemental figure S2C,D). A comparable trend of PD-L2 and TIM3 expression was also exhibited in the synergic therapy setting. Their upregulation in progressive patients suggested potentially pivotal molecular characteristics in shaping tumor immune evasion (figure 2G,I), which also implied synchronous upregulation of immune checkpoint-pertinent genes, indicating that this subset of patients may be latent candidates to benefit from PD-L2 or TIM3 pathway inhibition.
We depicted a landscape of the TME signature score, clinicopathological features, and molecular characterization in patients with metastatic GC treated with anti-PD-1 immunotherapy 5 to investigate factors potentially associated with the treatment efficacy of ICB. We observed that patients with better responses were more likely to possess EBV and MSI-H molecular subtypes but were rarely enriched in the chromosomal instability (CIN), genomically stable (GS), and EMT molecular subtypes (figure 3A; EBV and MSI-H: responders (n=9), non-responders (n=0); GS and CIN: responders (n=3), non-responders (n=33); p=2.5×10−7, Fisher's exact test). Consistent with our recent research 15 in the TCGA-STAD and ACRG cohorts, the TMEscore was significantly higher in patients with the MSI-H and EBV subtypes relative to CIN and GS (figure 3B, p=0.003), suggesting that the predictiveness of the TMEscore was largely attributable to molecular phenotype stratification. We next examined the predictive capacity of gene signatures and prevalent biomarkers in stratifying patients with the EBV and MSI-H molecular subtypes, which indicate better responses to ICBs. 48 49 ROC analyses indicated that the TMEscore (AUC=0.895) was superior in predicting the EBV and MSI-H molecular subtypes, compared with MSI status, TMB, CPS, EBV status, GEPs, ImmuneScore, Pan-F-TBRs, and Immune Checkpoint score (AUC=0.778, 0.781, 0.797, 0.708, 0.847, 0.646, 0.764, 0.767, respectively; online supplemental figure S2F-H and online supplemental table S7).
The TMEscore was significantly correlated with tumor mutation burden in both data sets (TCGA-STAD: p=4.4×10−16, figure 3J; ACRG: p=8.6×10−11, online supplemental figure S3D) and with predicted neoantigen load in the TCGA-STAD cohort (p=2.5×10−11; online supplemental figure S3E). Collectively, the EBV subtype remained at a low level of TMB and neoantigens with a high TMEscore and immune-associated signatures in Pan-Cancer cohorts (online supplemental figure S4A-G and online supplemental table S8). As shown by previous research 48 49 on the GC cohort treated with ICBs, 5 patients with EBV infection, as well as the MSI-H phenotype, had an increased potential to benefit from ICB treatment. These observations further confirmed that TMB, as a widely used predictive biomarker, 50 is incapable of identifying patients with GC of the EBV subtype and tumors with virus infection (online supplemental figure S5A-F), which also benefit from immunotherapy. As expected, the TMEscore could identify the EBV and MSI subtypes among all patients in the TCGA-STAD and ACRG cohorts with significantly higher accuracy than TMB, GEPs, 18 Pan-F-TBRs, 6 and Immune checkpoint score. 33

ARID1A and PIK3CA deficiency potentiate therapeutic antitumor immunity in gastric cancer
Somatic gene mutations can alter the vulnerability of cancer cells to T cells and T cell immunotherapies. 44 51 52 We sought to uncover the immunogenomic determinants of therapeutic response and tumor immune microenvironment activation in GC in two large patient cohorts (TCGA-STAD and ACRG). Mutations associated with the TMEscore were identified using the Wilcoxon test and Fisher's exact test (figure 4A and online supplemental table S9). Our analyses highlighted that mutations of ARID1A and PIK3CA (figure 4A), whether evaluated continuously (figure 4B,C) or binarily (online supplemental table S9), were markedly correlated with TMEscore levels in the TCGA-STAD cohort, which was verified in the ACRG cohort (online supplemental figure S6A).
Meanwhile, TMB was divided into high-TMB and low-TMB groups (cut-off=400; online supplemental figure S6B) to analyze the relationship between the TMEscore and ARID1A or PIK3CA mutations. As shown in online supplemental figure S6C,D, patients with ARID1A or PIK3CA mutations exhibited a significantly higher TMEscore in the low-TMB group, whereas no significant trend was observed in the high-TMB group. These results suggested that under low-TMB conditions, both ARID1A and PIK3CA mutations are associated with TME activation, while under high-TMB conditions, the effect of ARID1A and PIK3CA mutations might be masked by abundant mutations generating increasing neoantigens that further activate the TME. PIK3CA is the most commonly mutated oncogene across all solid tumors. 53 ARID1A deficiency, also a frequent mutation in various malignancies, has been reported to contribute to compromised mismatch repair (MMR), increased mutagenesis, and a microsatellite instability genomic signature, and may cooperate with anti-PD-L1 therapy. 54 Notably, we further investigated the specific mutation locations to identify recurrent mutations with top mutation frequencies in binary TMEscore settings, visualizing the results with trackViewer. 55 Intriguingly, p.D18550Tfs*33 and p.F2141Sfs*59 of ARID1A were highlighted in high-TMEscore tumors (figure 4D) and statistically correlated with TMEscore levels (p=0.03; figure 4E and online supplemental table S10). Gastric cancers with PIK3CA p.E545K and p.H1047R mutations were prominently enriched in the high-TMEscore group (online supplemental figure S7A and online supplemental table S10). However, a limited statistical difference was observed in the continuous TMEscore despite the significant discrepancy between mutated and wild type (p=2.7×10−8; online supplemental figure S7B).
Additionally, the mutation rates of ARID1A and PIK3CA in the TCGA-STAD cohort were higher in the EBV and MSI molecular subtypes, which were correlated with an elevated TMEscore and immunotherapeutic response as compared with the CIN and GS subtypes (ARID1A: p<2.2×10−16; PIK3CA: p<2.2×10−16; χ² test; online supplemental figure S7C,D). We further found that ARID1A-inactivating mutation in the low-TMB group was correlated with upregulated immune checkpoint, CD8+ T effector, and antigen presentation processes (online supplemental figure S8A), and with the cellular response to glutamate metabolism (online supplemental figure S8B), collectively suggesting higher T-cell infiltration and potential benefit from ICB. Two recent studies indicated that mutations of signaling pathways could serve as immunotherapy biomarkers 56 and suggested combination therapy opportunities. 52 The current study demonstrated that pathway mutations derived predominantly from the MSI molecular subtype (figure 4F and online supplemental table S11), with significant mutation accumulation in almost all pathways in the high-TMEscore fraction (figure 4F and online supplemental table S11). Nevertheless, in accordance with prior results (online supplemental figure S7D), a higher PI3K pathway mutation frequency was also observed in the EBV subtype in comparison with the GS and CIN subtypes, suggesting a latent interplay between EBV infection and the PI3K signaling pathway (online supplemental figure S9A and online supplemental table S11), which may partially explain the predominant increase of the TMEscore in EBV-infected patients (figure 3F and online supplemental figure S9B). Previous studies indicated that the interaction of PIK3CA mutations and EBV protein products may activate the PI3K/AKT pathway, which might be an initiator of tumorigenesis and progression.
PIK3CA mutation revealed high intratumoral heterogeneity, characterized by three to five different PIK3CA genotypes (including wild type) in EBV-positive gastric cancer. 57 Additionally, analyzing mutation signatures in the Catalog Of Somatic Mutations In Cancer 34 indicated an intimate correlation between the TMEscore and the mismatch repair-associated signature 6 (online supplemental figure S9C and online supplemental table S12). Collectively, large-scale data analyses of the gastric TME elucidated the estimation of ARID1A and PIK3CA mutation status as a potential biomarker for immunotherapy strategies in GC.

(Figure 4 legend, excerpt: (E) The ARID1A recurrent mutation is correlated with a higher TMEscore (Kruskal-Wallis test, p=9×10−11). (F) The landscape of intrinsic pathway mutations (rows) characterized for each sample (columns). Column annotations represent OS status (live, dead); molecular subtype (chromosomal instability (CIN), Epstein-Barr virus (EBV), genomically stable (GS) and microsatellite instability (MSI)); and tumor microenvironment (TME) subtype (high, low). The TMEscore is displayed in the top panel. Genomic mutations were limitedly enriched in the EBV molecular subtype, which exhibited a high TMEscore. Colors (blue to red) represent the corresponding expression levels (low to high). WT, wild type; OS, overall survival.)
TME-associated metabolic characteristics
Given the intriguing metabolic regulation observed across ARID1A-mutant statuses, we further explored transcriptomic profiles and dissected the latent intrinsic mechanisms contributing to the crucial predictive capacity of the TMEscore. Metabolic signatures were estimated by the PCA methodology 6 and comprehensively investigated in the TCGA-STAD cohort. Correlation analysis highlighted that kynurenine, purine and cysteine metabolism were activated in the high-TMEscore subset, while glycogen metabolism, transsulfuration, and glycine-serine metabolism were significantly upregulated in the low-TMEscore group (figure 5A,B). Statistical analysis suggested that kynurenine metabolism was closely correlated with a high TMEscore (p=2.0×10−53, r=0.702; figure 5C and online supplemental table S13) and with the immunotherapy-favorable molecular subtypes EBV and MSI-H (Kruskal-Wallis, p=3.3×10−10; figure 5C). Downregulated kynurenine metabolism was also observed to suggest T-cell exclusion, which may indicate insensitivity to ICB therapy (figure 5E). Kynurenine metabolism may therefore be a promising target, for example with an IDO1 inhibitor, to restore tumor-restraining T-cell immunogenicity and thereby promote ICB therapeutic efficacy in gastric cancer. 58 We observed that glycogen metabolism was significantly activated in low-TMEscore tumors and immune-exclusive molecular subtypes in both the TCGA-STAD and ACRG cohorts (figure 5D and online supplemental figure S10A-F), suggesting that it may be correlated with the immune exclusion phenotype (figure 5E and online supplemental figure S10G,H) and mediate treatment resistance to immunotherapy. Consistently, Curtis et al indicated that the interaction between cancer cells and CAFs supported glycogenolysis, which funneled into glycolysis, leading to increased proliferation, immune evasion, and metastasis of cancer cells. 59 Together, we identified a collection of metabolic characteristics and biological processes associated with the TME, reflecting the intricacy of the TME and indicating potential combination therapy opportunities.
Methylation regions correlate with immune activity
A prior study 60 demonstrated that a high m6Ascore indicates an immune-exclusion TME phenotype, stromal activation, decreased survival, decreased neoantigen load, and inferior response in GC. We therefore attempted to identify the epigenetic immunomodulation involved in antitumor immunity and tumor immune editing, which may be fundamental for understanding the inflammatory reactions that occur in these diseases. Notably, a comprehensive investigation into the DNA methylation landscape suggested that demethylation of VAMP8 was enriched in the low-TMEscore cluster, and demethylation of ATG7 in the high-TMEscore cluster (figure 5F-I and online supplemental table S14). Intriguingly, further exploration of the corresponding methylation regions revealed that the cg04877910, cg12542933, cg05656364, cg05486094 and cg20056908 probes of VAMP8 methylation were consistently negatively associated with a high TMEscore and the MSI and EBV molecular subtypes, whereas the cg23752985 probe of VAMP8 methylation harbored a relatively diverse distribution across molecular subtypes and correlations with the TMEscore (online supplemental figure S11A,B). Enrichment of differentially methylated genes highlighted the vital role VAMP8 methylation plays in the TME regulatory network via upregulating immune pathways, comprising regulation of leukocyte activation, protein localization to the membrane, antigen processing and presentation, coated vesicle, and recycling endosome pathways (online supplemental figure S11C), indicating the crucial role VAMP8 plays in complex gene interactions and crosstalk across extensive signaling pathways. Additionally, demethylation of ATG7, a gene marker of autophagy, was significantly correlated with the TMEscore (online supplemental figure S11D).
Further analyses of the relationships among the discovered ATG7-associated signatures (positive regulation of autophagy) indicated that demethylation of ATG7 contributed to immune exclusion in the TME, with elevated TMEscoreB and fibroblast infiltration in the TCGA-STAD and ACRG cohorts (online supplemental figure S11E,F). Collectively, DNA methylation, such as the different methylation regions of VAMP8 and ATG7, may offer a lens into the complexity and diversity of the TME and the determination of immune activity, and might thereby assist in optimizing immunotherapy strategies.
DISCUSSION
Our studies leveraging multi-omics data highlight TME evaluation (TMEscore) as a predictor of tumor immunogenicity, objective response rate, and overall survival in six independent cohorts treated with ICBs. Moreover, combination therapy of ICB with chemotherapy or angiogenesis inhibitors still lacks functional molecular biomarkers. Notably, based on a multicenter clinical gastric cancer cohort, we found that TMEscore is robust in predicting treatment efficacy in the context of checkpoint immunotherapy alone or in combination with chemotherapy or an angiogenesis inhibitor, settings in which the predictive accuracy of immune-activation-related signatures markedly declines.
Given the promising predictive value of TMEscore, we systematically investigated its underlying mechanisms to reinforce a refined understanding of the interplay between tumor-intrinsic features and the TME and to offer novel, precise methodologies to accelerate precision immunotherapy. Selection strategies for optimal biomarkers remain controversial owing to complicated clinical applications. 9 10 For example, although the PD-L1 expression level indicates therapeutic benefit, patients with PD-L1 <1% also responded to ICBs. 1 In the current study, the TMEscore substantially outperformed counterparts including PD-L1 abundance, TMB, and MSI-H in discriminating response to ICBs. 9 10 The merit of TMEscore is mainly attributable to accurate identification of immune microenvironment activation, especially tumors with high CD8+ T cell infiltration, immune exclusion, and EBV infection status. Notably, EBV infection is commonly accompanied by a low TMB but is a unique marker with high potential for response to ICB in GC, 5 48 as consistently confirmed by Subudhi et al in the setting of prostate cancer. 61 Although TMB is a widely recommended biomarker, specific alterations usually initiate carcinogenesis and neoantigen generation, yet their roles in sensitivity to immune therapy remain obscure. We identified mutations of ARID1A and PIK3CA associated with immune activation facilitating checkpoint immunotherapy. ARID1A is a component of the SWI/SNF chromatin remodeling complex 62 that is frequently mutated in GC. 11 12 ARID1A deficiency closely correlates with ICB response, 54 potentially attributable to impaired MMR and elevated PD-L1 expression. Our study newly proposed that ARID1A deficiency reshaped the TME, with two specific ARID1A mutation locations, p.D18550Tfs*33 and p.F2141Sfs*59, harboring a markedly higher TMEscore.
The current work also indicated a potential interaction between ARID1A 48 and PIK3CA 11 mutations and EBV infection, partially explaining the elevated TMEscore in the EBV subtype.
Metabolically, we discovered that activation of kynurenine metabolism was correlated with EBV infection and MSI-H status and subsequently upregulated immunosuppressive markers such as PD-L1 and IDO. Consistently, a recent report indicated a mechanistic link between kynurenine metabolism and the immunosuppressive microenvironment. 63 64 Therefore, inhibition of kynurenine metabolism may be a potential target for combination therapy to improve the efficacy of ICB. 58 DNA methylation guides the epigenetic regulation of genes not only in cancer cells but also in immune and stromal cells; hypomethylation of specific genes could therefore modify TME components and their interactions. 65 Xiao et al have emphasized the contribution of methylation of the specific gene SOCS1 in CAFs to reprogramming of the TME induced by PDAC cells. 66 Similarly, our analysis of the DNA methylation landscape highlighted another gene methylation, VAMP8, correlated with the TME and immune-activity-related pathways. Additionally, extensive exploration of different methylation regions of VAMP8 exhibited inverse trends in different TMEscore groups, thereby offering a novel understanding of the complex interplay linking methylation with the TME. Macroautophagy is an essential cellular catabolic process required for survival under conditions of starvation. A recent study indicated that loss of ATG7 in cancer cells, which disrupts autophagy, can enhance antitumor immune responses. 67 Our data suggest that ATG7 demethylation was closely associated with immune exclusion and CAF infiltration, which may provide insights into possible mechanisms.
Although the TMEscore presents high sensitivity in predicting immunotherapy efficacy, its application may be limited across diverse cancer types. 15 Tumor heterogeneity and tissue specificity are presumed to be the main reasons and could also be interpreted through the various immune microenvironments. We are collecting a large number of gastric cancer samples obtained before immunotherapy to determine an appropriate TMEscore cut-off value for subsequent clinical practice. To develop TMEscore into a clinical-grade immunotherapy biomarker, we are carrying out two clinical trials of gastric cancer treated with ICBs (NCT04850716, NCT04850729).
CONCLUSIONS
Collectively, we optimized a TME evaluation tool that may serve as a robust biomarker and integrated it into an open-source R package for further application in clinical implementation. The predictive capacity of TMEscore was verified in two advanced gastric cancer cohorts, which highlighted the predictive efficacy of tumor microenvironment evaluation. The intrinsic features involving ARID1A and PIK3CA mutations, kynurenine metabolism, glycogen metabolism, and ATG7 and VAMP8 methylation provide new insight into the potential mechanisms of TMEscore-guided precision immunotherapies.

Competing interests None declared.
Patient consent for publication Not required.
Ethics approval Patients' samples were collected and analyzed after informed consents were obtained and approved by the Human Ethics Committee (SYSEC-KY-KS-2019-171) of Sun Yat-sen Memorial Hospital, Sun Yat-sen University. Written informed consent was obtained from individual or guardian participants.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available in a public, open access repository. Data are available upon reasonable request. Data may be obtained from a third party and are not publicly available. The raw sequencing data have been deposited at the European Nucleotide Archive and are available under accession number RJEB25780. The analytic code and package used to estimate the TMEscore and prevalent signature are provided for non-commercial use at GitHub: https://github.com/DongqiangZeng0808/TMEscore and https://github.com/IOBR/IOBR. A detailed README file is also available, complete with examples of how to use the package.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See https://creativecommons.org/licenses/by/4.0/.
Implications of the detection of sub-PeV diffuse γ rays from the Galactic disk apart from discrete sources
Very recently, the Tibet-ASγ collaboration reported the detection of γ rays from the Galactic disk in the energy range of 100 TeV – 1 PeV. Remarkably, many of these γ rays were observed apart from known very high energy (E > 100 GeV) γ-ray sources. These results are best understood if these diffuse γ rays: 1) were produced by a conventional rather than an exotic (i.e. dark matter decay or annihilation) process, 2) have a hadronic rather than a leptonic origin, 3) were produced in impulsive rather than stable sources or, alternatively, in optically thick sources. In addition to that, the detection of the sub-PeV diffuse γ rays implies a limit on the flux of neutrinos from the Galactic disk and a lower limit on the rigidity of the cutoff in the Galactic cosmic ray spectrum.
γ rays of very high (E > 100 GeV) and super high energy (E > 100 TeV) may be detected with ground-based installations such as imaging atmospheric Cherenkov telescopes (IACT) [13][14][15][16] and air shower arrays (e.g. [17][18][19]). Very recently, the Tibet-ASγ collaboration reported the discovery of diffuse γ rays concentrating towards the Galactic plane [20] (hereafter A21). This observation has a number of interesting and important theoretical implications, some of which are considered below. In particular:

1. a conventional (astrophysical) production mechanism of these γ rays is favoured over an exotic mechanism (i.e. from dark matter decay or annihilation) (Sect. II)
2. the hadronic production mechanism is more likely than the leptonic one (Sect. III)
3. the high fraction of γ rays detected apart from discrete sources implies that the cosmic ray acceleration sites are either optically thick to these γ rays or that these accelerators were more active in the past than now (Sect. IV)
4. galactic cosmic ray models with a very low energy of the proton "knee" are excluded if the change in the spectral index of elemental spectra is large enough (Sect. V)

* timur1606@gmail.com

In addition, we note that diffuse Galactic γ rays may help constrain the Galactic component of IceCube neutrinos (e.g. [21]).
II. CONVENTIONAL OR EXOTIC PRODUCTION MECHANISM?
Using the model of [22] (hereafter LV18), which assumes the production of diffuse γ rays by cosmic rays in hadronuclear interactions, A21 show that their data are reasonably well approximated with the LV18 model. However, one could speculate that the flux of γ rays reported in A21 could be produced by decay or annihilation of dark matter particles. In this section we assume that the large-scale distribution of Galactic dark matter follows the Navarro-Frenk-White (NFW) density distribution [23].
LV18 proposed a test of the dark matter origin of Galactic diffuse γ rays using their distribution in Galactic latitude (see Fig. 17 of LV18 and associated text). Following the approach of LV18, we calculated the latitude distributions for the cases of dark matter annihilation and decay and compared these with the data presented in A21 for the 158-398 TeV energy bin (Fig. 1). Here, for simplicity, the effects of the non-uniform sky exposure of the Tibet-ASγ array and of γ-ray absorption in the Galaxy [24][25][26] were neglected. Estimates show that a proper account of the exposure non-uniformity and the γ-ray absorption results in a broadening of the latitude distribution.
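To make the geometry of this test concrete, the sketch below numerically integrates an NFW profile along lines of sight at two Galactic latitudes; decay flux scales with the column of ρ, annihilation with the column of ρ². The profile parameters (scale radius, solar distance, normalization) are illustrative assumptions, not the values used in LV18 or A21:

```python
import math

# Illustrative NFW parameters (assumptions, not the LV18/A21 values)
RS_KPC = 20.0      # NFW scale radius, kpc
R_SUN_KPC = 8.5    # galactocentric distance of the Sun, kpc
RHO_LOCAL = 0.4    # local DM density, GeV/cm^3 (sets normalization only)
RHO_S = RHO_LOCAL * (R_SUN_KPC / RS_KPC) * (1 + R_SUN_KPC / RS_KPC) ** 2

def nfw_density(r_kpc):
    """NFW profile rho(r) = rho_s / ((r/rs)(1 + r/rs)^2)."""
    x = max(r_kpc, 1e-3) / RS_KPC  # small floor avoids the r = 0 cusp
    return RHO_S / (x * (1.0 + x) ** 2)

def los_integral(b_deg, power, l_deg=0.0, s_max=60.0, n=4000):
    """Integrate rho^power along the line of sight at Galactic (l, b).
    power=1 mimics decay, power=2 annihilation (arbitrary units)."""
    b, l = math.radians(b_deg), math.radians(l_deg)
    total, ds = 0.0, s_max / n
    for i in range(n):
        s = (i + 0.5) * ds
        # galactocentric radius at distance s from the Sun
        r = math.sqrt(R_SUN_KPC ** 2 + s ** 2
                      - 2.0 * R_SUN_KPC * s * math.cos(b) * math.cos(l))
        total += nfw_density(r) ** power * ds
    return total

# Latitude profiles toward the inner Galaxy (l = 0):
decay_ratio = los_integral(1.0, 1) / los_integral(20.0, 1)
annih_ratio = los_integral(1.0, 2) / los_integral(20.0, 2)
# The rho^2 (annihilation) signal is far more concentrated toward the
# plane than the rho (decay) signal, which is the basis of the test.
```

This illustrates why the decay hypothesis produces a latitude distribution that is "far too broad": the single power of ρ falls off slowly away from the plane compared with ρ².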
The decay model poorly fits the data: the resulting latitude distribution is far too broad. Even for annihilating dark matter, this distribution does not provide a good fit to the data. Moreover, the annihilation model is less attractive in view of the unitarity limit on the mass of dark matter particle [27]. Detailed constraints on dark matter decay time / annihilation cross section are in preparation and will be published elsewhere.
III. HADRONIC OR LEPTONIC γ RAYS?
Cosmic rays excite turbulence in the interstellar medium, inhibiting the cosmic ray transport outside of their sources [28]. Assuming the diffusion coefficient according to eq. (3) of [29] with r z = 10 pc, r t = 100 pc, β = 1, δ = 0.35, R 0 = 4 GV, D 0 = 4.0 × 10 28 cm 2 /s, D z = D 0 /100, we estimate the typical time needed to travel the central 20 pc as ∼ 100 years (this time is somewhat greater for the greater radius of 100 pc, ∼ 200 years). The typical synchrotron cooling time for electrons is ≈ 2(B/100µG) −2 (E e /500T eV ) −1 years (e.g. [30]), i.e. about 100 years for E e = 500 TeV and B = 15µG. We conclude that for the typical distance to the source in excess of 1 kpc these electrons would be confined inside a 1 • circle as seen by a distant observer, resulting in a very sharp concentration of γ-rays near discrete sources, in stark contradiction to the results of A21 [31]. We note that a similar qualitative argument was put forward in A21, without, however, quantitative estimates. Additional constraints could be obtained from the balance of energy gain and losses during the acceleration process.
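The two numbers driving this argument can be reproduced directly from the scalings quoted above; a minimal sketch (the 20 pc confinement-region size and 1 kpc source distance are the illustrative values used in the text):

```python
import math

def t_sync_years(B_uG, E_TeV):
    # Synchrotron cooling-time scaling quoted in the text:
    # t ~ 2 (B/100 uG)^-2 (E/500 TeV)^-1 years
    return 2.0 * (B_uG / 100.0) ** -2 * (E_TeV / 500.0) ** -1

# ~90 years for B = 15 uG electrons at E = 500 TeV
t = t_sync_years(15.0, 500.0)

# Apparent angular radius of a 20 pc confinement region seen at 1 kpc
theta_deg = math.degrees(math.atan(20.0 / 1000.0))  # ~1.1 degrees
```

Since the cooling time (~100 yr) is comparable to the diffusive escape time, sub-PeV electrons cannot travel far from their sources, and the resulting γ-ray emission would appear within a roughly 1° circle around each source.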
IV. THE NATURE OF COSMIC RAY SOURCES
Now consider the escape of protons and nuclei from the sources. The typical escape time is ∼ 100 years (see the previous section). The typical acceleration time up to the knee [32][33][34][35][36] is t_acc ∼ D/v_s² = (cE)/(3eBv_s²), where v_s is the shock front velocity. For stable Galactic hadronic PeVatrons such as star-forming regions [37][38][39][40][41], t_acc ∼ 10³ years or even more.
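As a rough cross-check of the t_acc ∼ 10³-year figure, the Bohm-limit estimate can be evaluated numerically in SI units (D = E/(3eB), the SI counterpart of cE/(3eB) in Gaussian units). The magnetic field and shock speed below are illustrative assumptions (B = 100 µG, v_s = 1000 km/s), not values taken from the cited works:

```python
# Order-of-magnitude Bohm-diffusion estimate of t_acc ~ D / v_s^2
E_J = 3e15 * 1.602e-19   # 3 PeV proton energy in joules
e_C = 1.602e-19          # elementary charge, C
B_T = 100e-6 * 1e-4      # 100 uG in tesla (1 G = 1e-4 T)
v_s = 1.0e6              # shock front speed, m/s (1000 km/s)

D = E_J / (3.0 * e_C * B_T)       # Bohm diffusion coefficient, m^2/s
t_acc_yr = D / v_s ** 2 / 3.156e7  # acceleration time, years
# ~3e3 years, consistent with the ~10^3-year scale quoted in the text
```

This is some 30 times longer than the ∼100-year escape time estimated above, which is the tension developed in the rest of this section.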
The typical lifetime of 3 PeV cosmic rays in the Galactic volume is ∼ 5 × 10⁴ years (e.g. [42]). The typical contrast of gas densities between the sources and the Galactic volume is about 10²-10³. The number of produced γ rays is proportional to the gas concentration and the time spent inside particular regions (i.e. inside the discrete sources and inside the Galactic disc, but outside the discrete sources). We conclude that the time spent in sources should be less than several hundred years in order not to overproduce γ rays near the discrete sources, in stark contrast to the above estimates. Therefore, the sources are likely to be impulsive or optically thick for > 100 TeV γ rays.
V. COSMIC-RAY KNEE CONSTRAINED WITH γ RAYS
The spectrum of γ rays measured with the Tibet-ASγ array, together with several model curves, is shown in Fig. 2. For the model curves, the primary proton spectrum was assumed to follow eq. (2) of [43]. Only primary protons were considered. The black curve corresponds to a proton spectral index below the knee of γ₁ = 2.7, ∆γ = 2, a knee energy E_br = 1 PeV, and ε_c = 10. The blue curve is for the same parameters except E_br = 3 PeV; the magenta curve is for the same parameters as the black curve except ∆γ = 1. Remarkably, the results for smaller ε_c, down to 1, are similar to those presented in the graph. We conclude that relatively small values of E_br < 1 PeV are excluded for sufficiently large values of ∆γ. We note that much better constraints could likely be achieved using the data of the LHAASO experiment [44].
VI. CONCLUSIONS
The discovery of diffuse super-high energy γ rays with Tibet-ASγ opened a new area of study in γ-ray astronomy, capable of constraining dark matter properties, probing the Galactic neutrino component, and unveiling the nature of cosmic ray sources. New data are expected from the LHAASO experiment shortly [45]. Directions around γ rays registered with Tibet-ASγ (and, hopefully, LHAASO) could be studied with the existing IACT arrays H.E.S.S., MAGIC, and VERITAS, as well as with the forthcoming CTA array [16,46], in order to put further constraints on the possible contribution of discrete sources to the diffuse γ-ray flux.
ACKNOWLEDGMENTS
The author is grateful to Prof. P. Lipari and Dr. S. Vernetto for sharing their model of γ-ray absorption in the Galaxy (Ref. [24]). Helpful discussions with Prof. I.V. Moskalenko and Prof. S.V. Troitsky are gratefully acknowledged. This work is supported in the framework of the State project "Science" by the Ministry of Science and Higher Education of the Russian Federation under contract 075-15-2020-778. All graphs in the present paper were produced with the ROOT software toolkit [47]. This research has made use of the NASA ADS bibliographical system.
Implantable cardioverter defibrillator-specific rehabilitation improves health cost outcomes: Findings from the COPE-ICD randomized controlled trial
Objective: The Copenhagen Outpatient ProgrammE – implantable cardioverter defibrillator (COPE-ICD) trial included patients with implantable cardioverter defibrillators in a randomized controlled trial of rehabilitation. After 6-12 months significant differences were found in favour of the rehabilitation group for exercise capacity, general and mental health. The aim of this paper is to explore the long-term health effects and cost implications associated with the rehabilitation programme; more specifically, (i) to compare implantable cardioverter defibrillator therapy history and mortality between rehabilitation and usual care groups; (ii) to examine the difference between rehabilitation and usual care groups in terms of time to first admission; and (iii) to determine attributable direct costs. Methods: Patients with first-time implantable cardioverter defibrillator implantation (n = 196) were randomized (1:1) to comprehensive cardiac rehabilitation or usual care. Outcomes were measured by implantable cardioverter defibrillator therapy history from patient records and national register follow-up on mortality, hospital admissions and costs. Results: No significant differences were found after 3 years for implantable cardioverter defibrillator therapy or mortality between rehabilitation and usual care. Time to first admission did not differ. The cost of rehabilitation was 335 USD/276 Euro per patient enrolled in rehabilitation. The total attributable cost of rehabilitation after 3 years was -6,789 USD/-5,593 Euro in favour of rehabilitation. Conclusion: No long-term health outcome benefits were found for the rehabilitation programme. However, the rehabilitation programme resulted in a reduction in total attributable direct costs.
Comprehensive cardiac rehabilitation that includes both exercise training and psycho-educational components is recommended for patients with various heart conditions (1). However, evidence from studies of patients with complex conditions, such as those with an implantable cardioverter defibrillator (ICD), is sparse (2). The Copenhagen Outpatient ProgrammE – implantable cardioverter defibrillator (COPE-ICD) trial, initiated in 2007, included 196 ICD patients in a randomized controlled trial (RCT) on rehabilitation. The comprehensive cardiac rehabilitation intervention consisted of an exercise training component and a psycho-educational component. Primary and secondary outcome analyses after 6-12 months showed significantly increased VO2 after exercise training compared with usual care (mean 23.0 ml/min/kg (95% confidence interval (95% CI) 20.9-22.7) vs 20.8 ml/min/kg (95% CI 18.9-22.7) in the control group (p = 0.004)). Furthermore, comprehensive cardiac rehabilitation significantly increased general health and mental health compared with usual care (3).

Rehabilitation trials often evaluate intermediate or surrogate outcomes, such as VO2, if they are too short-term to capture all the major health effects and resource implications associated with the treatment (4). In ICD rehabilitation, evidence of reduced risk of ventricular arrhythmia or ICD shock therapy is called for (5). Furthermore, hospitalization and healthcare costs have seldom been measured (2). Such long-term post-hoc analyses were pre-planned for the COPE-ICD trial (6). The objective of this paper is to examine the 3-year long-term effects of a comprehensive cardiac rehabilitation programme for first-time ICD recipients from the COPE-ICD trial (6); more specifically, to: (i) compare ICD therapy history and mortality between rehabilitation and usual care groups; (ii) examine the difference between rehabilitation and usual care groups in time to first admission; and (iii) determine attributable direct costs.
METHODS
The design and methods of the COPE-ICD trial have been described in detail elsewhere (6) and are outlined briefly below.
Setting and intervention
The COPE-ICD trial was conducted in a large university hospital with a volume of approximately 300 first-time ICD implantations each year during the trial period. Inclusion criteria were: patients who received a first-time ICD implant and agreed to participate in the entire programme. The intervention included a comprehensive, disease-specific cardiac rehabilitation approach, with exercise training and psycho-education in addition to usual care. Patients were randomized in a 1:1 ratio to rehabilitation or usual care. The approach for the psycho-educational part of the intervention was inspired by Parse's human becoming practice methodologies (7). The topics discussed were: events and experiences leading up to the ICD implantation, present thoughts and questions, implications for everyday life, avoidance behaviour, exercise training, impact on family, information (including technical) and recommendations, shock and phantom shock, body image, driving and sexuality. The patients consulted the nurse in person or by phone once a month for 6 months, and every 2 months thereafter for the following 6 months. The psycho-educational part of the intervention was performed by 2 nurses, each with 10 years of clinical experience in the care of patients with ICDs. Three months after the ICD implantation, patients began to participate in training sessions twice a week for a 12-week period. The physical training programme consisted of an individual consultation with a physiotherapist and an individually tailored training programme. Patients in the control group followed a usual care programme, which included medical follow-up and an invitation to participate in a 2-h group session including information about the ICD and exchange of experiences among patients.

All patients in the comprehensive cardiac rehabilitation group participated in the exercise training component of the programme: 46% exercised in hospital, 26% outside the hospital, and 28% did both. A total of 66% of the patients in the usual care group participated in a physical training programme: 17% participated in an exercise programme at a local hospital, 41% participated in exercise training outside the hospital, and 8% did both. Trial discontinuation did not differ significantly between the intervention and usual care groups (28.8% vs 30.3% drop-outs; p = 0.64).

Because of the nature of rehabilitation, the interventions were open-labelled to the staff and the patients. A blinded investigator performed data collection and administration. Blinded outcome analyses were conducted.
Outcomes
Descriptive information on age, sex, marital status and citizenship was available through national registers. Information on comorbidity was obtained from the Danish National Patient Register (NPR) (8), which holds information on all admissions to all Danish hospitals since 1977. We calculated the Tu comorbidity index (9) utilizing information on primary and secondary diagnoses from all in- and out-patient contacts 10 years before the index admission. The following diseases are included in the Tu score: congestive heart failure, cardiogenic shock, arrhythmia, pulmonary oedema, malignancy, diabetes, cerebrovascular disease, acute/chronic renal failure, chronic obstructive pulmonary disease. All diagnoses are weighted equally.
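Since all diagnoses are weighted equally, the Tu score reduces to a count of the listed disease categories present in a patient's 10-year lookback window; a minimal sketch (the category labels are illustrative placeholders, not actual register codes):

```python
# Equally weighted disease categories listed in the text
# (labels are illustrative placeholders, not NPR/ICD codes)
TU_CATEGORIES = {
    "congestive_heart_failure", "cardiogenic_shock", "arrhythmia",
    "pulmonary_oedema", "malignancy", "diabetes",
    "cerebrovascular_disease", "renal_failure", "copd",
}

def tu_score(patient_diagnoses):
    """Equal-weight Tu score: number of listed categories present among a
    patient's primary/secondary diagnoses in the lookback window."""
    return len(TU_CATEGORIES & set(patient_diagnoses))

# Diagnoses outside the listed categories do not contribute
score = tu_score({"diabetes", "arrhythmia", "hypertension"})  # -> 2
```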
Information on ICD indication and disease demographics for participants was available from patient records.

Implantable cardioverter defibrillator therapy. ICD therapy history was found in patient records up to June 2013. Data registration and analysis were blinded. Programming of ICD therapy was done according to local practice. Ventricular arrhythmias, anti-tachycardia pacing (ATP) and shock therapy were assessed from the time of randomization in 2007-2009 until May 2013. ICD therapy during the first 30 days after ICD implantation was not included, since the intervention had not started. All therapies were initially evaluated by a trained technician and, subsequently, by an electrophysiologist with special competences in device therapy. Only appropriate therapy was included. In the assessment of appropriate vs inappropriate therapy, standard clinical criteria were used, including A-V relationship (if available), morphology, regularity of V-signals, and onset of tachycardia.

Mortality and hospital admissions. Information on vital status was available through the Civil Registration System (10) up to June 2013. Admissions after the first 30 days following randomization were available through the NPR. We followed the participants for 3 years and measured and evaluated short-term (1 year) and long-term (3 years) effects on admissions. We obtained information on all admissions, first admission, first acute (non-elective) admission and first acute heart-related admission, including only admissions with an International Classification of Diseases – 10th edition (ICD-10): I00-I99 diagnosis.
Costs. Costs attributable to the intervention were calculated by measuring the time spent on an average patient in the intervention group, priced by the salaries of nurses, physicians and physiotherapists (salaries include pension and vacation allowances). A category of other variable costs was included (purchase of pulse watches and t-shirts for use during the training programme). The calculation only considers operational costs and does not include production loss, cost of transportation or the costs of buildings (rent) and equipment.

An estimate of the 3-year cost of hospitalization, outpatient treatment (including emergency ward visits), and care in the primary sector (general practitioner, physiotherapist and psychologist) was made for both the cardiac rehabilitation group and the usual care group. The net costs are given by the difference between average costs in the 2 groups.

The NPR was used to measure the costs of hospital services (hospitalization, outpatient treatment and emergency ward visits). The NPR contains information on a mean price-rate measured by Diagnosis Related Groups (DRG) for each contact with hospitals.

Primary sector costs were measured by use of the Danish National Health Service Register (11), which contains information on services performed in primary care by practitioners, specialists, physiotherapists, chiropractors, etc., which are fully or partially financed by public funding. Data on general practitioners, physiotherapists and psychologists are included in the analysis.

Costs are measured at 2007 prices by use of price indices on health sector costs from Statistics Denmark. Since the study period is 3 years, second- and third-year costs are discounted to present values by a discount factor of 3%. All costs were measured in 2007 Danish kroner, but were translated into US Dollar Purchasing Power Parities (USD-PPP) and Euro Purchasing Power Parities (EURO-PPP) by use of Organisation for Economic Co-operation and Development (OECD) stat extracts. PPP is used in order to take into account the differences in prices between countries.
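The discounting and currency-conversion step described above can be sketched as follows; the yearly costs and the DKK-per-USD PPP factor are hypothetical illustration values, not the Statistics Denmark or OECD figures used in the study:

```python
# Present value of 3-year costs with a 3% discount rate:
# first-year costs at face value, years 2 and 3 discounted.
def present_value(costs_by_year, rate=0.03):
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(costs_by_year))

# Hypothetical per-patient costs (2007 DKK) for years 1-3
pv_dkk = present_value([50_000.0, 30_000.0, 20_000.0])  # < 100,000 nominal

# Conversion to USD-PPP with an assumed illustrative PPP factor
PPP_DKK_PER_USD = 8.4   # placeholder, not the OECD figure
pv_usd_ppp = pv_dkk / PPP_DKK_PER_USD
```

Discounting slightly reduces the weight of later-year costs, so a 3-year comparison between groups is not distorted by when the spending occurs.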
Statistical methods
Categorical variables are presented as frequencies and percentages. Continuous variables are presented as mean and standard deviation (SD). Baseline data are presented as similarities across groups by number and percentage. As recommended, no significance test for detecting baseline differences was performed (12,13).

Comparison of mortality and ICD therapy history was performed by χ²-test for categorical variables and Mann-Whitney U testing for non-symmetrical variables.

Time to first admission (general, acute or acute heart-related) was analysed by Kaplan-Meier survival analyses and the log-rank test.

Since cost data are skewed to the right, the 95% confidence interval (95% CI) was computed by use of a non-parametric bootstrap analysis (1,000 replications).
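A percentile bootstrap of the kind described (1,000 replications) can be sketched as follows; the cost sample is hypothetical and only illustrates the procedure for right-skewed data:

```python
import random
import statistics

def bootstrap_ci_mean(data, reps=1000, alpha=0.05, seed=0):
    """Non-parametric percentile bootstrap CI for the mean of skewed data:
    resample with replacement, collect the resampled means, and take the
    alpha/2 and 1 - alpha/2 percentiles."""
    rng = random.Random(seed)
    n = len(data)
    means = sorted(
        statistics.fmean(rng.choices(data, k=n)) for _ in range(reps)
    )
    lo = means[int(reps * alpha / 2)]
    hi = means[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical right-skewed per-patient costs (USD-PPP)
costs = [1200, 900, 15000, 800, 2300, 700, 40000, 1100, 950, 3000]
lo, hi = bootstrap_ci_mean(costs)
```

Unlike a normal-theory CI, this makes no symmetry assumption, which is why it is preferred for cost data dominated by a few very expensive patients.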
All statistical analyses were conducted using SAS 9.3 software.
Ethics
Patients gave their written informed consent after receiving oral and written information. All data were treated in confidence and patients were assured anonymity. The trial followed the recommendations of the Declaration of Helsinki II. The trial was approved by the regional ethics committee (j.
RESULTS

During the inclusion period, October 2007 to November 2009, 589 patients received a first-time ICD implantation at the setting. A total of 196 patients were included, 99 randomized to comprehensive cardiac rehabilitation and 97 to usual care (Fig. 1). The baseline demographics and clinical characteristics of the participants (rehabilitation and usual care groups) are presented (Table I).

Rehabilitation vs usual care. The number of ICD shocks delivered did not differ significantly between the rehabilitation and usual care groups: a mean of 0.6 vs 0.5 shocks (p = 0.90). Likewise, no significant difference was found in ventricular tachycardia/ventricular fibrillation or ATP (Table II). The total cost of the intervention was 335 USD-PPP/276 Euro-PPP per patient enrolled in rehabilitation. The total direct cost after 3 years in the rehabilitation group was 19,664 USD-PPP/16,199 Euro-PPP vs 26,453 USD-PPP/21,792 Euro-PPP in the control group. The total attributable cost of the intervention after 3 years was -6,789 USD-PPP/-5,593 Euro-PPP (Table IV).
DISCUSSION
As previously reported by Berg et al. (3), significant differences were found between the groups in physical capacity and mental health after rehabilitation. The aim of this paper was to explore the long-term health effects and cost implications associated with the rehabilitation programme. Comparing the two groups, rehabilitation and usual care, the long-term follow-up revealed no difference in ICD shock, mortality or time to first admission between the groups. The total attributable cost of the intervention was -6,789 USD-PPP/-5,593 Euro-PPP in favour of the intervention.

Implantable cardioverter defibrillator therapy shock
No difference was found in ATP or ICD shock, which is in accordance with previous findings from combined programmes (14) and from psycho-educational ICD programmes (15)(16)(17)(18)(19). Looking at the exercise-only programmes, the evidence is somewhat conflicting, as the large trial Heart Failure: A Controlled Trial Investigating Outcomes of Exercise Training (HF-ACTION) (n = 1,285 ICD patients) found no difference in ICD shock (20,21), but a smaller (n = 82) non-randomized trial found non-participation in outpatient rehabilitation to be a predictor of ICD shock (odds ratio (OR) 4.6, 95% CI 1.5-17.8, p < 0.05) (22). Davis et al. adjusted for exercise limitation, but not for other comorbidities that might have confounded the data in the non-randomized design. A randomized trial by Belardinelli (n = 52) of exercise vs no exercise found that 8 patients in the control group had sustained VT, while no VT events were found in the intervention group (23).
Mortality
Examining mortality, we found no difference between the groups, which matches previous findings (20,23). However, findings from ischaemic heart disease rehabilitation show that a one metabolic equivalent (MET) higher level of maximal aerobic capacity (equivalent to 1 km/h faster running) was associated with a 13% reduction in mortality (24). Several possible explanations exist for not obtaining the same positive effect in ICD patients. In the present trial the difference between the 2 groups after exercise was 0.6 MET, which might not have been enough to have an effect on mortality. Furthermore, the difference between the 2 groups was diluted, since the usual care group exercised on their own, which may explain the limited effect (3). Finally, the cardiac disease that was the indication for ICD implantation may vary, and may produce an inhomogeneous response to physical and psycho-educational training.
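To put the 0.6-MET between-group difference in perspective, one can apply the 13% per-MET mortality reduction from reference (24) under a hypothetical log-linear dose-response assumption. This scaling is an illustration only, not an analysis performed in the trial:

```python
per_met_reduction = 0.13   # 13% lower mortality per 1-MET gain (ref. 24)
met_difference = 0.6       # between-group difference observed after exercise

# Hypothetical log-linear scaling: the per-MET relative risk compounds
# multiplicatively with the size of the capacity gain.
relative_risk = (1 - per_met_reduction) ** met_difference
expected_reduction = 1 - relative_risk
print(round(expected_reduction, 2))  # roughly 0.08, i.e. an ~8% reduction
```

Under this assumption the observed capacity gain would translate to well under the 13% benchmark, consistent with the authors' point that 0.6 MET may be too small a difference to affect mortality.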
Admissions
Readmission is considered an indicator of morbidity and of the quality and efficiency of care and, from the literature on heart failure, we know that patient education and home-based follow-up reduce readmissions (25). We found no difference in time to first admission. None of the previous ICD rehabilitation trials reported time to first admission. However, in the RCT by Dunbar et al. no difference was found in the number of emergency department visits and hospitalizations after 12 months (19), which is in accordance with our findings. In the heart failure literature readmission rates after 6 months are as high as 45%; we found a rate of 39.8% after 1 year and 67.9% after 3 years.
Costs
The total cost of the intervention was 335 USD-PPP/276 Euro-PPP per patient. This cost is relatively low compared with previous findings of mean cardiac rehabilitation costs of 3,671 in 2003-USD (4,139 in 2007-USD) (26). Oldridge et al. found a cost of 1,365 in 2003-USD (1,539 in 2007-USD) for a combined exercise and behaviour programme (27). Using a group-/home-based format has been shown to be equally effective, and the cost savings are evident (26). Furthermore, we may have had lower costs using nursing consultations in the psycho-educative approach, and we did not include the costs of housing, equipment and management. We found a lower cost of physiotherapy in the rehabilitation group. This could be explained by the fact that experimental physiotherapy is calculated into the total cost of rehabilitation in the rehabilitation group, whereas outpatient physiotherapy is calculated on its own. Another explanation could be that rehabilitation is preventive for further needs. We found the total attributable cost of the COPE-ICD trial to be -6,789 USD-PPP/-5,593 Euro-PPP in favour of the intervention. This cannot be explained by significantly lower numbers of admissions or length of stay, and therefore must be explained by more expensive treatment.
The programme thus appears to result in a cost saving. None of the previous ICD rehabilitation trials have reported net savings.
Generalizability
External validity is high, since this population was included following the guidelines for ICD implantation from 2006. The baseline measures were mostly similar to findings from trials conducted in the USA and Europe (15,28,19). The use of blinded outcome assessment increased the validity of the data.
Study limitations
Study limitations include the fact that selection bias may exist, as we did not include patients if they were already included in other trials. Looking at the baseline measures, the randomization appears to have worked, as the values are comparable. A slightly higher number of patients in the usual care group had a history of ischaemic heart disease and New York Heart Association class III (NYHA III) than in the rehabilitation group; however, no significant differences in 6MWT and VO 2 were seen between groups before the intervention occurred (3). The usual care group might have been contaminated by the information given during project inclusion, suggesting that psycho-educational assistance and exercise training might be beneficial after ICD implantation. This information may have led to usual care patients seeking rehabilitation elsewhere. Collateral intervention occurred when some patients were offered cardiac rehabilitation at their local hospital, which may have reduced the effects of the experimental intervention, but resulted in conservative estimates of between-group differences.
We used register-based follow-up information, which ensured close to complete follow-up.
Costs were calculated using average costs at a national level. Micro-costing based on accurate resource utilization is likely to be more accurate and reliable. Use of DRG in pricing of hospitalizations may be inadequate in capturing the true benefit of the intervention, as a minimization in, for example, bed days will not be reflected in a lower overall DRG (4). Costing of the rehabilitation intervention only included variable costs and did not include capital costs, indicating an underestimation of the costs.
Clinical and research implications
We continue to see high readmission rates in this population, and the beneficial effect of ICD-specific rehabilitation on mortality and ICD shock is still poorly investigated. Even though the "hard" endpoints, adverse events, did not seem to be affected, this should be interpreted with caution due to low numbers and the explorative nature of these analyses.
However, we found that exercise training, in combination with psycho-educational consultations by a nurse, improves exercise capacity and general and mental health, and seems to produce a cost saving over time. There are reasons to believe that this approach is beneficial in clinical practice in terms of quality of life and from a cost perspective. Larger multicentre trials designed with adverse event outcomes are needed to determine the effects on adverse events.
In conclusion, no difference in ICD shock, mortality or time to first admission was found between the groups. The total attributable cost of the intervention was -6,789 USD/-5,593 Euro in favour of the intervention.
ACKNOWLEDGEMENTS
This study was funded by the Tryg Foundation and Copenhagen University Hospital, Rigshospitalet.
Bicarbonate enhances expression of the endocarditis and biofilm associated pilus locus, ebpR-ebpABC, in Enterococcus faecalis
Background: We previously identified ebpR, encoding a potential member of the AtxA/Mga transcriptional regulator family, and showed that it is important for transcriptional activation of the Enterococcus faecalis endocarditis and biofilm associated pilus operon, ebpABC. Although ebpR is not absolutely essential for ebpABC expression (100-fold reduction), its deletion led to phenotypes similar to those of an ebpABC mutant, such as absence of pili at the cell surface and, consequently, reduced biofilm formation. A non-piliated ebpABC mutant has been shown to be attenuated in a rat model of endocarditis and in a murine urinary tract infection model, indicating an important participation of the ebpR-ebpABC locus in virulence. However, there is no report relating to the environmental conditions that affect expression of the ebpR-ebpABC locus.
Results: In this study, we examined the effect of CO 2 /HCO 3 -, pH, and the Fsr system on the ebpR-ebpABC locus expression. The presence of 5% CO 2 /0.1 M HCO 3 - increased ebpR-ebpABC expression, while the Fsr system was confirmed to be a weak repressor of this locus. The mechanism by which the Fsr system repressed the ebpR-ebpABC locus expression appears independent of the effects of CO 2 /bicarbonate. Furthermore, by using an ebpA::lacZ fusion as a reporter, we showed that addition of 0.1 M sodium bicarbonate to TSBG (buffered at pH 7.5), but not the presence of 5% CO 2 , induced ebpA expression in TSBG broth. In addition, using microarray analysis, we found 73 genes affected by the presence of sodium bicarbonate (abs(fold) > 2, P < 0.05), the majority of which belong to the PTS system and ABC transporter families. Finally, pilus production correlated with ebpA mRNA levels under the conditions tested.
Conclusions: This study reports that the ebp locus expression is enhanced by the presence of bicarbonate, with a consequential increase in the number of cells producing pili.
Although the molecular basis of the bicarbonate effect remains unclear, the pathway is independent of the Fsr system. In conclusion, E. faecalis joins the growing family of pathogens that regulates virulence gene expression in response to bicarbonate and/or CO2.
Background
Enterococci are part of the normal flora in human intestines and are also a leading cause of nosocomial infections [1,2]. These organisms are somehow able to migrate from the gastrointestinal tract into the bloodstream and cause systemic infections such as bacteremia and even endocarditis [2][3][4]. Although many strains of enterococci seem to be harmless commensals, particular subgroups of Enterococcus faecalis and Enterococcus faecium predominate among isolates from nosocomial enterococcal infections. In E. faecalis, numerous factors important for virulence have been characterized. For example, the Fsr system, a homologue of the staphylococcal Agr system, has been shown to be important for virulence due, at least in part, to its control of gelatinase and a serine protease expression via a quorum-sensing mechanism [5][6][7]. Microarray studies also indicated that the Fsr system regulates other genes important for virulence [8], one of which is the locus encoding Ebp pili [8], whose subunits are encoded by the ebp operon [9]. A non-piliated ebp mutant, producing much less biofilm than the parent strain, was shown to be attenuated in a rat model of endocarditis [9] and in a murine urinary tract infection model [10]. We previously described EbpR as an important activator of the ebpABC operon encoding the pili in E. faecalis OG1RF [11]. Although ebpR is not essential for ebpABC expression, we detected 100-fold less ebpABC mRNA in a ΔebpR mutant compared to the OG1RF parent strain. In addition, even in the presence of an intact ebpR gene, only 5-20% of the cells, grown aerobically in BHI or in TSBG, were found to produce pili (detected by electron microscopy or immunofluorescence) [9,11]. These results imply that other regulatory and/or environmental factors may affect pilus production.
Bicarbonate is a major element of the mammalian body for reaching and maintaining homeostasis. In equilibrium with CO 2 , H 2 CO 3 and CO 3 2-, depending on pH, temperature, and CO 2 pressure, bicarbonate does not diffuse freely across the membrane and needs specific transporters [12]. In the stomach, HCO 3 - is secreted by the surface mucus cells, where it gets trapped in the mucus and forms part of the mucus-HCO 3 - barrier, thereby maintaining a pH gradient from pH 2 in the lumen to pH 7 at the mucosal epithelium interface. Interestingly, some microbial pathogens have been shown to respond in vivo to CO 2 (from 5 to 20%) and/or HCO 3 - (10-100 mM) by enhancing production of factors important for virulence (Staphylococcus aureus [13], Vibrio cholerae [14], group A streptococcus [15], Bacillus anthracis [16,17], Cryptococcus neoformans [18] and Citrobacter rodentium [19]). Regulatory proteins have been described that mediate the CO 2 /HCO 3 - response at the transcriptional level in B. anthracis (AtxA-like proteins [20]), in group A streptococci (Mga [21]) and, recently, in C. rodentium with RegA [19]. For E. faecalis, except for a report showing an increase in cytolysin expression when grown in 80% H 2 -20% CO 2 [22], we could find no other report of a CO 2 /HCO 3 - effect on known virulence-associated genes. A candidate for such a study is the ebpABC operon and its regulator, ebpR, a gene encoding a transcriptional regulator affiliated with the AtxA/Mga family; as mentioned above, this family is known to have its regulon activated in response to elevated CO 2 [15,23].
In the present study, we report the identification of environmental conditions affecting the expression of the ebpR-ebpABC locus and, consequently, pilus production. In addition, we found that Fsr repressed the ebpR-ebpABC locus in all conditions tested, independent of the CO 2 /bicarbonate effect. Finally, among the dozens of genes that are differentially expressed after being exposed to bicarbonate, the majority belong to the PTS system and ABC transporter families.
Results
ebpR and ebpA expression profiles when grown aerobically in TSBG
We previously identified an E. faecalis transcriptional regulator, EbpR, which positively affects the expression of the endocarditis and biofilm-associated pilus operon, ebpABC [11]. To further explore ebpR and ebpABC expression profiles, we created lacZ fusions with the ebpR and ebpA promoters (P ebpR ::lacZ and P ebpA ::lacZ). We first tested the time course of expression of ebpR and ebpA in OG1RF grown aerobically in TSBG (our standard biofilm medium) from mid-log growth phase to late stationary phase. In these conditions, each fusion showed the same general dome-shaped pattern, reaching a peak between 5 and 6 hr (Fig. 1A); specifically, the β-gal units for OG1RF carrying the ebpA promoter were 2.4, 5.4, and 0.4 at mid-log (3 hr after starting the culture), entry into stationary (5 hr) and late stationary growth phase (24 hr), respectively, while the ebpR fusion generated consistently lower β-gal units than the ebpA fusion.
Since β-galactosidase assays reflect translation as well as transcription, we also directly explored the steady-state mRNA levels of ebpR and ebpA transcripts by qRT-PCR in the same conditions used above (TSBG, aerobically), compared to the housekeeping gene gyrB. At the peak of ebpR expression, which occurred between mid- and late log phase growth, the ratio between ebpR and gyrB transcript levels was 0.04 (Fig. 1B). After entry into stationary phase, ebpR expression decreased to an ebpR/gyrB ratio of 0.004, representing a 10-fold decrease when compared to late log growth phase levels. Likewise, ebpA expression also peaked at the late log growth phase with an ebpA/gyrB ratio of 1.5 and decreased to an ebpA/gyrB ratio of 0.12 (also a 10-fold reduction when compared to the ebpA expression level at late log growth phase). The ebpA steady-state mRNA levels were an average of 37-fold higher than ebpR steady-state mRNA levels. Overall, the patterns between qRT-PCR and the β-gal assays were similar except for a one-hour delay for peak expression in the β-gal assays, probably due to a delay between transcription and translation.
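The transcript ratios above come from normalizing each target's qRT-PCR signal to gyrB. A minimal sketch of that normalization, assuming equal amplification efficiency for all genes and using hypothetical threshold-cycle (Ct) values chosen only to land near the ratios quoted in the text (the paper reports the ratios, not the Ct values):

```python
def normalized_level(ct_target, ct_reference, efficiency=2.0):
    """Target expression relative to a reference gene from qPCR threshold
    cycles (Ct); a lower Ct means more starting template."""
    return efficiency ** (ct_reference - ct_target)

# Hypothetical Ct values (not from the paper), picked to approximate the
# reported late-log ratios: ebpA/gyrB ~ 1.5 and ebpR/gyrB ~ 0.04.
ebpA_ratio = normalized_level(ct_target=19.4, ct_reference=20.0)  # ~1.5
ebpR_ratio = normalized_level(ct_target=24.6, ct_reference=20.0)  # ~0.04
print(round(ebpA_ratio / ebpR_ratio))  # ebpA sits ~37-fold above ebpR
```

The ~37-fold spread between the two targets falls directly out of their Ct difference, which is why per-gene normalization to gyrB makes the two expression profiles directly comparable.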
The CO 2 -NaHCO 3 induction effect on ebpR and ebpA expression
As we previously noted [11], EbpR shares some homology with transcriptional regulators of the AtxA/Mga family. In this family, it has been shown that AtxA and Mga activate their regulons from mid-log to entry into stationary phase and that their regulons are affected by the presence of 5% CO 2 /0.1 M NaHCO 3 [15,23]. We therefore tested the effect of CO 2 /NaHCO 3 on ebpR and ebpA expression during growth using the P ebpR ::lacZ and P ebpA ::lacZ fusions in OG1RF, as shown in Fig. 2A. For the aerobic cultures, both ebpR and ebpA β-gal profiles followed the dome-shaped pattern over time, as described above. However, the presence of CO 2 /NaHCO 3 led to a 2-3 fold increase in the β-gal units early during growth and, after the cultures entered stationary phase, ebpR and ebpA expression levels continued to increase for two hours and then showed only a slight decrease from 8 hr to 24 hr. At 24 hr, the β-gal units for OG1RF carrying the ebpA promoter were 13.9 in the presence of CO 2 /NaHCO 3 compared to 0.4 aerobically, a 33-fold difference. Similarly, the β-gal units for OG1RF carrying the ebpR promoter were 1.2 in the presence of CO 2 /NaHCO 3 compared to 0.13 aerobically, a 9-fold difference.
To determine whether the CO 2 /NaHCO 3 effect on ebpA expression was dependent on the presence of ebpR, we tested ebpA expression in an ebpR deletion mutant (TX5514). Using the ebpR deletion mutant (TX5514) containing P ebpA ::lacZ, β-gal production was assessed in air and in the presence of 5% CO 2 /0.1 M NaHCO 3 , and remained at the background level in both conditions (Fig. 2B). These results, combined with our previously published results [11], indicate that, in air as well as in the presence of 5% CO 2 /0.1 M NaHCO 3 , ebpR is important for ebpA expression and that the 5% CO 2 /0.1 M NaHCO 3 effect on the ebpA expression level also requires the presence of ebpR.
We previously reported that only a fraction of the OG1RF cells were positive for pilus expression by immunofluorescence [11]. To examine whether the presence of CO 2 /NaHCO 3 affected the amount of pili per cell or the percentage of cells positive for pilus production, we used flow cytometry. As early as entry into stationary growth phase, a difference in the percentage of pilus-positive cells was visible (Fig. 3A), with 53% positive when grown in air compared to 87% positive when grown in the presence of CO 2 /NaHCO 3 . The difference in the percentage of positive cells remained in later stages of growth. Specifically, Fig. 3B shows that, at 6 hr, 76% of the cells were positive when grown in air compared to 99% when the cells were grown in the presence of CO 2 /NaHCO 3 . The mean fluorescence intensity, between growth conditions and growth phases, remained constant with an average of 268. We also used anti-EbpC antibodies to probe mutanolysin extracts spotted on a dot blot for pilus production. An approximately four-fold increased signal density was observed in cells grown in the presence of CO 2 /NaHCO 3 compared to the cells grown in air (Fig. 3C). Additionally, no signal was detectable under either growth condition in the mutant lacking ebpR, confirming the importance of ebpR for ebpABC expression and pilus production aerobically as well as in the presence of 5% CO 2 /0.1 M NaHCO 3 .

Figure 1. ebpR and ebpA expression profiles in OG1RF. A. Expression levels of ebpA and ebpR using gene promoter::lacZ fusions. OG1RF containing either P ebpR ::lacZ (black triangle) or P ebpA ::lacZ (black square) were grown in TSBG. For β-gal assays, samples were collected every hour from 3 to 8 hr, then at 10 and 24 hr after starting the culture (x axis). The left axis represents the β-gal units (OD 420 nm /protein concentration in mg/ml). The right axis indicates the OD 600 nm readings. All sets of cultures presented were analyzed concurrently. This figure is representative of at least three independent experiments. B. qRT-PCR with RNA purified from OG1RF cultures grown aerobically in TSBG. The left axis represents the level of transcript normalized to gyrB transcript level. The right axis indicates the OD 600 nm readings. The dashed line shows the mean (with standard deviation) of 5 independent cultures of OG1RF grown in TSBG. The transcript levels of ebpR (black triangle) and ebpA (black square) shown represent two different data sets, each tested in duplicate, that were normalized using gyrB transcript levels.
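The percent-positive values from the flow cytometry experiment correspond to counting events whose fluorescence falls inside a gate (the Figure 3 legend mentions a marker set at 500-1024 in WinMDI 2.9). A sketch of that gating logic over synthetic event intensities (the event data are invented for illustration):

```python
def percent_positive(intensities, lo=500, hi=1024):
    """Percentage of events whose fluorescence falls inside the gate [lo, hi]."""
    gated = sum(1 for x in intensities if lo <= x <= hi)
    return 100.0 * gated / len(intensities)

# Synthetic fluorescence intensities: 6 of 10 events fall inside the gate.
events = [120, 300, 450, 480, 510, 600, 700, 800, 900, 1000]
print(percent_positive(events))  # 60.0
```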
The Fsr system effect on the ebp locus
We previously presented data in our microarray study suggesting that Fsr represses the ebpR-ebpABC locus. However, the Fsr effect was only seen at one time point (during late log growth phase) using BHI-grown cells [8]; in this medium, fsrB expression increased from mid-log to entry into stationary phase and then decreased rapidly [6]. Since our current study used mainly TSBG (our biofilm medium) as growth medium, we investigated the fsrB expression profile in TSBG. fsrB expression also increased until entry into stationary growth phase, reaching 66% of the expression detected in BHI broth, but then remained relatively constant throughout stationary phase (Fig. 4). These results indicate that fsr expression is variable in different conditions. We next tested ebpR and ebpA expression using the P ebpR ::lacZ and P ebpA ::lacZ fusions in OG1RF and TX5266 (ΔfsrB mutant), grown in parallel in TSBG aerobically. Both ebpR and ebpA gene expression profiles followed the same pattern in TX5266 as in OG1RF, with an increase in expression until the culture reached stationary phase followed by a slow decrease (Fig. 5A). However, ebpR expression was 2-fold lower in OG1RF, with 0.3 β-gal units compared to 0.8 β-gal units in TX5266 at entry into stationary phase. Similarly, ebpA expression was 4-fold lower in OG1RF, with 3.7 β-gal units compared to 14.1 β-gal units in TX5266 early in stationary phase. These results confirm the role of the Fsr system as a repressor of the ebpR-ebpABC locus in TSBG, adding to the results obtained by microarray at one specific growth phase using cells grown in BHI.
To determine whether the CO 2 /NaHCO 3 effect on ebpA and ebpR expression is mediated through Fsr, we looked at ebpR and ebpA expression in TX5266 in air and in the presence of 5% CO 2 /0.1 M NaHCO 3 . As shown in Fig. 5B, the ebpA and ebpR expression profiles in TX5266 grown aerobically and in the presence of 5% CO 2 /0.1 M NaHCO 3 presented the same general profile as in OG1RF (Fig. 2A). That is, ebpA expression increased from 6.8 β-gal units at mid-log growth phase to 13.8 β-gal units at late log growth phase and decreased gradually to 0.6 β-gal units by 24 hr (late stationary). In the presence of 5% CO 2 /0.1 M NaHCO 3 , ebpA expression increased from 16.8 β-gal units at mid-log growth phase to 56.5 β-gal units (5-fold more than with cultures grown in air) at 6 hr and remained stable with 55.3 β-gal units at 24 hr. The ebpR expression profile in TX5266 also remained higher in the presence of 5% CO 2 /0.1 M NaHCO 3 vs. in aerobic conditions, with 0.2 β-gal units in air and 2.6 β-gal units with CO 2 /NaHCO 3 at 24 hr. Finally, we also examined the effect of CO 2 /NaHCO 3 on fsrB expression by transferring the P fsrB ::lacZ fusion into OG1RF and followed expression in air and in the presence of CO 2 /NaHCO 3 . In those conditions, fsrB expression was not significantly affected by the presence of CO 2 /NaHCO 3 (Fig. 4). Our observation of a further increase in ebpR and ebpA expression in TX5266 in the presence of CO 2 /NaHCO 3 , as was observed in OG1RF (Fig. 2A and 5B), together with the lack of an effect of CO 2 /NaHCO 3 on fsr expression, indicates that HCO 3 - is not stimulating ebpR and ebpA expression via an effect on the Fsr system. Finally, at the protein level, pilus production from the ΔfsrB mutant was compared with that of OG1RF. Cells were grown in TSBG aerobically or in the presence of 5% CO 2 /0.1 M NaHCO 3 , and collected at 7 hr (stationary phase). As shown in Fig. 3C, a 3-5 fold increase in pilus production was observed in the ΔfsrB mutant compared to OG1RF with cells grown aerobically or in the presence of 5% CO 2 /0.1 M NaHCO 3 . Similarly, a 3-5 fold increase in pilus production was also seen with cells grown in the presence of 5% CO 2 /0.1 M NaHCO 3 versus cells grown aerobically for both OG1RF and the ΔfsrB mutant. In conclusion, the differences observed in ebp mRNA expression levels between OG1RF and the ΔfsrB mutant and between the conditions used in this study (growth in air versus in the presence of 5% CO 2 /0.1 M NaHCO 3 ) translated into comparable variations in pilus production at the surface of the cells.

Figure 3. Detection of EbpC produced by OG1RF, ΔfsrB, and ΔebpR. A. Flow cytometry analysis of OG1RF grown in air (black) or in the presence of 5% CO 2 /0.1 M NaHCO 3 (green), labeled with an anti-EbpC rabbit polyclonal immune serum and detected with phycoerythrin. The cells were collected at "T4", which corresponds to the entry into stationary growth phase (4 hr after starting the culture). The percentages between brackets indicate the percentage of positive cells (WinMDI 2.9, marker set for 500-1024). In red is represented OG1RF grown in air, incubated with a pre-immune serum and detected with phycoerythrin as negative control. B. Flow cytometry analysis was done in the same conditions as above with samples collected at "T6", which corresponds to early stationary growth phase. C. An equal amount (by BCA protein assay) of mutanolysin extract preparation was 2-fold serially diluted and spotted onto a nitrocellulose membrane. Pilus presence was detected with an anti-EbpC rabbit polyclonal immune serum.

Figure 4. fsrB expression profile in OG1RF. For β-gal assays, samples were collected every hour from 3 to 8 hr, then at 10 and 24 hr after starting the culture (x axis). All sets of cultures presented were analyzed concurrently. The figure is representative of at least two experiments. The growth curves are represented in brown for cells grown in BHI-air and purple for cells grown in TSBG (thin line when grown in air, dense line when grown in the presence of 5% CO 2 /0.1 M NaHCO 3 ). OG1RF containing P fsrB ::lacZ was grown in BHI-air (brown closed diamond), in TSBG-air (purple closed diamond) or in TSBG-5% CO 2 /0.1 M NaHCO 3 (purple open diamond). A. OD 600 nm readings. B. β-gal assays (β-gal units = OD 420 nm /protein concentration in mg/ml).
ebpR threshold level
In the results obtained above, the ebpR and ebpA steady-state mRNA levels followed a similar pattern, with ebpA expression being 7- to 37-fold higher than ebpR expression, depending on the technique. To investigate whether ebpA expression was directly related to the ebpR expression level, we introduced our previously cloned ebpR under a nisin-inducible promoter (pTEX5515) into wild-type OG1RF and into its ΔebpR mutant, TX5514 [11].
Our previous experiments showed that, even without nisin induction, pilus production was detected at the surface of the cells of the ebpR-complemented ΔebpR mutant, but not when the ebpR mutant carried the empty plasmid [11]. In this study, we investigated the steady-state mRNA levels of ebpR and ebpA in different constructs, with or without increasing amounts of nisin, compared to their respective levels in OG1RF carrying the empty vector, using qRT-PCR. The ebpR expression level in the ebpR-complemented ΔebpR mutant was 0.08 (normalized to the gyrB expression level) without induction, increased 4-fold with 0.5 ng/ml nisin to 0.26, and reached 9.33 with 10 ng/ml nisin (Fig. 6), representing a 65-fold increase from 0 to 10 ng/ml nisin. In the same background, ebpA steady-state mRNA levels were only slightly affected, with a basal expression level without nisin of 0.6 up to 1.5 with 10 ng/ml nisin (Fig. 6), less than a 3-fold increase. However, as expected from our previous results, ebpA expression was 100-fold lower in the ΔebpR mutant carrying the empty vector than in OG1RF carrying the empty vector or in the ebpR-complemented ΔebpR mutant. We conclude from these experiments that, above the ebpR expression level provided by the ebpR copy on pTEX5515 without induction, there is not a strong direct relationship between ebpR expression and ebpA expression.

Figure 5. For β-gal assays, samples were collected every hour from 3 to 8 hr, then at 10 and 24 hr after starting the culture (x axis). The left axis represents the β-gal units (OD 420 nm /protein concentration in mg/ml). The right axis indicates the OD 600 nm readings. All sets of cultures presented were analyzed concurrently. Each figure is representative of at least three experiments. A. OG1RF containing either P ebpR ::lacZ (black triangle) or P ebpA ::lacZ (black square) and ΔfsrB containing either P ebpR ::lacZ (pink triangle) or P ebpA ::lacZ (pink square) were grown in TSBG aerobically. B. The ΔfsrB mutant (TX5266) containing either P ebpR ::lacZ (triangle) or P ebpA ::lacZ (square) was grown in TSBG aerobically (pink closed symbol) or in the presence of 5% CO 2 /0.1 M NaHCO 3 (open blue symbol).
Bicarbonate effect on ebpA expression
Studies using H. pylori have shown independent effects of pH, CO 2 , and bicarbonate on gene expression (these three environmental elements being interconnected in vivo), where pH appears to be responsible for H. pylori orientation [24]. In contrast, bicarbonate and not CO 2 appears to be the inducer of expression of the B. anthracis toxins [25]. Using the P ebpA ::lacZ fusion in OG1RF, we first investigated the independent effects of CO 2 and NaHCO 3 on ebpA in buffered TSBG with or without the presence of 0.1 M NaHCO 3 and/or 5% CO 2 . pH was controlled during the experiment and remained at pH 7.5 ± 0.25. As shown in Fig. 7, ebpA expression in TSBG-air did not differ appreciably from that in TSBG-5% CO 2 , reaching a peak of expression early in stationary phase (15.8 and 14.5 β-gal units, respectively); expression then decreased to 2 and 0.4 β-gal units, respectively, at 24 hr. In the presence of NaHCO 3 , the ebpA expression peak was ~4-fold higher, with 46.5 β-gal units for the NaHCO 3 -air culture at entry into stationary phase (5 hr) compared to 9.8 β-gal units when the cells were grown without NaHCO 3 , and 46.0 β-gal units for the 5% CO 2 plus NaHCO 3 culture compared to 12.5 β-gal units when grown in the presence of CO 2 only. The bicarbonate effect persisted late into stationary phase, with 42.5 and 40.7 β-gal units when grown in air-NaHCO 3 and CO 2 -NaHCO 3 , respectively. A similar profile, with increased ebpR expression in the presence of bicarbonate but not in the presence of CO 2 , was also observed (data not shown). Furthermore, the differential effect of CO 2 and NaHCO 3 was also detected in BHI or when potassium bicarbonate was used as a source of HCO 3 - (data not shown). Taken together, these results demonstrate that the increase in ebpR and ebpA expression is caused by the addition of HCO 3 - and not CO 2 .
Since NaHCO 3 is in equilibrium with H 2 CO 3 , HCO 3 -, and CO 3 2-, depending on the pH, temperature and partial pressure of CO 2 , we next tested a possible pH effect on ebpA expression when cells were grown in buffered TSBG. In a preliminary experiment, OG1RF (P ebpA ::lacZ) was grown in buffered TSBG with pH ranging from 5 to 9. Severe growth inhibition was observed at pH 5 and 9, with mild growth inhibition at pH 6, compared to unaffected growth at pH 7 and 8 (data not shown). Consequently, further experiments were conducted with buffered media at pH 7 and 8 only. Without the addition of sodium bicarbonate, ebpA expression levels of cells grown at pH 8 ± 0.25 were comparable with the levels in cells grown at pH 7 ± 0.25 (Fig. 8). However, adding NaHCO 3 led to a 4- to 5-fold increase in β-gal production at either pH (pH was controlled during the experiment and remained constant with a ± 0.25 variation). For example, β-gal units were 9.4 at 6 hr for cells grown at pH 7-air, while at the same time point and pH, β-gal units were 40.1 when grown in the presence of NaHCO 3 . In conclusion, among pH (range 7-8), CO 2 and bicarbonate, bicarbonate appears to be the main environmental inducer of the ebpABC operon.

Figure 6. Effect of nisin induction on ebpR and ebpA expression. Cells were grown to an OD 600 nm of ~0.8 (3 hr, late exponential growth phase) and at this point cells were left untreated (0) or treated with increasing concentrations of nisin (from 0.005 to 10 ng/ml). Then, cells were collected and RNA extracted. After reverse transcription, ebpA and ebpR cDNA was quantified by real-time PCR. The strains were OG1RF or ΔebpR (TX5514) carrying either the empty plasmid (-) or ebpR in trans under the nisin promoter (+). ebpR (gray bars) and ebpA (white bars) transcript levels were normalized with gyrB transcript levels. The data correspond to the mean of two independent experiments.
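The equilibrium invoked here (NaHCO 3 distributing among H 2 CO 3 , HCO 3 - and CO 3 2- as a function of pH, temperature and CO 2 pressure) is described by the Henderson-Hasselbalch relation for the CO 2 /HCO 3 - pair, pH = pKa + log10([HCO 3 -]/[CO 2 (aq)]). A sketch using the conventional apparent pKa of ~6.1; the concentrations are illustrative textbook values, not measurements from this study:

```python
import math

def ph_from_buffer(hco3_mM, dissolved_co2_mM, pka=6.1):
    """Henderson-Hasselbalch pH for the CO2/HCO3- buffer pair."""
    return pka + math.log10(hco3_mM / dissolved_co2_mM)

# Physiological check: 24 mM HCO3- against ~1.2 mM dissolved CO2 gives pH 7.4.
print(round(ph_from_buffer(24, 1.2), 1))   # 7.4
# With 100 mM added bicarbonate (the 0.1 M used in these experiments) and the
# same dissolved CO2, the buffer sits noticeably more alkaline, near pH 8.
print(round(ph_from_buffer(100, 1.2), 1))  # 8.0
```

This interdependence is exactly why the authors buffered the medium and monitored pH separately: without buffering, adding 0.1 M bicarbonate would itself shift the pH.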
Effect of bicarbonate exposure on the OG1RF transcriptome
In an effort to begin to delineate the "bicarbonate regulon", we used microarray analysis with cells grown to late exponential growth phase (3 hr) and then subjected to a 15 min exposure to 0.1 M NaHCO 3 . Our goal was to define the first set of genes affected by the presence of bicarbonate. Out of the 73 genes that were differentially expressed (abs(fold) > 2, P < 0.05; data deposited at ArrayExpress, additional file 1), only two genes were repressed by the presence of bicarbonate more than 5-fold (EF0082 and EF0083 with 9.9- and 7-fold, respectively), while four genes were activated more than 5-fold (EF0411-3 with ~10-fold, and EF2642 with 6.5-fold). EF0082 is part of the ers regulon (ers encodes a PrfA-like protein involved in the E. faecalis stress response [26,27]), but its function remains unknown, as is also true for EF0083. The EF0411-3 genes appear to be organized as an operon and encode proteins with the characteristics of a mannitol PTS system. EF2642 also appeared to be expressed in an operon with EF2641, which was also activated (4.1-fold, P < 0.05). EF2641 and EF2642 encode a putative glycine betaine/L-proline ABC transporter ATP-binding protein and permease protein, respectively. These results were confirmed by qRT-PCR, with a 32-fold decrease for EF0082 in the presence of bicarbonate, while EF0411 and EF2641 expression levels increased in the presence of bicarbonate by 24-fold and 8.5-fold, respectively (results not shown). The ebpR-ebpABC locus did not appear to be affected in these conditions (late log growth phase following a 15 min incubation with 0.1 M NaHCO 3 ), suggesting that the bicarbonate effect on the ebpR-ebpABC locus may be indirect, requiring a cascade of events.
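The selection rule used for the microarray analysis (absolute fold change > 2 and P < 0.05) is a straightforward filter. A sketch over a few records built from the fold changes quoted above (the individual P values, beyond satisfying the stated cutoff, are hypothetical):

```python
# Hypothetical differential-expression records: (gene, fold change, P value).
# Negative fold change = repressed by bicarbonate, positive = activated.
results = [
    ("EF0082", -9.9, 0.01),
    ("EF0083", -7.0, 0.02),
    ("EF0411", 10.0, 0.001),
    ("EF2642", 6.5, 0.004),
    ("EF1000", 1.3, 0.20),   # fails both thresholds: excluded
]

# Keep genes passing both cutoffs: abs(fold) > 2 and P < 0.05.
significant = [g for g, fold, p in results if abs(fold) > 2 and p < 0.05]
print(significant)  # ['EF0082', 'EF0083', 'EF0411', 'EF2642']
```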
Discussion
We previously noted that EbpR shares homology with the AtxA/Mga family [11]. Regulators in this family have been shown to be active toward their target(s) in the presence of CO2 or CO2/HCO3-. While atxA is constitutively expressed, acpA and acpB (also members of the AtxA/Mga family) as well as mga are activated by the presence of CO2. In the work described here, we present evidence that bicarbonate is a strong inducer of the ebpR-ebpABC locus and consequently of pilus presence. Among the other environmental conditions tested, pH appeared to have a weak effect in the limited conditions tested, while CO2 had no effect. Although ebpR and ebpA expression levels share a similar pattern, we were not able to show that an increase in ebpR expression beyond a certain level resulted in a proportional further increase in ebpA expression. Finally, the Fsr system affects expression of the ebpR-ebpABC locus independently of either the growth phase or the presence of bicarbonate. It is interesting that ebpABC, also shown to be important for E. faecalis virulence, responded to bicarbonate. Bicarbonate influences expression of adcA (encoding an adhesin [28]) and kfc (encoding a factor important for gut colonization) in C. rodentium, both of which are controlled by the bicarbonate regulator RegA [19], as well as the three toxin genes in B. anthracis [25]. Bicarbonate-mediated transcriptional activation may be a system to sense a change in the environment. For example, the proximal portion of the duodenum is exposed to intermittent pulses of gastric H+ discharged by the stomach. To protect the epithelial surface, at least two HCO3-/Cl- anion exchangers have been described as being responsible for the release of HCO3- into the duodenal lumen [29]. We postulate that E. faecalis may sense this signal and consequently produce adhesin structures such as the ebpABC-encoded pili to favor colonization of the intestinal tract, similar to adcA in C. rodentium, the expression of which is controlled by bicarbonate and whose gene product has been shown to be involved in adherence to mammalian cells [28].
From the various results in this study in which ebpA expression followed the same profile as ebpR expression, we postulated that the ebpA expression level was proportionally linked to the ebpR expression level. To investigate this hypothesis, we used an ebpR construct under the control of a nisin-regulated promoter. However, as shown in Fig. 6, the ebpR expression level was already 2-fold higher in the complemented ΔebpR strain (in the absence of nisin) than its native level in wild-type OG1RF (0.06 vs. 0.03), and was not detected (with a detection limit of 10^-5 the level of gyrB) in the ebpR deletion mutant carrying the empty plasmid. We did not observe a strong effect on ebpA expression after nisin induction, leading to the conclusion that ebpR expression was already above the threshold required to significantly increase ebpA expression. We tried another construct, pCJK96 (rhamnose induction [30]), but faced the same issues (data not shown). Thus, although we did not determine the threshold necessary for ebpA expression, the presence of ebpR was confirmed to be critical for ebpA expression.
One difference between the ebpR and ebpA expression profiles in the presence versus absence of bicarbonate occurred after entry into stationary phase: without bicarbonate, ebpR and ebpA expression began to decrease, while it remained constant in the presence of bicarbonate. This difference may be explained either by an induction pathway that remains active (in the presence of HCO3-) in stationary phase or by inhibition, early in stationary phase, of a repression pathway (e.g., quorum sensing or a phase-dependent regulator). The first mechanism would also explain the slight difference observed in the presence of HCO3- during log growth phase. A potential candidate is a homologue of RegA, an AraC/XylS-like regulator from C. rodentium [19]; however, among the E. faecalis AraC/XylS-like regulators, none shares additional significant similarity with RegA. A second possibility would be a quorum-sensing mechanism, with the Fsr system being a likely candidate [6]. However, the Fsr system, although a weak repressor of ebpR, does not appear to mediate the bicarbonate effect, since an ebpA expression pattern similar to that of OG1RF was observed in an fsrB mutant in the presence or absence of bicarbonate. Finally, we looked at the stress response pathway, including ers and its regulon [26,27]. Interestingly, several members of the ers regulon were affected by a 15-min bicarbonate exposure, including EF0082-3 and EF0104-6. However, although both operons are activated by ers, EF0082-3 were strongly repressed (8-fold), while EF0104-6 were activated (3-fold) by bicarbonate exposure; in addition, ers itself was not affected. In conclusion, the regulation pathways in E. faecalis resemble a network with several target genes under the control of independent regulation pathways, as illustrated by ebpR-ebpABC being independently a member of the bicarbonate and fsr regulons, and EF0082 a member of the bicarbonate and ers regulons.
We also showed using microarray profiling that expression of many other genes (mostly PTS systems and ABC transporters) was altered in response to HCO3-. Among those genes are EF2641 and EF2642, which encode a putative glycine betaine/L-proline ABC transporter ATP-binding protein and permease protein, respectively. Interestingly, this ABC transporter shares some homology with the bicarbonate transporter described in B. anthracis (Tau family of ABC transporters) [25]. However, we did not find a TauA motif, which has been proposed as the bicarbonate-binding motif, associated with the EF2641-2 locus or elsewhere in available E. faecalis genomes, including OG1RF. Interestingly, expression of ebpR-ebpABC was not affected by the 15-min bicarbonate exposure.
These results could be explained by the need for a cascade of events before bicarbonate affects ebpR-ebpABC expression, or by the requirement for an unknown factor not present at the growth phase tested. Indeed, as seen in Fig. 2, Fig. 7, and Fig. 8, the greatest difference in ebpR-ebpABC expression was observed from mid to late stationary growth phase (conditions that we found unsuited for microarray analysis due to low and unstable mRNA levels). In conclusion, although we did not detect an effect of a 15-min bicarbonate exposure on ebpR-ebpABC by microarray, the bicarbonate regulon was shown to share some components with the ers regulon, and a later bicarbonate effect on ebp expression was shown by β-gal assays, qRT-PCR, and western blot.
Finally, we have previously shown in the rat endocarditis model that an fsrB mutant is less attenuated than a gelE mutant [31]. Since weak transcription of gelE was detected in the absence of the Fsr system, it was postulated that the greater virulence of the fsrB mutant compared to the gelE mutant might be a consequence of residual gelatinase production. However, since pilus production is also important in the rat endocarditis model [9], we can now postulate that, in the absence of the Fsr system and in the presence of bicarbonate (by far the most important buffer for maintaining acid-base balance in the blood), pilus production increases, potentially contributing to the increased virulence of the fsrB mutant relative to the gelE mutant.
Conclusion
Considering that bicarbonate is an activator of the ebpR-ebpABC locus and that this locus is ubiquitous among E. faecalis isolates (animal, commensal, and clinical isolates) [9], these results suggest an intrinsic aptitude of this species for pilus production, which could play an important role in colonization of both commensal and pathogenic niches. Future studies should assess expression of the ebpR-ebpABC locus and the role of pili in a gut colonization model.
Strains, media, growth conditions
The strains used in this study are listed in Table 1. All strains were routinely grown in brain heart infusion broth (BHI broth; Difco Laboratories, Detroit, Mich.) at 150-200 rpm aerobically or on BHI agar at 37°C, unless otherwise indicated. Tryptic soy broth (Difco Laboratories, Detroit, Mich.) with 0.25% glucose (TSBG) was used to test strains for biofilm production, one of the assays where both ebpR and ebpA mutants are attenuated compared to OG1RF [9,11].
For all assays, strains were first streaked on BHI agar with the appropriate antibiotics, as needed. Five to ten colonies were inoculated into BHI broth and grown overnight (with antibiotics when appropriate); cells were then diluted so that the starting optical density at 600 nm (OD600) was 0.05. For cultures grown in the presence of bicarbonate, a solution of 9% sodium bicarbonate was freshly prepared, filtered, and added to a final concentration of 0.8% (0.1 M). The cultures were buffered with 100 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) for a final pH of 7.5 ± 0.25, or as indicated. For comparison between cultures grown with and without bicarbonate, an equal volume of water was added to the culture without added bicarbonate. The cultures were then placed on a rotating platform set at 150 rpm at 37°C, aerobically or in a 5% CO2 atmosphere. The pH was monitored during growth and remained at 7.5 ± 0.25. For each set of results, the cultures and subsequent assays were analyzed concurrently. None of the four lacZ constructs (P TCV , P ebpA , P ebpR , and P fsrB ) affected the growth of its host (OG1RF, ΔebpR, or Δfsr) under the conditions tested. To obtain accurate readings, cultures from 3 hr to 24 hr were diluted 5-fold before determining the OD.
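As a quick consistency check of this recipe, the 0.8% w/v working concentration of NaHCO3 can be converted to molarity (molar mass 84.01 g/mol), and the required volume of the 9% stock computed with C1V1 = C2V2. A sketch; the 25-ml culture volume is an illustrative choice, and the dilution formula ignores the small volume change on addition of the stock:

```python
MW_NAHCO3 = 84.01  # g/mol

def percent_wv_to_molar(percent_wv, mw):
    """Convert % w/v (g per 100 ml) to mol/L."""
    return percent_wv * 10.0 / mw  # (g/L) divided by (g/mol)

def stock_volume_ml(final_percent, stock_percent, culture_ml):
    """C1*V1 = C2*V2, approximating the final volume by the culture volume."""
    return culture_ml * final_percent / stock_percent

print(round(percent_wv_to_molar(0.8, MW_NAHCO3), 3))  # → 0.095 (≈ 0.1 M, as stated)
print(round(stock_volume_ml(0.8, 9.0, 25.0), 2))      # → 2.22 (ml of stock per 25 ml)
```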
Construction of the ef1091 (ebpA) promoter fusion
The same protocol was used to create the P ebpA ::lacZ fusion as previously described for the P ebpR ::lacZ fusion [11]. The primers cgGGATCCaagactacgccgaaaacc and gGAATTCacacgaatgatttcttcca (introduced restriction sites in uppercase) were used to amplify from 221 bp upstream to 80 bp downstream of the ebpA start codon (301 bp total). The fragment was amplified by PCR, cloned into the pGEM-T-Easy vector (Promega, Madison, WI), sequenced, and then subcloned into pTCV-lacZ [32] using the EcoRI and BamHI sites. After transfer into OG1RF, TX5266 (ΔfsrB), and TX5514 (ΔebpR), the plasmids were purified and confirmed again by sequencing using the previously published primers Vlac1 and Vlac2, which are located upstream and downstream of the promoter area [32].
β-galactosidase assay
Assays were performed according to the protocol of Hancock et al. [33] with some modifications. Following growth in the designated culture conditions, a sample (~2 × 10^9 CFU) was collected at each time point mentioned, centrifuged, and the pellet frozen until used. Cell pellets were resuspended in 1 ml of 1/10 Z buffer (Z buffer: 60 mM Na2HPO4, 40 mM NaH2PO4, 10 mM KCl, 1 mM MgSO4 [pH 7.0]). The cell suspension was transferred to a 2.0-ml tube containing 0.5 ml of 0.1-mm-diameter zirconia beads (BioSpec Products, Bartlesville, Okla.). The cells were disrupted using a vortex adapter for 5 min, then centrifuged at 13,600 rpm for 1 min. Serial dilutions of the aqueous layer were used in a β-galactosidase assay as described by Miller [34] with a final volume of 200 μl (96-well microtiter plate).
Twenty-five μl were assayed for total protein using the BCA protein assay kit (Pierce, Rockford, IL). Due to day to day variability, only data obtained within the same experiment (with cultures grown and samples assayed in parallel) were used for comparisons. To normalize the samples assayed in parallel, we used the total protein content as described in [33]. Experiments were repeated on at least two independent occasions and β-gal units for each experiment corresponded to OD 420 nm /protein concentration in mg/ml. The figures show data from one representative experiment.
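The normalization used here (β-gal units = OD420 divided by total protein in mg/ml) amounts to the following; the OD420 readings and protein concentrations below are hypothetical examples, not data from this study:

```python
def beta_gal_units(od420, protein_mg_per_ml):
    """Normalized units as defined in the text: OD420 / protein (mg/ml)."""
    if protein_mg_per_ml <= 0:
        raise ValueError("protein concentration must be positive")
    return od420 / protein_mg_per_ml

# Hypothetical lysate readings for two parallel cultures:
print(round(beta_gal_units(0.47, 0.05), 1))  # → 9.4
print(round(beta_gal_units(0.80, 0.02), 1))  # → 40.0
```

Dividing by the protein content corrects for differences in cell yield between samples assayed in parallel.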
RNA purification for qRT-PCR
To follow gene expression in OG1RF during growth in TSBG at 37°C and 150 rpm, samples were collected every hour from 3 to 7 hr after starting the culture. For the nisin induction assay, cells were grown to an OD600 of ~0.8 (3 hr, late exponential growth phase), and at this point cells were left untreated or treated with increasing concentrations of nisin (from 0.005 ng/ml to 10 ng/ml). In each case, the equivalent of 1 OD600 of cells was centrifuged, and the pellet was stored at -80°C. RNA and cDNA were prepared using the methods described before [8]. Quantitative PCR on cDNA was performed using the SYBR green PCR master mix kit (Applied Biosystems, Foster City, CA) and a 7500 Real-Time PCR system (Applied Biosystems). ebpA was selected for these experiments because it is the first gene of the ebpABC operon. The following primers were used: gyrB, accaacaccgtgcaagcc and caagccaaaacaggtcgcc; ebpA, aaaaatgattcggctccagaa and tgccagattcgctctcaaag; ebpR, acggatatggcaaaaacg and agaagagcgactaatattgatgg; EF0082, aaactccttgaactgattgg and ccagataaagaatgcccata; EF0411, agctgaactaacggaacaag and tcttttaagagcgaaaccac; and EF2641, attcgtggtgttcctaaaga and catcccaccagataattgac. For each primer set, a reference curve was established using a known amount of gDNA purified from OG1RF. The amounts (in ng/ml) obtained for the transcripts of interest were normalized to the amount of gyrB transcripts.
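Quantification against the gDNA reference curve, followed by gyrB normalization, can be sketched as below. The slope and intercept of the log-linear standard curve and the Ct readings are hypothetical placeholders; a real analysis would fit the curve to the measured gDNA dilution series for each primer set:

```python
def amount_from_ct(ct, slope=-3.32, intercept=22.0):
    """Invert a log-linear standard curve Ct = slope*log10(amount) + intercept.
    A slope of -3.32 corresponds to ~100% PCR efficiency (assumed here)."""
    return 10 ** ((ct - intercept) / slope)

def normalized_level(ct_target, ct_gyrb):
    """Target transcript amount expressed relative to gyrB."""
    return amount_from_ct(ct_target) / amount_from_ct(ct_gyrb)

# Hypothetical Ct values for ebpA and gyrB in one cDNA sample:
print(round(normalized_level(ct_target=24.0, ct_gyrb=20.0), 3))  # → 0.062
```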
Microarray analysis
The BHI cultures of OG1RF were started as described above. Cultures were grown to an OD600 of ~0.8 (3 hr, late exponential growth phase); at this point, 25 ml of culture were centrifuged, resuspended in either buffered BHI or buffered BHI with 0.1 M bicarbonate, incubated for 15 min at 37°C and 150 rpm, then centrifuged, and the pellet was stored at -80°C until use. The microarray consists of 70-mer oligonucleotides printed on a GAPS II slide (Corning Incorporated, Corning, NY) at the University of Texas Medical School Microarray Core Laboratory. The RNA preparation, probe labeling, hybridization, data acquisition, and statistical analysis were performed following the methods described previously [8]. The results of the bicarbonate induction are deposited at ArrayExpress (http://www.ebi.ac.uk/microarray-as/ae/) under accession number E-MEXP-2518.

Table 1 (excerpt). Plasmids used in this study:
- P fsrB ::lacZ construct: fsrB promoter cloned upstream of lacZ in pTCV-lacZ, from bp -110 to -8 (103 bp) relative to the fsrB start codon; Erm R [6].
- pTEX5585: ebpA promoter cloned upstream of lacZ in pTCV-lacZ (P ebpA ::lacZ), from -221 bp to +80 bp (301 bp) relative to the ebpA start codon; Erm R. This study.
- pTEX5586: ebpR promoter cloned upstream of lacZ in pTCV-lacZ (P ebpR ::lacZ), from -248 to +53 bp (301 bp) relative to the ebpR start codon; Erm R [11].
- pTEX5515: pMSP3535 with ebpR from -20 bp to +1561 bp relative to the ATG; this ebpR fragment contains the full ORF and RBS of ebpR; Erm R [11].
Flow cytometry analysis
The equivalent of ~1 OD600 of culture was collected for flow cytometry analysis, centrifuged, and the pellet frozen until used. The pellet was then washed twice with 1 ml of PBS (80 mM Na2HPO4, 20 mM NaH2PO4, 100 mM NaCl, pH 7.5), resuspended in 0.5 ml of paraformaldehyde buffer (4.4% w/v paraformaldehyde, 30 mM Na2HPO4, 30 mM NaH2PO4), and incubated at room temperature for 15 min. The cells were pelleted, resuspended in 0.5 ml of PBS-2% BSA, and subsequently placed at -80°C for at least an hour. Before labeling, the cells were washed twice in PBS. A pellet corresponding to 10^8 CFU was resuspended in 100 μl of PBS with the anti-EbpC polyclonal rabbit serum at a 1:1000 dilution and incubated at 4°C for 2 h. After centrifugation and two washes with PBS, the cells were resuspended in 100 μl of PBS with R-phycoerythrin-conjugated AffiniPure F(ab')2 goat anti-rabbit IgG (H+L) (Jackson ImmunoResearch Laboratories, Inc.) at a dilution of 1:100 and incubated at 4°C for 2 h. The cells were then washed twice, resuspended in 1 ml PBS, and stored at 4°C until they were analyzed with a BD FACSCalibur system (BD Biosciences, San Jose, CA).
Protein extraction and dot blot
Surface protein extracts from E. faecalis OG1RF and derivatives were prepared using mutanolysin (Sigma Chemical Co., St. Louis, MO). Cells grown at 37°C in the specified conditions were collected at 7 hr after starting the culture. The cells were washed and resuspended in 1/100 volume of 0.02 M Tris-HCl (pH 7.0)-0.01 M MgSO4 buffer. Mutanolysin was added to a final concentration of 5 U per 1 OD600 equivalent of cells and incubated at 37°C for 1 hr. The supernatants were collected after centrifugation at 13,600 rpm for 5 min. Equal amounts of the mutanolysin extract preparations (quantified using the BCA protein assay kit) were 2-fold serially diluted and spotted onto NitroPure membranes (GE Water and Process Tech., Watertown, MA) using the Bio-Dot Microfiltration Apparatus (Bio-Rad, Hercules, CA). The membranes were incubated with anti-EbpC rabbit polyclonal antiserum [9] at a dilution of 1:2000, followed by protein A-horseradish peroxidase conjugate (1:5000). Pilus production was then revealed using chemiluminescence (Amersham, Piscataway, NJ).

Resonant Spectrum Analysis of the Conductance of Open Quantum System and Three Types of Fano Parameter
We explain the Fano peak (an asymmetric resonance peak) as an interference effect involving resonant states. We reveal that there are three types of Fano asymmetry according to their origins: the interference between a resonant state and an anti-resonant state, that between a resonant state and a bound state, and that between two resonant states. We show that the last two show the asymmetric energy dependence given by Fano, but the first one shows a slightly different form. In order to show the above, we analytically and microscopically derive a formula in which the conductance is expressed purely in terms of a summation over all discrete eigenstates, including resonant states and anti-resonant states, without any background integrals. We thereby obtain microscopic expressions of the Fano parameters that describe the three types of the Fano asymmetry. One of the expressions indicates that the corresponding Fano parameter becomes complex under an external magnetic field.
We here consider a class of open quantum-dot systems, where all semi-infinite leads are attached to one site of a general N-site quantum dot. For this particular model, we rigorously transform the Landauer formula into a simple conductance formula expressed in terms of the discrete eigenstates, that is, the bound states, the resonant states, the anti-resonant states, and the anti-bound states. To our knowledge, this is the first time the effect of resonances on the conductance is shown exactly.
Based on the simple conductance formula, we next discuss the symmetry of resonance peaks in the conductance. We are particularly interested in an asymmetric conductance peak, namely the Fano effect [82]. In the simplest theory of resonance scattering, a peak observed in, say, the scattering cross section would have a symmetric Breit-Wigner shape of a Lorentzian. In fact, some of the peaks are asymmetric. Fano proposed a theory explaining the asymmetric shape [82]. The asymmetric resonance peak has been thereby referred to as the Fano resonance. In 2002, K. Kobayashi et al. observed Fano resonance peaks in the conductance through an Aharonov-Bohm system with a quantum dot [9,10] as well as through a T-shaped quantum dot [11,12]. Asymmetric Fano peaks were clearly observed in the conductance.
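Fano's line shape for the cross section near resonance is f(ε) = (ε + q)²/(ε² + 1), where ε is the energy measured from the resonance in units of the half width and q is the asymmetry (Fano) parameter; |q| → ∞ approaches a symmetric Lorentzian peak, while q = 0 gives a symmetric dip. A minimal numerical sketch:

```python
def fano(eps, q):
    """Fano line shape (eps + q)**2 / (eps**2 + 1)."""
    return (eps + q) ** 2 / (eps ** 2 + 1.0)

# q = 0: symmetric dip, vanishing at eps = 0.
print(fano(0.0, 0.0))                      # → 0.0
print(fano(1.0, 0.0) == fano(-1.0, 0.0))   # → True

# q = 1: asymmetric profile, with the zero shifted to eps = -q.
print(fano(-1.0, 1.0))                     # → 0.0
print(fano(1.0, 1.0))                      # → 2.0
```

The zero at ε = -q next to the maximum near ε = 1/q is exactly the asymmetric dip-peak structure seen in the quantum-dot conductance experiments cited above.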
It is a conventional understanding that the Fano effect arises from the coupling of continuous states in the leads and discrete states in the device [9,10,11,12]. In contrast, we here stress the importance of the interference between resonant states [17] as well as between a resonant state and a bound state when we consider the Fano conductance peak. We show that the complex eigenvalues of the resonant states of the whole system, the quantum dot with the leads, form the asymmetric conductance peak.
The present paper is organized as follows. In Sec. II, we review the theory of resonant states in open quantum systems. In Sec. III, we express, for an N -site open quantum-dot model, the retarded and advanced Green's functions in terms of the discrete eigenstates. We thereby derive a conductance formula consisting only of the local density of discrete eigenstates and the local density of states of the leads. In Sec. IV, we show that the asymmetry of the Fano conductance peak arises from the interference between the resonant states as well as between a resonant state and a bound state.
II. RESONANT STATES
As a preparation for the main part of the present paper, we review in this section the mathematics of the resonant state as an eigenfunction of the Schrödinger equation [18]. It is rather common to define a resonant state as a pole of the S matrix. In fact, there are two ways of defining the resonant state. The definition based on the S matrix may be called the indirect method [83]. We here use the direct method of definition; that is, we describe the resonant state as an explicit eigenfunction of the Schrödinger equation [18,20,21,22,23,24,25,26,27,28,29,30,31,32,84].
Suppose that we have a scatterer with several semi-infinite leads attached to it. For simplicity and concreteness, we hereafter restrict ourselves to the tight-binding model for the lead Hamiltonians. The total Hamiltonian is of the form

H = H_d + Σ_α ( H_α + H_{d,α} ),    (1)

where H_d is the one-body Hamiltonian of the scatterer (namely, the dot Hamiltonian), H_α is the Hamiltonian of a lead α, and H_{d,α} is the coupling between the dot and the lead α. We assume that each lead is a semi-infinite tight-binding chain,

H_α = −t Σ_{x_α=0}^{∞} ( |x_α+1⟩⟨x_α| + |x_α⟩⟨x_α+1| ).    (2)

Therefore, the energy E_k and the wave number k of incoming and outgoing electrons are related through the dispersion relation

E_k = −2t cos k.    (3)

FIG. 1. Distribution of the discrete eigen-wave-numbers on the complex k plane and of the corresponding eigenenergies, including E^{ab}_q of the anti-bound states (red crosses), on the complex energy plane. The upper and lower halves of the k plane respectively correspond to the first and second Riemann sheets of the E plane. A branch cut −2t < E < 2t accompanied by two branch points E = ±2t connects the two Riemann sheets.

We can define the resonant state as a solution of the Schrödinger equation for the whole Hamiltonian H under the boundary condition that the wave function has only out-going waves away from the scatterer [18,20,21,22]. The condition is often called the Siegert condition [21]. More specifically, we seek discrete and generally complex eigenvalues E_n of the whole system H,

H |ψ_n⟩ = E_n |ψ_n⟩,  E_n = −2t cos k_n,    (4), (5)
in the first Brillouin zone −π < Re k_n ≤ π, under the Siegert boundary condition [18,85,86]

⟨x_α|ψ_n⟩ = ⟨ψ̃_n|x_α⟩ ∝ e^{i k_n |x_α|}    (6)

for x_α on any lead α, where |ψ_n⟩ is the right-eigenfunction and ⟨ψ̃_n| is the left-eigenfunction [24,25,26,27,28,29,30,31,32,87,88]. (Note that (⟨ψ̃_n|)† ≠ |ψ_n⟩ in general.) The thus-obtained eigen-wave-numbers k_n = k^r_n + iκ_n as well as the corresponding eigenenergies are generally complex numbers. Note here that we have two Riemann sheets of E for the entire complex plane of k (Fig. 1). A branch cut −2t < E < 2t with two branch points E = ±2t connects the two Riemann sheets. The discrete eigenstates thus obtained are classified as follows (Table I and Fig. 1). First, the eigenstates with κ_n > 0 are necessarily on the imaginary axis Re k = 0 or on the edge of the Brillouin zone Re k = π. (In systems with continuous space, the bound states exist only on the imaginary k axis; the bound states on the line Re k = π appear because the leads of the present system are lattice systems.) By putting κ_n > 0 in Eq. (6), we see that these eigenstates are in fact bound states. Hereafter, we use the subscript p and the superscript 'b' for the bound states, as in k^b_p and E^b_p. The bound states with Re k^b_p = 0 have real negative eigenenergies E^b_p < −2t, while the bound states with Re k^b_p = π have real positive ones, E^b_p > 2t. Next, the eigenstates in the fourth quadrant of the k plane are referred to as the resonant states. Hereafter, we use the subscript l and the superscript 'res' for the resonant states, as in k^res_l and E^res_l. The corresponding eigenenergies are in the lower half of the second Riemann sheet of the E plane: Im E^res_l < 0. Third, the eigenstates in the third quadrant of the k plane are referred to as the anti-resonant states. (In the context of condensed-matter physics, some refer to a resonance in the form of a dip of the conductance as an anti-resonance.
In the present terminology, this is just another resonance, different from the anti-resonant state here.) Hereafter, we use the subscript m and the superscript 'ar' for the anti-resonant states, as in k^ar_m and E^ar_m. The corresponding eigenenergies are in the upper half of the second Riemann sheet of the E plane: Im E^ar_m > 0. A resonant state and an anti-resonant state always appear in a pair. The states of a pair are related to each other as k^ar_m = −(k^res_l)^*, E^ar_m = (E^res_l)^*, |ψ^ar_m⟩ = (⟨ψ̃^res_l|)†, and ⟨ψ̃^ar_m| = (|ψ^res_l⟩)†. We refer to a pair of a resonant state and the corresponding anti-resonant state as a resonant-state pair. Some systems have additional states on the negative part of the imaginary k axis or on the negative part of the edge of the Brillouin zone Re k = π. Such states often appear when the resonant and anti-resonant states of a pair collide on the axes. We refer to them as anti-bound states [89] and use the subscript q and the superscript 'ab', as in k^ab_q and E^ab_q. Anti-bound states possess real eigenenergies, but on the second Riemann sheet, and still have properties of the resonant states such as diverging wave functions.
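The classification above depends only on where the eigen-wave-number falls in the complex k plane, with the bound and anti-bound states confined to the axis Re k = 0 and the zone edge Re k = π. A minimal sketch in plain Python, with t = 1; the tolerance for the on-axis test and the sample k values are illustrative:

```python
import cmath
import math

def classify(k, tol=1e-9):
    """Classify a discrete eigen-wave-number k in the first Brillouin zone."""
    kr, kappa = k.real, k.imag
    on_axis = min(abs(kr), abs(kr - math.pi)) < tol
    if kappa > 0:
        return "bound"        # kappa > 0 states lie on Re k = 0 or Re k = pi
    if on_axis:
        return "anti-bound"   # real eigenenergy, but on the second Riemann sheet
    return "resonant" if kr > 0 else "anti-resonant"

def energy(k, t=1.0):
    """Dispersion relation E = -2t cos k, valid for complex k as well."""
    return -2.0 * t * cmath.cos(k)

print(classify(1.5j))             # → bound (imaginary axis, kappa > 0)
print(classify(1.8 - 0.1j))       # → resonant (fourth quadrant)
print(classify(-1.8 - 0.1j))      # → anti-resonant (third quadrant)
print(classify(math.pi - 0.5j))   # → anti-bound (zone edge, kappa < 0)
```

Consistently with the text, the bound state at k = 1.5i has a real energy below −2t, and the resonant state at k = 1.8 − 0.1i has Im E < 0.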
III. RESONANT SPECTRUM ANALYSIS OF AN OPEN QUANTUM N-SITE DOT
In the present section, we discuss an N-site extension of the Friedrichs-Fano (Newns-Anderson) model [82,88,90,91,92,93,94,95]. We derive a remarkably simple conductance formula for the model. The formula contains only the local density of discrete eigenstates and the local density of states of the leads.
We consider a one-body Hamiltonian of an N-site dot with semi-infinite leads {α} attached to it (Fig. 2),

H = H_d + Σ_α ( H_α + H_{d,α} ),    (12)

with

H_d = Σ_i ε_i |d_i⟩⟨d_i| + Σ_{i≠j} v_{ij} |d_i⟩⟨d_j|,    (13)

H_α = −t Σ_{x_α=0}^{∞} ( |x_α+1⟩⟨x_α| + |x_α⟩⟨x_α+1| ),    (14)

H_{d,α} = −t_α ( |d_0⟩⟨x_α=0| + |x_α=0⟩⟨d_0| ),    (15)

where ε_i, v_{ij}, t, and t_α are all real parameters with v_{ij} = v_{ji}. The Hamiltonian H_d is the tight-binding Hamiltonian of the N-site dot, while H_α is the tight-binding Hamiltonian of the one-dimensional semi-infinite lead α, and H_{d,α} is the hopping between a site d_0 on the central dot and the end site x_α = 0 of the lead α. We attach all the leads to the single site d_0 of the dot; this is due to a technical requirement that appears below. The system is an N-site extension of the Friedrichs-Fano model [82,88,90,91,92,93,94,95]. We will obtain the conductance G_{α→β}(E) from the lead α to the lead β in the form of Eq. (16), in which the only ingredients are ρ_eigen(E), the local density of discrete eigenstates of the whole system on the site d_0, and ρ_leads(E), the local density of states of the lead Hamiltonian Σ_α H_α. In the expression of ρ_eigen(E), the subscripts p, q, l, and m respectively denote the sets of the bound states, the anti-bound states, the resonant states, and the anti-resonant states.

Let us describe the derivation of Eq. (16) hereafter. We can obtain the exact expression of the scattering states of the system (12), namely the Friedrichs solution [90] |ψ^F_k⟩, with the eigenvalue E_k = −2t cos k; see Appendix A. The completeness relation with respect to the bound and scattering states is given by [96]

1 = Σ_p |ψ^b_p⟩⟨ψ^b_p| + ∫_{BZ} dk |ψ^F_k⟩⟨ψ^F_k|,

where |ψ^b_p⟩ is a bound state and |ψ^F_k⟩ is a scattering state given in Appendix A. We first express the retarded and advanced Green's functions in the spectral representation, where the integration contours C^R_BZ and C^A_BZ cover the Brillouin zone as indicated in Fig. 3. We then deform the contours, with C^R_∥(κ_0) indicating the sum of the paths parallel to the real axis and C^R_⊥(κ_0) the sum of the paths perpendicular to the real axis, including the contributions from the anti-bound states.
Note that κ_0 of the modified integration contour must be positive and greater than the absolute values of the imaginary parts of all the resonant eigen-wave-numbers.
At this point, we sum up the retarded and advanced Green's functions. The sum of the contributions of the integration contours C^R_⊥(κ_0) and C^A_⊥(κ_0) is equal to the contribution of the bound states and anti-bound states except for the sign. On the other hand, we proved that the contributions of the parallel integration contours C^R_∥(κ_0) and C^A_∥(κ_0) vanish for the states on the central dot; i.e., for |d_i⟩ and |d_j⟩ with any i and j, Eq. (28) holds; see Appendix B for the proof. Equation (28) does not seem to hold if the semi-infinite leads are not attached to a single site of the central dot. This is why we focused on the present system (12). Thus we find that, for the states on the central dot {|d_i⟩}, the sum of the retarded and advanced Green's functions is equal to the contribution of only the discrete eigenstates (Fig. 5):

⟨d_i| G^R(E) + G^A(E) |d_j⟩ = ⟨d_i| Λ(E) |d_j⟩,    (29)

where Λ(E) denotes the contribution of the discrete eigenstates. We also use the fact that the difference between the retarded and advanced Green's functions is generally given by [2]

G^R(E) − G^A(E) = −i G^R(E) Γ(E) G^A(E),    (31)

where Γ(E) = i[Σ^R(E) − Σ^A(E)], with Σ^R(E) and Σ^A(E) the self-energies of the semi-infinite leads. Equation (29) shows that the real part of the Green's function is given by the discrete eigenstates, while Eq. (31) shows that the imaginary part of the Green's function is given by the inverse of the van Hove singularities at the branch points E = ±2t [2]. The simultaneous matrix equations (29) and (31) result in matrix Riccati equations, whose solution gives each Green's function in terms of the contribution of the discrete eigenstates, Λ(E), and the contribution of the branch-point singularities, Γ(E). Using the fact that ⟨d_0|Γ|d_0⟩ is the only non-zero element of the matrix ⟨d_i|Γ|d_j⟩ for the present system (12), we first solve the above equations for i = j = 0, then for i = 0 with general j and for j = 0 with general i, and finally for general i and j. The sign in front of the square root in Eqs. (38) and (39) is chosen according to the rule given in Appendix D.
Using the Fisher-Lee relation [97], we arrive at the conductance G_{α→β}(E) from the lead α to the lead β in the form of Eq. (41), where G^{αβ}_max is the maximum possible conductance from the lead α to the lead β. (In the transformation of Eq. (41), we again used the fact that the matrix Γ^(α) has only the (0,0) element for the present system (12); see Eq. (32).) Equation (41) gives the remarkably simple formula (43), in which ρ_eigen(E) is the local density of discrete eigenstates of the whole system H on the site d_0, whereas ρ_leads(E) is the local density of states of the lead Hamiltonians Σ_α H_α, which has the van Hove singularities at the band edges E = ±2t. Note that ρ_eigen(E) has singularities at the discrete eigenvalues, whereas ρ_leads(E) has singularities at the branch points. The conductance itself has singularities due to the discrete eigenstates but not due to the branch points. We exemplify ρ_eigen(E) and ρ_leads(E) in Fig. 6 for a two-site dot with two leads with t_1/t = t_2/t = 1, ε_0/t = 5, ε_1/t = 0.5, and v_01/t = v_10/t = 0.5.
To summarize the present section, we have revealed the effect of resonances on the conductance explicitly and rigorously. To our knowledge, this is the first time the conductance is given exactly in terms of the sum over simple poles of the discrete eigenstates.
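The conductance can also be evaluated numerically via the Fisher-Lee relation with the exact lead self-energy: the surface Green's function of a semi-infinite tight-binding chain with hopping t is −e^{ik}/t with E = −2t cos k, so each lead contributes Σ_α(E) = −(t_α²/t) e^{ik} at the site d_0, and Γ_α = 2(t_α²/t) sin k. The sketch below is an illustration in numpy, not the authors' code; it applies to dots with all leads attached to d_0 and, for the point contact (N = 1, ε_0 = 0, t_1 = t_2 = t), reproduces perfect transmission at the band center:

```python
import numpy as np

def conductance(E, H_dot, t_leads, t=1.0):
    """T(E) = Gamma_1 * Gamma_2 * |<d0|G^R|d0>|^2 in units of 2e^2/h,
    for a dot H_dot whose site d_0 couples to two leads with hoppings t_leads."""
    k = np.arccos(-E / (2.0 * t))           # dispersion E = -2t cos k, 0 < k < pi
    g_surface = -np.exp(1j * k) / t         # retarded surface GF of each lead
    sigma = sum(ta ** 2 for ta in t_leads) * g_surface
    H_eff = np.array(H_dot, dtype=complex)
    H_eff[0, 0] += sigma                    # self-energy enters only at site d_0
    G = np.linalg.inv(E * np.eye(len(H_eff)) - H_eff)
    gammas = [2.0 * ta ** 2 * np.sin(k) / t for ta in t_leads]
    return gammas[0] * gammas[1] * abs(G[0, 0]) ** 2

# Point contact (N = 1, eps_0 = 0): perfect transmission at the band center.
print(round(conductance(0.0, [[0.0]], (1.0, 1.0)), 6))   # → 1.0

# Two-site dot of Fig. 6: eps_0 = 5, eps_1 = 0.5, v = 0.5 (in units of t).
H2 = [[5.0, 0.5], [0.5, 0.5]]
print(round(conductance(0.4, H2, (1.0, 1.0)), 3))
```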
IV. QUANTUM INTERFERENCE EFFECT OF DISCRETE EIGENSTATES
In the present section, we argue that the Fano conductance peak arises as a result of interference between discrete eigenstates. The conductance formula (43) contains the square of the local density of discrete eigenstates. Therefore, we have crossing terms within a resonant-state pair (between a resonant state and an anti-resonant state), between two resonant-state pairs (two sets of a resonant state and an anti-resonant state), and between a resonant-state pair and a bound state. We show in the present section, using several examples, that the discrete eigenvalues decide the symmetry or asymmetry of the conductance peaks in addition to their locations. We thereby derive the Fano parameter microscopically. In Subsecs. A, B, and C of the present section, we consider the system (12) with the following restrictions: only two leads, α = 1, 2; the couplings t_1 = t_2 = t; and the number of sites in the dot N = 1, 2, 3. We consider the effect of changing t_α in Sec. IV D. Throughout the present section, we computed the conductance using the Fisher-Lee relation (41) and obtained all discrete eigenvalues by solving Eq. (C30).
A. Point contact system: N = 1

First we show the conductance as well as the discrete eigenvalues of the one-site dot with two leads, namely the point contact shown in Fig. 7. There are only two bound states and no resonant state. We plot in Fig. 8 the conductance with the eigenvalues of the two bound states for ε 0 /t = 0, 1, 1.5, 2, 2.5. The conductance of the point contact shows no peculiar behavior such as a Breit-Wigner peak or a Fano peak. Upon increasing the potential ε 0 , the eigenvalues of the two bound states move away from the branch points E = ±2t. This decreases the contribution of the local density of the discrete eigenstates ρ eigen (E) and hence suppresses the conductance gradually.

We next show the conductance and the discrete eigenvalues of the two-site quantum dot with two leads, namely the T-shaped quantum dot shown in Fig. 9. This system is a minimal model that possesses a resonant-state pair (a resonant state and the corresponding anti-resonant state) and may be directly related to Fano's original argument [82]. We plot in Fig. 10 the conductance, the eigenvalues of the two bound states, E b 1 and E b 2 , and the eigenvalues of the resonant-state pair, E res and E ar , for ε 0 /t = 0, 1, 3, 5, ε 1 = 0 and v 01 /t = v 10 /t = 1.
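To see where the discrete eigenvalues plotted in Fig. 10 come from, the Eq.-(C30)-type calculation for the T-shaped dot can be sketched as follows. Assuming t α = t and the lead termination ψ(x + 1) = λ ψ(x), eliminating the two leads puts the effective potential −2tλ on d 0, and the secular equation, multiplied by λ², becomes a quartic in λ. The explicit coefficients below come from that elimination and are our reconstruction, not copied from the paper.

```python
import numpy as np

def t_dot_eigenvalues(eps0, eps1, v, t=1.0):
    # Discrete eigenvalues of the T-shaped dot: site d0 (on-site eps0,
    # coupled to both leads with t_alpha = t) and side site d1 (eps1),
    # hopping v between them.  With the termination psi(x+1) = lam*psi(x),
    #   (E - eps0 + 2*t*lam)*(E - eps1) = v**2,   E = -t*(lam + 1/lam),
    # which, multiplied by lam**2, is the quartic below (units of t).
    e0, e1, w = eps0/t, eps1/t, v/t
    lams = np.roots([1.0, -(e0 - e1), -(e0*e1 - w**2), -(e0 + e1), -1.0])
    return np.array([-t*(lam + 1.0/lam) for lam in lams])
```

For ε 0 = ε 1 = 0 and v = t this yields two real eigenvalues at E ≈ ±2.058 t (bound states, |λ| < 1) and a purely imaginary resonant/anti-resonant pair at E ≈ ∓0.486 i t, consistent with the particle-hole symmetry E ↔ −E of that parameter choice.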
We have a Breit-Wigner dip for ε 0 = 0, but for ε 0 ≠ 0, we have an asymmetric peak, namely the Fano conductance peak. Maruyama et al. [98] claimed that the asymmetry of the conductance peak of the T-shaped quantum dot is proportional to ε 0 . We here discuss the asymmetry from the viewpoint of interference among the discrete eigenstates.
The conductance formula (43) contains the square of the sum over the discrete eigenvalues of the form where Since the conductance formula (43) is given in the form the symmetry or the asymmetry of the quantity Ω(E) 2 is directly reflected in the symmetry or the asymmetry of the conductance peak. Equation (45) therefore implies that the symmetry or the asymmetry of the conductance peak is strongly affected by crossing terms, or the interference between states with discrete eigenvalues. We hereafter show that the Fano conductance peak arises from two types of interference, or two types of crossing terms. First, we have a crossing term within the resonant-state pair, or the interference between the resonant state and the anti-resonant state. Second, we have a crossing term between the bound states and the resonant-state pair.
We compare in Fig. 11 the following quantities: The second quantity (50) contains a crossing term between the resonant state and the anti-resonant state. The third quantity (51) contains crossing terms between the resonant state and a bound state as well as crossing terms between the anti-resonant state and a bound state. We can see in Fig. 11 that the asymmetry of the conductance peak comes partly from the asymmetry of the term Ω pair (E) and partly from the crossing term Ω b-pair (E). The quantity Ω b (E) is almost symmetric. In order to derive the Fano parameters for the asymmetry of the two terms Ω pair (E) and Ω b-pair (E) microscopically, we expand the terms (50) and (51) in the neighborhood of E = E res r = E ar r by using the normalized energy Ẽ. We first rewrite ρ pair (E) in the forms where we express the coefficient of the local density of the resonant state with the amplitude Ñ and the phase θ: Ñ e iθ ≡ ⟨d 0 |ψ res ⟩⟨ψ̃ res |d 0 ⟩/π.
Note that this is generally a complex number because the left-eigenvector ⟨ψ̃ res | is not generally the Hermitian conjugate of the right-eigenvector |ψ res ⟩ for a resonant state (see Eq. (9)). We then rewrite the local density of the resonant-state pair in the form or where q pair ≡ tan θ.
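The statement q pair = tan θ can be checked numerically with hypothetical numbers: summing the two simple poles of a resonant/anti-resonant pair, with complex-conjugate residues so that the density is real, gives a Lorentzian multiplied by (Ẽ + tan θ). All parameter values below are illustrative only.

```python
import numpy as np

# Hypothetical illustrative values: resonant state at Er - i*Gam, the
# anti-resonant partner at Er + i*Gam, residue amplitude Nt, phase theta.
Er, Gam, Nt, theta = 0.3, 0.1, 1.0, 0.6

def rho_pair(E):
    # sum of the two simple poles of the pair (residues are conjugates)
    term = Nt*np.exp(1j*theta) / (E - (Er - 1j*Gam))
    return (term + np.conj(term)).real / np.pi

def rho_closed(E):
    # Lorentzian times (Etilde + q) with q = tan(theta)
    Et = (E - Er)/Gam
    return 2*Nt*np.cos(theta)/(np.pi*Gam) * (Et + np.tan(theta))/(Et**2 + 1)
```

For θ = 0 the pair density is a symmetric Lorentzian; a nonzero residue phase θ skews it, which is the first asymmetry mechanism identified above.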
The parameter (57) controls the asymmetry of the term (50) and hence may be called the Fano parameter, although Eq. (56) is different from the form originally derived by Fano [82]: The asymmetry caused by the above interference between a resonant state and the corresponding anti-resonant state may be missing from Fano's argument.
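For comparison, Fano's original lineshape (58) can be written in reduced variables as f(Ẽ) = (Ẽ + q)²/(Ẽ² + 1); this standard form is quoted from the literature, since Eq. (58) itself is not reproduced above. It has an exact zero at Ẽ = −q, a maximum of 1 + q² at Ẽ = 1/q, and is symmetric only for q = 0.

```python
def fano(Et, q):
    # Fano's original lineshape: (Etilde + q)**2 / (Etilde**2 + 1)
    return (Et + q)**2 / (Et**2 + 1.0)
```

fano(Et, 0.0) is a symmetric dip, while any finite q produces the characteristic asymmetric profile with a zero on one side of the peak.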
On the other hand, the crossing term (51) produces asymmetry of Fano's original form (58). In order to see this, we approximate the local density of two bound states as in the neighborhood of E = E res r . We therefore have the crossing term between the resonant-state pair and the two bound states as where In order to derive a Fano parameter q b-pair that controls the asymmetry of the term Ω b-pair (E), we extract the form on the right-hand side of Eq. (58) by putting We obtain the Fano parameter q b-pair by solving the equation and choose the solution with the same sign as s. This controls the asymmetry of the term (51), a Fano parameter that is different from the one given by Eq. (57), but that conforms to Fano's original form (58). We show in Fig. 12 how the two Fano parameters q pair and q b-pair depend on the system parameter ε 0 . In the particular case of Fig. 12, q b-pair tends to dominate over q pair as we increase the system parameter ε 0 . This is in coordination with the decrease of |E res i |. We can see in Eq. (62) that a small imaginary part |E res i | causes a particularly strong asymmetry of the term Ω b-pair (E). This is indeed demonstrated in Fig. 10, where, as we increase ε 0 , the asymmetry rapidly develops while the resonant eigenvalue approaches the real axis. Incidentally, the present system has the particle-hole symmetry E ↔ −E for ε 0 = ε 1 = 0, and hence q pair = q b-pair = 0, for which the resonance peak takes the form of a symmetric Lorentzian as shown in Fig. 10(a).

Third, we discuss the conductance of the three-site quantum dot with two leads shown in Fig. 13. This system has two resonant states for some parameter values. This situation was not considered in Fano's argument [82]. We show in Fig.
14 the conductance, the eigenvalues of the two bound states, E b 1 and E b 2 , as well as the eigenvalues of the two resonant-state pairs, E res 1 , E ar 1 , E res 2 and E ar 2 , for ε 1 /t = −1.5, −1, −0.5, 0 with ε 0 /t = 0, ε 2 /t = 0.5, v 01 /t = v 10 /t = 0.8, v 02 /t = v 20 /t = 0.5 and v 12 /t = v 21 /t = 0.4. Upon increasing the parameter ε 1 , the conductance dip that is generated by the resonant state on the left-hand side, E res 1 , approaches the other conductance dip that is generated by the resonant state on the right-hand side, E res 2 . Then the latter conductance peak develops strong asymmetry.

FIG. 14: The conductance (curve for the left axis) for the three-site dot with (a) ε1/t = −1.5, (b) ε1/t = −1.0, (c) ε1/t = −0.5 and (d) ε1/t = 0, plotted with all the discrete eigenvalues (crosses for the right axis). We fixed ε0/t = 0, ε2/t = 0.5, v01/t = v10/t = 0.8, v02/t = v20/t = 0.5 and v12/t = v21/t = 0.4.

For the present system, we have yet another Fano parameter due to a crossing term between one resonant-state pair and the other resonant-state pair. The conductance formula (43) contains the square of the sum over the discrete eigenvalues of the form where We compare in Fig. 15 the following quantities: We can see that the following three terms are asymmetric: first, Ω pair 2 (E) 2 , which contains the crossing term between the resonant eigenstate ψ res 2 and the anti-resonant eigenstate ψ ar 2 ; second, Ω b-pair 2 (E), which is the crossing term between the bound states (ψ b 1 , ψ b 2 ) and the resonant-state pair (ψ res 2 , ψ ar 2 ); third, Ω pair-pair (E), which is the crossing term between the two resonant-state pairs (ψ res 1 , ψ ar 1 ) and (ψ res 2 , ψ ar 2 ). In order to derive the Fano parameters for the asymmetry of the three terms, we expand the terms (70)-(72) in the neighborhood of E = E res r2 by using the normalized energy Ẽ. We can analyze the terms Ω pair 2 (E) and Ω b-pair 2 (E) in the same way as in the previous subsection.
We again use the expression Ñ. Then the Fano parameter controlling the asymmetry of the term Ω pair 2 (E) is given by Following the same logic as in Eqs. (52)-(65), we obtain the Fano parameter that controls the asymmetry of the term Ω b-pair 2 (E) by solving Next, in order to discuss the quantity Ω pair-pair (E), we use the expansion We then approximately have the crossing term between the two resonant-state pairs as with We thus have yet another Fano parameter q pair-pair as the solution of We show in Fig. 16 how the three Fano parameters q pair 2 , q b-pair 2 and q pair-pair depend on the system parameter ε 1 . In the particular case of Fig. 16, the third Fano parameter q pair-pair is the greatest in most of the range. This may be due to the following reason. The first term of s ′ for the parameter q pair-pair contains the Lorentzian Therefore, s ′ grows fast as the resonant-state pair E res r1 approaches the resonant-state pair E res r2 up until |E res r1 − E res r2 | ∼ |E res i1 |. This is in contrast to the first term of s for the parameter q b-pair 2 , which contains for p = 1, 2. This is indeed demonstrated in Fig. 14, where, as we increase ε 1 , the asymmetry rapidly develops while the resonant-state pair (E res 1 , E ar 1 ) approaches (E res 2 , E ar 2 ).

[FIG. 15 caption fragment: ... (E) (chained blue curve) and Ω pair-pair (E) (dotted purple curve). The system is the three-site dot. We fixed ε0/t = 0, ε1/t = 0, ε2/t = 0.5, v01/t = v10/t = 0.8, v02/t = v20/t = 0.5 and ...]

[FIG. 16 caption fragment: ... (red curve) and q pair-pair (purple curve), plotted with the difference of the real parts of the two resonant eigenvalues, E res r2 − E res r1 . Use the right axis for the Fano parameters and the left axis for the eigenvalue difference. We fixed ε0/t = 0, ε2/t = 0.5, v01/t = v10/t = 0.8, v02/t = v20/t = 0.5 and v12/t = v21/t = 0.4.]
D. The effect of the hopping energy t α between the central dot and the leads

Finally, we briefly show the effect of the hopping energy t α between the central dot and the lead α. We here use the three-site dot with two leads in a parameter regime where there are three resonant-state pairs and no bound states. We correspondingly have three sharp peaks in the weakly coupled case t 1 /t = t 2 /t = 0.1, as in Fig. 17(a). Upon increasing the hopping energy t 1 = t 2 , the second peak, corresponding to the resonant-state pair with the least modulus of the imaginary part, develops asymmetry. At t 1 /t = t 2 /t = 1/ √ 2, the resonant and anti-resonant states of one resonant-state pair collide and become two anti-bound states, which leaves two resonant-state pairs. For t 1 /t = t 2 /t > 1/ √ 2, the second peak continuously develops the asymmetry. (The anti-bound states become bound states before t 1 = t 2 = t.)
V. CONCLUSION
We carried out the spectral analysis of the open quantum N -site dot with multiple leads. We obtained the simple conductance formula (43) in terms of the local density of discrete eigenstates (the bound states, the resonant states, the anti-resonant states and the anti-bound states), ρ eigen (E), and the local density of states of the leads, ρ leads (E). To our knowledge, this is the first time that the conductance has been given exactly as a sum over all the simple poles.
We then showed that the Fano conductance arises from crossing terms of three origins: first, between a pair of a resonant state and an anti-resonant state; second, between a resonant-state pair and a bound state; and finally, between two resonant-state pairs. We also presented a microscopic derivation of the Fano parameter.
The analysis in the present paper is applicable only to non-interacting systems. It is an interesting and challenging problem to generalize the present approach to interacting systems. The Kondo effect, for example, has been observed in recent experiments on quantum dots and attracts much theoretical interest. The present approach may be particularly useful in analyzing the interplay between the Fano resonance and the Kondo resonance.

FIG. 17: The conductance (curve for the left axis) for the three-site dot with (a) t1/t = t2/t = 0.1, (b) t1/t = t2/t = 0.3, (c) t1/t = t2/t = 0.6 and (d) t1/t = t2/t = 0.8, plotted with all the discrete eigenvalues (crosses for the right axis). The gray curves and the gray crosses indicate the conductance and the discrete eigenvalues for t1/t = t2/t = 1, the same data as plotted in Fig. 14. We fixed ε0/t = 0, ε1/t = 0, ε2/t = 0.5, v01/t = v10/t = 0.8, v02/t = v20/t = 0.5 and v12/t = v21/t = 0.4.
Acknowledgments
This work is supported by Grant-in-Aid for Scientific Research No. 17340115 from the Ministry of Education, Culture, Sports, Science and Technology as well as by Core Research for Evolutional Science and Technology (CREST) of Japan Science and Technology Agency.

APPENDIX A

In the present appendix, we solve the Lippmann-Schwinger equation for the present system (12) to obtain the Friedrichs solution [90] of the scattering states. The Lippmann-Schwinger equation may be written down as where the state |k, α is an eigenstate of H 0 (more specifically, of H α ) with the eigenvalue E k = −2t cos k, and δ is a positive infinitesimal ensuring that the solution is an outgoing wave.
The formal solution of the Lippmann-Schwinger equation (A1) is given in the form Using the resolution of unity we then have where In order to transform the final term on the right-hand side of Eq. (A6), we calculate the following: We thereby have We therefore arrive at We describe in Appendix C how we can calculate the Green's function G R ij .
APPENDIX B: PROOF OF EQ. (28)

In the present Appendix, we prove Eq. (28). Using the expression (A10) of the scattering state, we have We therefore have where we used Eq. (C14) for the Green's functions with the expression (C33) for the effective potential.
On the paths C R (κ 0 ) and C A (κ 0 ), we let k = k r ± iκ 0 and integrate with respect to k r . For k = k r + iκ 0 , the element e −ik grows in the limit κ 0 → ∞ in the denominators of two of the three factors on the right-hand side of Eq. (B2). For k = k r − iκ 0 , the element e ik grows in the limit κ 0 → ∞ again in the denominators of two of the three factors. Therefore the integral (B2) vanishes on the paths C R (κ 0 ) and C A (κ 0 ) in the limit κ 0 → ∞. Thus Eq. (28) is proved for the system (12).
APPENDIX C: THE GREEN'S FUNCTION IN THE CENTRAL DOT AND CALCULATION OF THE RESONANCES
In this appendix, we describe the calculation of the Green's function G R ij (E) for the states in the central dot, {|d i }. The calculation utilizes the self-energy of the semi-infinite leads [2,85,86,99,100,101,102,103,104,105,106,107,108]. Using the expression of the Green's function, we also give an equation that gives the resonant states.
The basic statement is the fact where the thus-defined effective Hamiltonian H R eff has degrees of freedom only on the central dot. Below, we will review the derivation of the following form: where Therefore, we can calculate the Green's function G R ij by inverting an N × N matrix (C2).
There are several ways of deriving Eq. (C1). One way is to use the resolvent expansion where In calculating G R ij (E) defined in Eq. (C1), we should note the following. Let H d denote the Hilbert space spanned by the states on the central dot, {|d i }, and H lead denote the Hilbert space spanned by the states on the leads, {|x α }. Then we have That is, the operator (E − H 0 + iδ) −1 , when applied to a state either in H d or H lead , does not change its Hilbert space, whereas the operator H 1 switches it. Therefore, all terms of odd orders of H 1 in the resolvent expansion of G R ij vanish. All terms of even orders of H 1 (except the zeroth order) have powers of the following factor: We will show below that this quantity is equal to V R eff (E) defined in Eq. (C4). We therefore have which can be summarized as The remaining task is to calculate We then use the resolvent expansion (C17) Reasoning similar to that described in Eqs. (C5)-(C14) leads us to Thanks to the translational invariance, we should have G R lead (E; 0) = G R lead (E; 1). Then, Eq. (C18) reduces to a quadratic equation which is followed by where we fixed the sign in front of the square root so that the imaginary part may be negative. Thus the quantity (C12) was indeed shown to be equal to V R eff (E) defined in Eq. (C4).
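The quadratic equation for the lead Green's function and the branch choice can be verified numerically. The sketch below assumes the standard Dyson recursion for a uniform semi-infinite lead, g = 1/(E − t² g), i.e. t² g² − E g + 1 = 0, and selects the retarded root by giving the energy an infinitesimal positive imaginary part:

```python
import numpy as np

def lead_surface_g(E, t=1.0, delta=1e-9):
    # Solve t**2 g**2 - E g + 1 = 0, obtained from the Dyson recursion
    # g = 1/(E - t**2 * g).  The shift E -> E + i*delta picks out the
    # retarded root: negative imaginary part inside the band |E| < 2t,
    # and the decaying (smaller-modulus) root outside it.
    roots = np.roots([t**2, -(E + 1j*delta), 1.0])
    return roots[np.argmin(np.abs(roots))]
```

Inside the band this reproduces g = (E − i√(4t² − E²))/(2t²), whose negative imaginary part matches the sign fixed in the text.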
To summarize the above, the retarded Green's function given by (C14) is expressed in the form (C1) with the effective potential V R eff defined in Eq. (C4). The Green's functions that are used in the expression of the scattering state (A10) are therefore obtained by inverting the N × N matrix d i |(E − H R eff (E))|d j for a fixed value of E. Incidentally, the infinitesimal +iδ in the denominator of the definition (C1) is not necessary anymore because V R eff already has an imaginary part. In fact, the advanced Green's function is given by flipping the sign of the imaginary part; Although we have derived the expression (C1) particularly for the present system (12) with all the leads attached to a single site, the expression (C1) itself holds for more general systems with appropriate changes of definition of the effective Hamiltonian (C2); see Refs. [2,85,86,99]. We can reduce the calculation of the Green's function further for the present system (12), using the resolvent expansion (C13) again. For i = j = 0, Eq. (C13) now gives where Summing the series we obtain This reduces the calculation of G R 00 from the inversion of a non-Hermitian matrix (E − H R eff ) to that of a Hermitian matrix (E − H d ). For j = 0 or i = 0, we have respectively, and for general i and j we have Now we show how we can calculate all resonant states for the system (12). As is evident in the Fisher-Lee relation (41), the conductance of the present system has poles in the complex energy plane wherever the Green's function G R 00 (E) has poles. The expression (C26) immediately gives the equation for the resonant states in the form The resonant states given for the examples in Sec. IV were thus calculated. Equation (C30) holds particularly for the present system (12) with all the leads attached to a single site. For a more general case, the Green's function G R 00 is given by inversion of the matrix (E − H R eff (E)).
Therefore, all resonant states can be calculated by solving the equation The above discussion leads us to a much simpler way of deriving Eq. (C1) [99]; we formulate the Green's function so that it may have poles for the resonant states. Since a resonant state satisfies the boundary condition (6), we have x α + 1|ψ res = e ik res x α |ψ res (C32) with Re k res ≥ 0. This terminates the Schrödinger equation for the semi-infinite leads with the effective potential Solving the dispersion relation E k = −2t cos k = −t(e ik + e −ik ), we have which again gives Eq. (C4). See Ref. [99] for details.
APPENDIX D: CHOOSING THE SIGN OF THE SOLUTION TO THE RICCATI EQUATION
In this appendix we will derive a criterion to choose either the plus or the minus sign in the solution to the matrix Riccati equation, Eqs. (38) and (39). We will consider the matrix element G R 00 (E) ≡ d 0 |G R (E)|d 0 , which appears in the conductance (41). For i = j = 0, the equation (36) reduces to iG R 00 Γ 00 G R 00 + G R 00 (2 + iΓ 00 Λ 00 ) + Λ 00 = 0, where G R 00 ≡ d 0 |G R (E)|d 0 , Λ 00 ≡ d 0 |Λ(E)|d 0 , Γ 00 ≡ d 0 |Γ(E)|d 0 and we made use of the fact that the matrix Γ has only the (0, 0) element, Γ 00 , for the present system (12); see Eq. (32). The solution is given by We here use Eqs. (29) and (31) for i = j = 0, which gives iΓ 00 = G A 00 − G R 00 .
"year": 2009,
"sha1": "6294557eccecccc610a95e1ee279c7d9abfc5abe",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0905.3953",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6294557eccecccc610a95e1ee279c7d9abfc5abe",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Internal jugular vein: Peripheral vein adrenocorticotropic hormone ratio in patients with adrenocorticotropic hormone-dependent Cushing's syndrome: Ratio calculated from one adrenocorticotropic hormone sample each from right and left internal jugular vein during corticotrophin releasing hormone stimulation test
Background: Demonstration of central: Peripheral adrenocorticotropic hormone (ACTH) gradient is important for diagnosis of Cushing's disease. Aim: The aim was to assess the utility of internal jugular vein (IJV): Peripheral vein ACTH ratio for diagnosis of Cushing's disease. Materials and Methods: Patients with ACTH-dependent Cushing's syndrome (CS) were the subjects for this study. One blood sample each was collected from right and left IJV following intravenous hCRH at 3 and 5 min, respectively. A simultaneous peripheral vein sample was also collected with each IJV sample for calculation of IJV: Peripheral vein ACTH ratio. IJV sample collection was done under ultrasound guidance. ACTH was assayed using electrochemiluminescence immunoassay (ECLIA). Results: Thirty-two patients participated in this study. The IJV: Peripheral vein ACTH ratio ranged from 1.07 to 6.99 (n = 32). It was more than 1.6 in 23 patients. Cushing's disease could be confirmed in 20 of the 23 cases with IJV: Peripheral vein ratio more than 1.6. Four patients with Cushing's disease and 2 patients with ectopic ACTH syndrome had IJV: Peripheral vein ACTH ratio less than 1.6. Six cases with unknown ACTH source were excluded for calculation of sensitivity and specificity of the test. Conclusion: IJV: Peripheral vein ACTH ratio calculated from a single sample from each IJV obtained after hCRH had 83% sensitivity and 100% specificity for diagnosis of CD.
disease (CD)] that are usually small. [3,4] Contrast-enhanced dynamic magnetic resonance imaging (MRI) has improved the detection rate for pituitary adenomas causing CD. However, the possibility of an incidental pituitary microadenoma in a patient with CS further complicates the problem. [4] Ectopic ACTH secretion (from a non-pituitary tumor) is responsible for about 10-15% of cases of ACTH-dependent CS. Therefore, it is important to document a central: Peripheral ACTH gradient to differentiate CD from ectopic ACTH syndrome (EAS). Bilateral inferior petrosal sinus sampling (BIPSS) with corticotrophin releasing hormone (CRH) stimulation is currently the gold standard for the diagnosis of CD. [5] BIPSS is not widely available because it is technically demanding. Although rare, the procedure is associated with serious neurological complications and venous and pulmonary thromboembolism. [6] We studied the feasibility of direct ultrasound-guided internal jugular vein (IJV) sampling for ACTH in a small cohort of patients with ACTH-dependent CS. [7] An IJV: Peripheral vein gradient for ACTH was observed in two-thirds of patients with CD. Here, we report the results of CRH-stimulated IJV: Peripheral vein ACTH ratio in patients with ACTH-dependent CS.
MATERIALS AND METHODS
Patients with ACTH-dependent CS were the subjects for this study. Children less than 10 years of age, patients with pituitary macroadenoma, and very ill patients (patients with multiple vertebral fractures, severe myopathy, etc.) were excluded.
IJV ACTH sample collection was done in the ultrasound room in the Radiology department. Basal samples were collected from a previously placed IV cannula at the cubital vein at −5 and 0 min. One hundred micrograms of human CRH (hCRH) (Ferring) was given through the peripheral IV cannula. IJV blood collection was done under ultrasound guidance (direct puncture) by a dedicated radiologist. Blood was collected with a 21-G needle at the level of the mandible with the patient in supine position, as described previously. [8] The needle was inserted keeping the tip toward the medial wall of the IJV. Blood was collected at 3 and 5 min following intravenous hCRH from the right and left IJV, respectively. Simultaneously (with each IJV sample), peripheral vein samples were also collected at 3 and 5 min. The patient was asked to do a Valsalva maneuver during IJV sampling.
Blood samples were collected in pre-chilled plastic tubes containing ethylenediaminetetraacetic acid (EDTA) and were sent to laboratory immediately. ACTH was assayed using electrochemiluminescence immunoassay (ECLIA).
Two monoclonal antibodies specific for ACTH 9-12 and for the C-terminal region (ACTH 36-39) were used for ACTH assay. [9] The measuring range for this assay was 1-2000 pg/ml. Ratios of IJV: Peripheral vein ACTH were estimated for right and left IJV (i.e., right IJV ACTH/simultaneously collected peripheral vein ACTH and left IJV ACTH/simultaneously collected peripheral vein ACTH) separately. From these two values, the higher number was used for analysis.
This being a pilot study, a sample size of 30 was planned based on the number of subjects likely to be available over a 2-year period. The study protocol was approved by the institutional ethics committee. Informed consent was taken from patients (parents in case of subjects less than 18 years of age, in addition to assent from the patient).
RESULTS
This study was carried out over a period of 26 months starting from March 2010. During this period, 52 patients (37 females and 15 males) were diagnosed to have CS, 4 were ACTH independent (adrenal adenoma), and 48 were ACTH dependent. Among the ACTH-dependent patients, four had pituitary macroadenoma, six patients were too ill (all patients had multiple vertebral fractures and severe myopathy) to undergo the procedure, one patient did not give consent (a 13-year-old boy who was apprehensive about the procedure), and five had not completed investigations/treatment. Thirty-two patients [23 females and 9 males, age 12-55 years (mean ± SD, 26 ± 11)] participated in this study. Details of these patients are given in Table 1. Eleven of these 32 patients had more than 80% suppression of plasma cortisol on the high-dose dexamethasone suppression test (HDDST). MRI (contrast-enhanced dynamic scans) revealed lesions in 19 of these 32 patients, with size ranging from 2 to 8 mm. Five lesions were more than 5 mm in size. Two patients were diagnosed to have EAS with computed tomography (CT) and (68)Ga-DOTANOC positron emission tomography-computed tomography (PET-CT).
The IJV: Peripheral ACTH ratio ranged from 1.07 to 6.99 (n = 32). It was more than 1.6 in 23 (more than 3 in 11) patients. CD could be confirmed in 16 on histopathology; 4 are in remission following pituitary surgery although a tumor could not be identified on histopathology. Two young women (cases 21 and 22) underwent pituitary exploration twice, but no tumor could be identified at surgery or on histopathology. Case 23 underwent bilateral total adrenalectomy as a life-saving procedure. There were nine patients who had an IJV: Peripheral vein ACTH ratio less than 1.6. Three of them had a corticotroph tumor confirmed on histopathology, EAS could be confirmed in two patients (one thymic carcinoid and another pulmonary carcinoid), and the other four patients underwent bilateral total adrenalectomy as they had severe hypercortisolism and the test results were discordant.
The peripheral CRH stimulation test (using a cut-off of a 50% rise for ACTH and ≥13% for cortisol) was positive in the 23 cases with an IJV: P vein ACTH ratio more than 1.6 (one patient who had no cortisol response showed a positive ACTH response, and another had a positive cortisol response while there was no ACTH response). There was a more heterogeneous pattern among those with an IJV: P ratio less than 1.6. The three patients who had CD gave a positive response, while the two EAS patients showed a negative response both for ACTH and cortisol. Of the four who underwent adrenalectomy, one had a positive ACTH and cortisol response, one had a positive ACTH response with a negative cortisol response, one had a negative ACTH response with a positive cortisol response, and one was negative for both ACTH and cortisol.
Twenty-four of these 32 were CD (19 pituitary tumors with positive immunohistochemistry for ACTH, 4 remission after pituitary surgery, and 1 adrenalectomy) and 2 were EAS. Six patients were grouped under ACTH source unknown and are on follow-up. For calculation of sensitivity and specificity of the test (IJV: P ACTH ratio), these six cases were not included.
Taking an IJV: Peripheral vein ACTH ratio > 1.6, this test showed 83% sensitivity with 100% specificity for diagnosing CD [Table 2]. Both the patients with EAS had lower ratios.
All patients tolerated the procedure well. Following the procedure, all the patients experienced CRH flushing, and five patients complained of local neck discomfort, which resolved spontaneously. There were no other adverse events during or following the procedure.
DISCUSSION
The present study was undertaken to assess the utility of the IJV: Peripheral vein ACTH ratio for the diagnosis of CD. Ultrasound-guided direct venous puncture was used for blood collection from the IJV. Samples were collected 3 and 5 min after intravenous hCRH administration, from the right and left IJV, respectively, along with a simultaneous peripheral vein sample. A single sample was collected from the right IJV at 3 min, together with a simultaneous peripheral vein sample; similarly, a single sample was collected from the left IJV at 5 min, along with a peripheral vein sample at the same time. The IJV: Peripheral vein ACTH ratio was calculated for the right and left IJV separately, and the higher number was taken as the ratio. Thirty-two patients (23 females and 9 males), with age ranging from 12 to 55 (mean ± SD 26 ± 11) years, were enrolled in this study.
The CRH-stimulated IJV: Peripheral ACTH ratio ranged from 1.07 to 6.99. Eleven patients had an IJV: Peripheral vein ACTH ratio equal to or more than 3, while 23 had a ratio more than 1.6. Among the 23 with a ratio more than 1.6, 20 had CD, while the ACTH source could not be confirmed in 3. Among the nine patients with an IJV: Peripheral vein ACTH ratio less than 1.6, four had CD, two had EAS, and the ACTH source could not be identified in three patients. Using a cut-off of 1.6, this test had a sensitivity of 83% with a specificity of 100% (CS cases with an unknown ACTH source were excluded for calculation of sensitivity and specificity) for the diagnosis of CD.
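The quoted 83% and 100% follow directly from the counts above once the six unknown-source cases are excluded; as a quick arithmetic check (not part of the original analysis):

```python
# Counts from the cohort above, excluding the six unknown-source cases:
# ratio > 1.6 -> 20 Cushing's disease (TP), 0 ectopic ACTH syndrome (FP)
# ratio < 1.6 ->  4 Cushing's disease (FN), 2 ectopic ACTH syndrome (TN)
tp, fp, fn, tn = 20, 0, 4, 2
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
# -> sensitivity = 83%, specificity = 100%
```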
P. C. Scriba reported the first successful ACTH estimation in samples obtained from the IJV in patients with ACTH-dependent CS in 1966. [8] He demonstrated an IJV: Peripheral vein ratio of 1.5 ± 0.15 in four out of five CD patients. There was an absence of a C: P gradient in three ectopic Cushing's patients. Since then, there have been several reports of IJV sampling. [10][11][12][13] As noninvasive imaging techniques for diagnosis of pituitary tumor became available, this procedure became less popular. [5] Erickson, et al. [14] Ultrasound-guided IJV sampling is less invasive and can be done along with the CRH stimulation test. In the present study, the peripheral CRH stimulation test correctly identified more CD patients than the IJV: Peripheral vein ratio. The peak ACTH response was seen between 5 and 15 min in the peripheral vein samples [Figure 1]. The IJV samples were collected at 3 and 5 min after CRH administration. Had these samples been collected later, such as at 7 and 10 min, they may have given a greater gradient and better sensitivity. This needs to be tested in more patients.

[Table footnote: For calculation of sensitivity and specificity, the unknown has been excluded. CST: CRH stimulation test; ACTH: adrenocorticotropic hormone]
The main limitation of this study is the number of patients with an unknown ACTH source. We have a final diagnosis in only 26 of the 32 cases. The other 6 (19%) cases will require further follow-up to identify the source of ACTH excess. Some of these may be CD, while others may have an occult ectopic or non-neoplastic cause for hypercortisolism. [15] The etiology of CS remains occult in about 10% of patients even after extensive investigations. [5,16]

ACKNOWLEDGMENTS

Mr. Leslie George and Ms. Shiji, technologists in the Department of Endocrinology, are acknowledged for their help in hormonal assays.
"year": 2013,
"sha1": "4a24ce7827032a51c02b2534574fc04b67e5429d",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/2230-8210.107843",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f1b6c6b363a7621d4fdde4f5aa76fdb0ef1d177c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
To Predict with Confidence, Plan for Freedom
• Prediction may not be what matters anyway. If we abandoned hope of predicting the future, we could still describe a compelling outcome of transportation investment, one that motivates many people who will never care about a ridership prediction or economic impact analysis. We could also predict it in the sense that we can predict the continued value of pi. That idea is freedom, as transportation expands or reduces it.
• We can reach many strong conclusions without knowing. A surprising number of facts about transportation, including some fairly counterintuitive insights that would be transformative if widely understood, can be described and justified solidly with little or no empirical ground, because they are matters of geometry and physics or of nearly axiomatic principles of biology.
The Limitations of Prediction
When I presented a proposal for redesigning Houston's bus network to the board of directors of the transit agency there, the board chair asked me: "What will the ridership growth be?" When it became clear that nobody wanted to hear me explain why ridership is not really predictable, or why other outcomes might matter more, I offered my best professional guess: 20% ridership growth after two years, net of all external changes. A run of the regional model later came up with the same answer.
Two years after the plan was implemented, there is no way to prove that prediction wrong or right because many events have impacted Houston and the business of urban transportation. Gas prices have fallen, causing job losses in Houston's petroleum-based economy as well as increasing driving overall. Uber and Lyft have grown their market share. There are also internal factors that are hard to separate: Houston Metro opened two light rail extensions in the months before the bus network plan was implemented. Even if those events had been farther apart, the process of ridership growth after an improvement can take years.
When I said my prediction was "net of all external changes" I was defining the limits of what our network design could be responsible for, as you would expect a careful consultant to do. But in saying that, I was also rendering my claim unverifiable. There is no widely agreed upon way to sort out the causes of ridership, so there is no way to know the actual ridership change "net of external changes." Under duress, I had performed a prediction but not really made one.
To some who hear this, it sounds as if I have confessed to some sort of con. But I had said what I meant, and I had given the board everything that I could responsibly offer. While I could not give them certainty about the future, my willingness to make an educated guess conveyed that I am a responsible professional whose view is worth taking seriously. Making predictions, even untestable ones or ones that nobody will care about later, is part of the cultural process for establishing authority.
Political statistician Nate Silver's fame rests on having predicted the outcome of the 2012 presidential election in every state, and all but one state in 2008. But what Silver really predicted was a distribution of possible vote percentages for each state, each indicating that one candidate had slightly better odds than the other. He was fortunate that every state's outcome landed in that better-than-50% part of its predicted probability range, because he certainly had no basis for predicting that (Silver 2012). Still, the notion that he predicted, in the sense of seeing the future, is the basis of his mystique.
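The point can be made concrete with a toy calculation. If each of n independent state-level calls favors the correct side with probability p, the chance that every call lands on the favored side is p to the n. The per-state probabilities below are hypothetical illustrations, not Silver's actual figures.

```python
# Probability that every one of a set of independent probabilistic calls
# lands on its favored side. Illustrative numbers only.

def prob_all_correct(per_state_probs):
    prob = 1.0
    for p in per_state_probs:
        prob *= p
    return prob

# Ten hypothetical swing states, each called correctly with probability 0.8
# (the remaining states treated as near-certain):
swing = [0.8] * 10
print(prob_all_correct(swing))  # ~0.107: a clean sweep is itself unlikely
```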
Prediction is also the essence of the sales pitch "Buy this, and you will be happy." Purveyors of technology have always regaled us with exciting predictions of how life will be in the future. Can we sort out the role of self-interest in the predictions we hear? Do the ultimate consumers of these predictions even want to?
None of this is to question the tremendous value of good predictions, or the work of modeling in fields from climate to transportation. The models worth trusting, though, are not just predictions but descriptions of mechanisms whose operations are more or less understood. Weather forecasting is more reliable than political forecasting for exactly this reason.
There have been great ages of theory in urban planning, but ours is an age when empiricism reigns. Data, preferably big data, suffuses transportation debates as though it were a final authority, as though one could translate data into information without assumptions. But we do not need to do more experiments to verify the value of pi, or the fact that organisms consume nutrients and excrete waste. These concepts are axioms, deriving from our definition of a circle, and of life, respectively. You could argue that pi is true in Euclid's space but not Einstein's, so let's add an important clarification: These principles are undoubtable axioms of the world at human scale, the world we are talking about in urban planning.
A key feature of this kind of knowledge is that to know it is true now is to know it is true throughout time and space. To know what pi is, and to know what kind of knowledge it is, is to know the value of pi in 2040, and on Mars.
What mechanisms could we describe if we confined ourselves to such concepts, avoiding the more empirical terrain of social and cultural studies? What predictions could we make, with a level of confidence that is not just a spread of probabilities, but real certainty about the future? To think about the future, let's think about something equally unknowable: an alien world.
Bortworld: A Thought Experiment 1
Suppose that somewhere else in the universe, there's another planet with intelligent life. We don't know what they look like, or what gases they breathe, or whether they're inches or miles tall. We don't know whether they move by hopping, drifting, or slithering. We don't know what they call themselves, so let's call them borts. Let's make just a few assumptions about them.
First, let's assume that the borts tend to cluster in certain places on their planet, which enables trade, creativity, ritual, or whatever other activities give value to their lives. Let's call these places cities. Since cities are places where borts are relatively close together, they have relatively little space per bort. Cities, by definition, are places where space is scarce.
Second, assume that these cities are large enough that a bort can't easily hop, drift, or slither around the city fast enough to reach all the needs and pleasures of daily life. Given this reality, they must have invented vehicles of some kind that carry them faster and farther; if they hadn't, their cities could not have grown so large. The causation can just as well be described the other way: Because borts have invented such a vehicle, their cities are now too large to be reached solely by hopping, drifting, or slithering.
Do we have to make an assumption about their communications? If the borts had either perfect telepathy or perfect virtual reality, then they would never need to move for any of the purposes of interaction. But in this case, why would they have cities? Let's assume-because this really arises from assumption number two-that their communications are not so perfect, and that they do need to move around to do whatever borts do. Specifically they would not be in cities if they did not need to meet each other in physical space, which requires two or more borts to show up at the same time. So the borts must have a concept of timeliness, which implies the possibility of haste and an interest in travel time.
Scarcity of space, as of any resource, triggers the law of supply and demand. For any organism, securing a scarcer resource requires a greater expenditure of energy. Call this energy expenditure the price. Social structures may affect who pays this price, but it must be paid.
Perhaps the borts have tried using a personal locomotion vehicle in their cities. Call it a bortcar. It gives a bort freedom to move at high speeds, but it's much bigger than the bort's body, so it takes much more space per bort inside the city, all the more because the faster it moves the more space it needs for stopping distance, to say nothing of parking or storage. Only in a world of perfectly balanced demand, where trip origins and attractions were evenly matched at every moment, would no storage be needed, even in a world of automated bortcars. These vehicles will not fit well into the low space per bort that is the city's defining feature. If there are no alternatives and disincentives to bortcar use, the result will be congestion. Whenever a scarce resource is priced below its true cost (as with half-price ticket sellers or Soviet grocery stores) a queue will form, and that is what congestion is. If you don't pay in money, you will pay in time.

1 This section is adapted from my article "How Universal Is Transit's Geometry?" HumanTransit.org. Blog, March 1, 2011.
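The stopping-distance point can be quantified with elementary kinematics. The sketch below treats the lane space claimed by one moving vehicle as its length plus reaction distance plus braking distance (v squared over 2a); all parameter values are illustrative assumptions, not data from the text.

```python
# Street space claimed by one moving vehicle: its length plus the headway
# needed to stop safely. Space per vehicle grows quadratically with speed,
# so people moved per lane-hour eventually falls as speed rises.
# All parameter values are illustrative assumptions.

def space_per_vehicle(v, length=4.5, reaction_s=1.5, decel=4.0):
    """Metres of lane one vehicle occupies at speed v (m/s)."""
    return length + v * reaction_s + v ** 2 / (2 * decel)

def lane_throughput(v, occupancy=1.2, **kw):
    """People moved per second past a point, per lane."""
    return occupancy * v / space_per_vehicle(v, **kw)

for kmh in (20, 50, 80):
    v = kmh / 3.6
    print(f"{kmh} km/h: {space_per_vehicle(v):.0f} m/vehicle, "
          f"{lane_throughput(v) * 3600:.0f} people/h")
```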
To address this problem, bort society would have to have selected some mix of the following solutions:

• Reduce the amount of travel through limiting use. Bort society may have mechanisms for deciding who can drive a bortcar in the city and who can't. These may be expressions of hierarchy, or rationing systems, or some kind of exchange with other objects of scarcity, i.e., pricing.
• Reduce the amount of travel through mixed-use urban planning. A more egalitarian way to the same end is to design the city to minimize the travel borts need to do to do whatever they do. We call this mixed-use planning. Borts must have a similar issue because they go places to meet one another, so it matters where those places are.
• Faster individual movement in very little space. The borts may have invented small vehicles that allow them to move faster without taking up much more space than a bort does itself. Call them bortcycles, though they could just as well be bortjetpacks or bortsegways.
• Increase vehicle occupancy. Sharing bortcars will work at small scales, but at high density where the shortage of space per bort is acute, only larger vehicles, some form of mass transit, will let every bort travel as needed within that constraint.
If the problem is congestion, these are the options. No others are mathematically coherent. There are solutions that soften these problems around the edges, such as allowing automated bortcars to join temporarily into trains, but none that transform the basic math. For example, if bortcars join to form trains, then there are still areas where they run alone; if not, borts would just use a train. If bortcar trains are assembling and separating within the complex pattern of everywhere-to-everywhere travel inside a dense urban core, then there will still be plenty of solo bortcars where space is scarce, which returns us to the same geometry problem. The acts of separation and joining, especially if performed at speed, will also take space. These bortcar trains could use space and energy efficiently where there is longer distance travel and densities are lower, such as between what we would call outer suburbs. But that is not where the worst problem of space lies.
Any combinations of these tools will also have to manage their conflicts, which at higher speeds will require a degree of separation. Bortcycles, which can't be as armored as bortcars, will be dangerous if too vulnerable to collision with bortcars. Borts hopping, drifting, or slithering under their own power will face the same danger. Likewise, bort transit vehicles will be less useful if stuck in the congestion generated by single-occupant bortcars, so their success in this context would require a large portion of bort society to have no alternatives but to use them.
To get this far, the only assumptions I've made are those needed to generate active cities and a problem of transportation. Other than that, I've been relying on concepts about which we have perfect or near-perfect certainty. Geometry defines the facts of urban space. Physics governs crash risk in relation to speed. The concept of scarcity-and thus the interaction of supply and demand-is biological but will exist for anything that we could recognize as an organism, so we have assumed this in assuming that we can think about the borts at all.
In short, if I confine myself to this kind of knowledge, I can make absolutely confident predictions about our world. After several more decades of exponential technological and cultural change, a future studded with surprises, our world will continue to resemble the bort world in all the respects I've described. Technology never changes facts of geometry or physics, at least not as experienced at human scale. By technology I specifically mean inventions rather than discoveries.
We could go much further, and lay out many of the facts about how vehicle sharing works.
Obviously we can't predict bort ridership patterns, but there is a powerful thing we could describe and predict: Borts will have a degree of freedom defined by where they could get to in a fixed amount of time. This degree can be visualized as an isochrone around the bort's location, as in an example from our own world shown in Figure 1.
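An isochrone of this kind can be computed as a shortest-path tree truncated at a time budget. A minimal sketch via Dijkstra's algorithm over an invented toy network (the node names and travel times are assumptions for illustration):

```python
import heapq

# An isochrone as the set of stops reachable within a time budget, computed
# by Dijkstra over a graph whose edge weights are travel times in minutes.
# The tiny network below is invented for illustration.

def isochrone(graph, origin, budget):
    """Return {node: minutes} for every node reachable within `budget`."""
    best = {origin: 0}
    heap = [(0, origin)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nt = t + w
            if nt <= budget and nt < best.get(nbr, float("inf")):
                best[nbr] = nt
                heapq.heappush(heap, (nt, nbr))
    return best

network = {
    "home": [("A", 10), ("B", 25)],
    "A": [("B", 10), ("C", 20)],
    "B": [("C", 10)],
}
print(isochrone(network, "home", 30))  # {'home': 0, 'A': 10, 'B': 20, 'C': 30}
```

Counting the destinations inside the returned set, weighted by what is at each stop, gives the "what is in those isochrones" measure discussed below.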
The size and shape of this isochrone can be geometrically derived from several features about a public transit network. Of course, borts, like us, may be more interested in going to destinations than just covering distances, so what matters is what is in those isochrones, not just their size.
If the goal is to get as many trip attractions as possible into the isochrones of as many trip generators as possible, transit service would be focused on places that are dense. Density, a purely geometric concept, means that more homes and trip attractions are near each possible public transport stop. That means that a larger share of the population has access to its benefits, and that an isochrone of any size will contain more useful destinations. Therefore a network that optimizes isochrones for the most borts will be most useful to bort homes and activities located at higher densities.
Other factors about where borts live will also determine the potential for transit to expand their freedom. The ease with which they can hop, drift, or slither to their transit stops and the degree to which their development pattern is conducive to running transit in straight lines are also predictable from our assumptions. Even the large-scale mixture of uses in the city is a geometric fact with geometric consequences: If all the borts need to go in one direction at the same time, because their homes are on one side of the city and their destinations are on the other, up to half of all bort transit resources will be spent running empty in the opposite direction, producing a network that is less cost-effective at maximizing freedom for the most possible borts.
We can even predict that bort transit travel will be governed by the main ingredients of transit travel time: frequency, in-vehicle travel time, and access time to/from stops. This will give rise to the same network design strategies we use. For example, the high effectiveness of the highfrequency grid at expanding many borts' access is a fact for bortworld, as it is of ours.
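The three ingredients named above combine into a simple expected door-to-door time, taking expected wait as half the headway under the usual random-arrival assumption. The numbers below are illustrative:

```python
# Expected door-to-door transit time decomposed into the three ingredients
# of transit travel time: access time, expected wait (half the headway,
# assuming riders arrive at random), and in-vehicle time. Numbers invented.

def door_to_door(access_min, headway_min, in_vehicle_min):
    return access_min + headway_min / 2 + in_vehicle_min

# Doubling frequency (30 -> 15 minute headway) saves 7.5 minutes here,
# comparable to a large speed-up of the vehicle itself:
print(door_to_door(access_min=8, headway_min=30, in_vehicle_min=20))  # 43.0
print(door_to_door(access_min=8, headway_min=15, in_vehicle_min=20))  # 35.5
```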
But are the fixed routes I've been describing obsolete? Who wants to hop, drift, or slither to a bus stop? Why can't there be little vehicles that go to where each bort is, and then exactly where they want to go? Bort taxis may exist, and maybe they can be scaled up so that they carry a few borts going in the same general direction. But deviating to particular borts takes a lot of time, and that will mean fewer borts can be served in every hour a bort transit vehicle is operating.
Who cares, if the vehicles are cheap to operate? Maybe they're automated. But even if this could be made energy-efficient compared to fixed transit, there will still be that ultimate geometry problem: space. Any demand-responsive vehicle will be useful to fewer borts, while taking more space, than an effective network of large fixed route vehicles. If the bort city grows in any dimension, this problem will grow more acute. Higher density means more borts competing for the same street space. Horizontal growth means longer average trip distances, which also means each bort needs more street space. Where space is scarce, borts are most effectively served if they gather at stops located along a defined path. The higher the space efficiency required, the more fixed the route has to be.
All this must be as true for the borts as for us, because all this arises, geometrically, from our basic assumptions without reference to culture or behavior. Wherever there are cities in which large numbers of people must travel beyond their walking distance, this will be the math.
To be more precise, these assumptions arise without reference to the sort of behavior that can only be studied empirically. There are facts of behavior that are axiomatic to the idea of an organism, such as eating and excreting. If you must predict human behavior, it is safest to predict behavior that has been evident far into the past. Marchetti's constant, an estimate of human tolerance of travel time derived from the study of ancient cities, is an example. Evolutionary explanations based on the conditions of prehistoric human life are even firmer. This is practically the opposite of how human behavior is predicted in many contexts today, where the claim is really about the continuation of recent and therefore possibly transient fads. Streetcars in the United States were popular in the 1920s, despised in the 1950s, and popular again in the 2000s.
On what ground do we predict that they will be popular in the future?
Can We Predict Anything Useful?
We could never predict ridership on the bortworld, but we can predict something that they probably care about and that humans certainly do. To plan without prediction is to plan for freedom. Rather than trying to predict what people will do, what if we tried to maximize what they could do?
We hear little about freedom as a planning outcome, but businesses selling transportation talk about freedom incessantly. Airlines want you to know about all the places you could go. New private entries into the taxi market such as Uber and Lyft want you to feel free to go anywhere in your city in a way that government-protected taxi monopolies never bothered to advertise. A century ago, freedom-the sudden expansion of "where you could go"-was the winning argument for the private car, one that overran thousands of prescient objections about how private cars could damage our cities, our health, our environment, and even our manners.
Yet the transportation professions seem reluctant to discuss freedom as an outcome of transportation planning. Most evaluation focuses on things that can be organized under the "triple bottom line." This trinity of impact types (economic, environmental, and social) encompasses many urgent goals, but these goals all describe outcomes in a predicted society. They require us to study and predict what people do, but they assign little value to what people have the option of doing, which is to say their freedom. Prediction and freedom are opposites: to the extent we can predict your behavior, you are not free.
When you went shopping at a particular store, did it matter that you could have gone shopping somewhere else, or shopped online while in bed, or embraced an ascetic spiritual path of buying as little as possible? A study of freedom would be intensely interested in that, while conventional planning would merely record what you did and use that to predict what you, despite your illusion of freedom, will continue to do.
When freedom goals do appear in planning, they do so half-concealed, usually in relation to some economic or social outcome that will result from people being free. Policy makers worry about access to jobs, education, and other opportunities. The discussion of equity in transportation (though often hung up on concepts like "minority neighborhoods" or "minority routes" that imply demographic determinism) is also, at its best, a study of the equal distribution of freedom.
The most robust freedom claims that appear in transportation planning are about changes in travel time. Reframing this concept is the key to making freedom visible and quantifiable as a possible evaluation criterion. Not just the freedom to do what some public policy wants you to do-find a job, get training-but freedom in the broadest sense.
It may seem too broad to use freedom to talk about what is technically called access or accessibility. There are freedoms that you could exercise without transportation, mostly online, but many freedoms still require leaving the house (again, short of perfect telepathy and virtual reality). Once you must go places to do things, access and freedom are the same.
A map of your freedom, of where you can go and thus what you can do, is an isochrone, as in Figure 1. The isochrone is not a new concept, and what it describes (changes in access) has always been a key input to predictive models. But it is only recently that we are looking at it, discussing it in public, and finding the courage to speak of the freedom it describes. A leader in this field is the Accessibility Observatory at the University of Minnesota, whose most recent publications (soon to be updated) are called Access Across America. These analyses show where you could get to in a specified amount of time from each point in a city, and are computed for different modes: walk, bike, walk plus transit, and auto.
You can look at an isochrone for a specified location of interest, such as your house, or a possible business site, or a community of concern. You can also draw an access heatmap of a city, as in Figure 2, which colors each pixel based on the number of jobs or residents that you can get to from there, by a specified mode, in a fixed amount of time. This would have obvious relevance to how the real estate and development industry understands transit access.

Figure 2. Heatmap showing how many jobs can be reached on transit from each point in Oakland, California. (Owen et al. 2016)

To sum up, freedom should be a central evaluation criterion for transportation projects and equity analysis for two reasons:

1. People care about freedom. Showing people their freedom speaks to something they value, even if they don't care about predictions. If journalists could be encouraged to write about freedom instead of predictions, their work would be "news you can use."

2. Freedom is largely predictable, because its quantification relies almost entirely on geometry: the design of a transportation network in relation to urban structure. No sociology or psychology is required, and there is no risk of presuming the permanence of transient fads through the assumptions of a model.
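The access heatmap described above can be sketched in a few lines: for every origin cell in a toy grid, count the jobs reachable within a time budget. The grid, job counts, and per-cell travel time are invented for illustration.

```python
# Sketch of an access heatmap: for each origin cell in a toy grid, count the
# jobs reachable within a time budget, assuming a fixed travel time per cell
# of Manhattan distance. All inputs are invented for illustration.

def access_heatmap(jobs, minutes_per_cell, budget):
    rows, cols = len(jobs), len(jobs[0])
    heat = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for r2 in range(rows):
                for c2 in range(cols):
                    dist = abs(r - r2) + abs(c - c2)
                    if dist * minutes_per_cell <= budget:
                        heat[r][c] += jobs[r2][c2]
    return heat

jobs = [[0, 10, 0],
        [10, 50, 10],
        [0, 10, 0]]
heat = access_heatmap(jobs, minutes_per_cell=10, budget=10)
# The centre cell can reach every job cluster; the corners reach far fewer.
print(heat)
```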
Imagine, for example, if discussions of transit oriented development looked at where you could get to if you lived at a proposed site, rather than just whether an attractive transit technology is running nearby. Imagine if equity were understood as the just distribution of freedom to go wherever you want, as only an isochrone can measure. Imagine if planning documents spoke respectfully of people's freedom rather than only of how experts can predict their behavior. Such things are possible.
Descriptive epidemiology of nasopharyngeal carcinoma at Tikur Anbessa Hospital, Ethiopia
Background Nasopharyngeal cancer is distinguished from other cancers of the head and neck in its epidemiology, histopathology, clinical characteristics, and therapeutic outcome. The unique clinico-epidemiologic pattern of the disease is the focus of this investigation. Accordingly, the study investigated the demographic and histologic characteristics, as well as the clinical stage at presentation, of nasopharyngeal carcinoma patients at Tikur Anbessa Specialized Hospital. Methods A hospital-based retrospective descriptive study was conducted from September 2017 to October 2020. All biopsy-proven incident cases during the study period were included. SPSS version 26 was used for data entry and analysis. Result A total of 318 patients with histologically confirmed nasopharyngeal carcinoma during the study period were included. There were 228 males and 90 females, a male:female ratio of 2.5:1. The age of patients ranged from 13 to 81 years, with a mean age of 37.8 ± 15 years. The median age at diagnosis was 38 years. The age distribution had two peaks for males, the first at 30-39 and the second at 50-59 years of age, while the peak age of occurrence for females was in the 20-39 age range. Patients under the age of 30 constituted 34% of the study group. The study revealed nonkeratinizing carcinoma as the most prevalent histology at 94.3% (undifferentiated type 85.9% and differentiated nonkeratinizing carcinoma 8.4%), and 5.7% of the cases showed keratinizing squamous cell carcinoma. The majority of the patients, 86%, presented late with stage III and IV disease. Conclusion Nasopharyngeal cancer is commonly found among the young and productive age group, under the age of 30. Nonkeratinizing carcinoma is the predominant histopathologic variant, resembling that seen in endemic areas of the world. Thus, genetic and early-life environmental exposures should be well studied to identify possible risk factors in the region.
Late-stage presentation at diagnosis impacts the treatment outcome of patients, thereby indicating the need for a raised index of suspicion among health professionals for early diagnosis and better prognosis of patients.
Introduction
Nasopharyngeal carcinoma (NPC) is a malignant tumor arising from the squamous epithelial lining of nasopharynx, frequently from the area of fossa of Rosenmüller [1].
The disease is considered as one of the rare forms of cancer worldwide where only 86,500 cases of nasopharyngeal carcinoma were reported in 2012, accounting for only 0.6% of all cancers diagnosed in that year [2]. It is notable for its high incidence in selected geographic and ethnic populations. Globally the highest incidences have been observed in populations living in or originating from Southern China whereas Southeast Asia, North Africa and Inuits (Eskimos) of Canada and Alaska, all have intermediate incidence [3]. According to the 2015 population based cancer registry data of Addis Ababa, nasopharyngeal cancer was found to be the 5th commonest cancer in males and the 17th in females [4]. The incidence of NPC in men is shown to be higher than in women, with a ratio of 2-3 to 1 in both endemic and non-endemic areas of the world [5].
The histological classification of nasopharyngeal carcinoma proposed by the World Health Organization (WHO) categorizes tumors into three pathological types: keratinizing squamous, non-keratinizing (differentiated and undifferentiated subtypes), and basaloid squamous [6]. Keratinizing squamous cell carcinoma is WHO type I, which shares characteristic features with other head and neck squamous cell carcinomas, whereas non-keratinizing nasopharyngeal carcinoma (NK NPC), types II and III, refers to non-keratinizing differentiated and undifferentiated tumors, respectively [7]. NK NPC, which comprises over 95% of NPC in high-incidence areas, is correlated with raised titres of Epstein-Barr virus (EBV) serology; in contrast, type I NPC is predominant in nonendemic regions and may have an etiology distinct from that of the other two histologic types, with reduced EBV serologic titres [8,9].
The distinct geographic and ethnic variations of NPC worldwide suggest that both environmental factors and genetic traits contribute to its development [5]. Childhood intake of preserved foods has been studied as the main risk factor for development of NPC in the endemic populations of Chinese, natives of Southeast Asia, Arabs of North Africa, and natives of the Arctic region. These locally consumed preserved foods are assumed to share common carcinogenic substances, mainly nitrosamines and EBV-activating substances [10,11]. The link between NPC and Epstein-Barr virus is well established, as patients with this malignancy were found to have raised antibody titers against the virus [12]. A number of non-dietary environmental exposures, including domestic exposure to smoke from burning wood and incense, occupational exposure to dust, smoke and chemical fumes, and tobacco smoking, have also been suggested as risk factors for NPC [13].
Given the limited epidemiological evidence regarding NPC in Ethiopia, the present study tries to investigate the clinico-epidemiologic pattern of Nasopharyngeal Carcinoma of patients at Tikur Anbessa Specialized Hospital (TASH).
Methods and materials
Study area and period The study was conducted at Tikur Anbessa Specialized Hospital (TASH), Addis Ababa, Ethiopia. Cases of biopsy-proven Nasopharyngeal Carcinoma (NPC), between September 2017 and October 2020 were investigated. TASH is the largest referral hospital in the country. It is also an institution where clinical services that are not available in other public or private institutions are rendered to the whole nation. Until recently, TASH was the only hospital in the country providing oncology service for cancer patients and carries the entire cancer burden of the country.
Study design
Institution based retrospective review of document was carried out on histologically confirmed nasopharyngeal carcinoma patients who attended TASH during the study period.
Study populations and sampling techniques
All incidental cases of biopsy-proven NPC during the study period were included in the study.
Operational definition
Cases of Nasopharyngeal carcinoma were defined as; 'A nasopharyngeal malignancy arising from the nasopharyngeal squamous epithelial lining' as confirmed on histopathological examination of biopsy obtained from nasopharyngeal specimen.
Data collection techniques
Medical charts of 318 NPC cases diagnosed between September 2017-October 2020 were reviewed using structured data collection questionnaire. Data collected on all patients included age, gender, residence, religion, educational background as well as clinical data of pathology results. The clinical stage at presentation was assessed according to the American Joint Committee on Cancer (AJCC) 2018 system.
Statistical analysis and quality assurance
To ensure data quality, the collected data were checked for completeness, clarity, and consistency by the investigators immediately after collection by trained data collectors. The principal investigator also closely monitored data collection and entry. Individual questionnaires were coded before data entry into the software. The data were entered and analyzed using SPSS version 26. Descriptive statistics such as proportions, percentages, ratios, frequency distributions, and appropriate cross-tabulations, as well as measures of central tendency and dispersion, were used to describe the data.
Ethical considerations
Approval of this study was obtained from the Institutional Review Board of the College of Health Science, Addis Ababa University (Protocol No. 084/19/ENT). All methods were carried out in accordance with relevant guidelines and regulations. Informed consent was obtained from all study subjects; for subjects under 18, consent was obtained from a parent and/or legal guardian. After the diagnosis of NPC was established, patients were linked to the Oncology unit for possible radiotherapy and chemotherapy.
Sociodemographic characteristics
In the study, a total of 318 histopathologically confirmed nasopharyngeal carcinoma (NPC) patients were included and their medical records were reviewed. Of these, 228 (71.7%) were male and 90 (28.3%) were female. The youngest patient was 13 years old and the oldest 81. The mean and median ages were 37.8 and 38 years, respectively, with a standard deviation of 15.05. The age distribution for both sexes showed a peak between the ages of 20-39 years, which accounts for 44% of cases (Table 1).
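As a quick arithmetic cross-check of the reported sex distribution (a minimal sketch; the counts are taken directly from the text above):

```python
# Verify the reported percentages and male-to-female ratio from the stated counts
total = 318
male, female = 228, 90

assert male + female == total
print(round(100 * male / total, 1))    # percentage of males
print(round(100 * female / total, 1))  # percentage of females
print(round(male / female, 1))         # male-to-female ratio
```

The computed values match the 71.7%, 28.3% and 2.5:1 figures reported in the text.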
The age distribution of NPC patients differed by gender. Female participants had a peak between 20 and 40 years of age, without a clear bimodal age distribution, while male participants had two peaks, at 30-39 and 50-59 years of age. For both sexes, the number of cases declined above the age of 60.
We observed a substantial number of juvenile NPC cases aged under 20 years (12%) and young-adult cases aged 20-29 years (22%). Overall, the under-30 age group represented 34% of all cases.
The geographic distribution of patients showed 38.4% from the Oromia region, followed by the Amhara region (22%), Addis Ababa (20.1%), Tigray (4.4%), SNNPR (6.9%), Somali (4.4%), and Afar, Harari and Gambella each accounting for < 2%. Sixty-two percent of the cases were from rural areas of the country and 38% from urban areas.
Clinical stage and histopathologic result
Of the total cases in the study, 56% (n = 178) were stage III and 30.2% (n = 96) were stage IV. Stages II and I accounted for 9.7% (n = 31) and 4.1% (n = 13) of the cases, respectively. According to the World Health Organization (WHO) classification of histopathologic variants, non-keratinizing nasopharyngeal carcinoma was found in 94.3% of the cases studied. Of these, the undifferentiated type constituted 85.9%, while differentiated nonkeratinizing carcinoma accounted for 8.4% of the cases. The keratinizing type of squamous cell carcinoma was found in only 5.7% of the cases.
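The stage percentages above can be cross-checked against the stated counts (a minimal sketch; counts are taken from the text):

```python
# Cross-check the reported stage percentages against the stated case counts
total = 318
stage_counts = {"I": 13, "II": 31, "III": 178, "IV": 96}

assert sum(stage_counts.values()) == total
pct = {stage: round(100 * n / total, 1) for stage, n in stage_counts.items()}
print(pct)  # stage III ~56.0%, IV ~30.2%, II ~9.7%, I ~4.1%
```

The four counts sum exactly to 318 and reproduce the reported percentages.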
Discussion
According to the data presented above, males outnumber females with a ratio of 2.5:1, a consistent feature across populations in both endemic and nonendemic regions. Almost all of the literature reviewed likewise demonstrated a higher incidence of nasopharyngeal carcinoma (NPC) in men than in women, with a ratio of 2-3:1.
The study found an age distribution similar in pattern to those of the intermediate-incidence countries of North Africa and Southeast Asia. The peak age in this study, for both males and females, was between 20-39 years (age range 13-81), with a mean age of 37.8 years. Males had another peak at 50-59 years of age, after which incidence showed a significant decline for both sexes. In an epidemiological study done in Malaysia, the incidence in both sexes rose after the age of 20-29 years and reached a plateau between 40 and 49 years, with no further rise after age 60 years [14]. In another study conducted in the North African Maghreb countries, the age at diagnosis ranged from 11 to 81 years, with an apparent bimodal distribution with peaks at 20 years and 40 years [15]. The gender difference in peak age found in this study is also similar to a finding from the Ibadan Cancer Registry in Nigeria, where the overall mean age was 41.1 years; that study showed a peak incidence age group of 20-29 years for females and 50-59 years for males, with a sharp rise in the 4th and 5th decades and a rapid decline after the age of 60 [16]. The bimodality in the age distribution found in this study at Tikur Anbessa Specialized Hospital (TASH) shows a pattern similar to that of moderate-incidence regions, which might be related to early-age environmental carcinogen exposure and probable chronic Epstein-Barr virus (EBV) infection, although the EBV status of the cases is unknown. Hence, these findings mandate the incorporation of EBV serologic studies as a routine workup for NPC patients. On the other hand, the rapid decline in incidence after 60 years of age indicates that NPC in the study population is less likely associated with the usual risk factors of other head and neck carcinomas.
Generally, NPC is uncommon in individuals under the age of 20 years, whereas in Northern Africa, an endemic area, 20% of patients are found to be below age 30 [17]. Studies have shown notable age differences between North African and Southeast Asian countries, suggesting that these could result from distinct combinations of etiological factors. One intriguing characteristic of North African NPC is its bimodal age distribution, with a secondary peak of incidence in the range of 15-25 years that is not observed in Asian NPC [18]. In accordance with this finding, we observed a significant number (34%) of juvenile NPC cases aged under 30 years in our study. The higher incidence of juvenile NPC may reflect a possible genetic susceptibility. Thus, familial aggregation should be well investigated in the region, as a family history of NPC is more likely associated with endemic forms of the disease.
Besides geographical variation, some ethnic groups also seem to have a predisposition for nasopharyngeal carcinoma. Among our study participants, the highest percentage of patients (38%) were from the Oromia region, but this finding cannot be taken as evidence of predisposition, given the small sample size of the study and the fact that Oromia is the most populous region in the country. Thus, the distribution of cases in the present study is roughly proportional to the population size of each region.
The study also found that 86.2% of patients presented with locoregionally advanced disease, that is, stage III and IV disease. This is similar to the 89% of patients with advanced-stage disease in a Kenyan study [19] and to a Tanzanian study in which 80% of patients were found to be in stage IV [20]. Our study shows similar proportions of stage III and IV disease to studies from endemic areas, where stage III and IV tumors together account for close to 80-90% of cases at presentation. These findings can be ascribed to the more aggressive course of the disease noted in the undifferentiated histopathologic variant commonly found in endemic areas of the world [21].
In addition to geography-based differences in age distribution, NPC shows varying histopathologic distributions among different populations. In this study, nonkeratinizing nasopharyngeal carcinoma (NK NPC) was found in 94.3% of the cases, while the keratinizing subtype was found in 5.7%. The histological findings of our study resemble those of endemic areas worldwide, with a predominance of NK NPC, especially of the undifferentiated type, which accounts for 85.9%. The findings in the present study parallel a study conducted in North Africa, an endemic region, where almost all (92%) cases were undifferentiated carcinomas (UCNT), and findings in Indonesia, which is estimated to have an intermediate incidence of NPC and is likewise characterized by a majority of the NK NPC type [22].
On the other hand, a study conducted in the same institution, Tikur Anbessa Specialized Hospital (TASH), between 2016 and 2017 reported 70% of cases as nonkeratinizing undifferentiated nasopharyngeal carcinoma, with the remaining cases keratinizing nasopharyngeal carcinoma [23]. In comparison, the growing predominance of NK NPC in the present study may reflect recent changes in environmental, lifestyle and possibly genetic factors related to NPC in the region.
In general, the histopathological variants of NPC correlate with the etiology of the disease. Most notably, the nonkeratinizing neoplasms evidenced in this study are likely caused by chronic subclinical EBV infection, similar to what has been observed in endemic regions. Since the EBV carrier state is ubiquitous, with more than 90% of adults worldwide having been infected with EBV, other carcinogenic cofactors are also implicated in the etiopathogenesis of NPC [24]; EBV alone is thus not a sufficient cause of this malignancy. Environmental exposures and/or genetic risk factors are therefore also likely to play a role in the pathogenesis of EBV-related NK NPC in endemic regions of the world [25].
Limitations of the study
This study used a cross-sectional design at a single institution, providing only a descriptive report of the patients diagnosed at the hospital; hence, it cannot represent the true distribution of NPC in Addis Ababa or Ethiopia. The EBV serologic status of the patients is unknown, so the study provides limited evidence to explain the predominance of the undifferentiated histopathologic variant in our setting, and further association with EBV infection cannot be established on the basis of this study.
Conclusion
According to the findings of the present study, nasopharyngeal cancer was common in the young and productive age group, which has far-reaching implications for their socioeconomic output. Overall, the age distribution and the pathologic finding of nonkeratinizing carcinoma as the predominant histopathologic variant resemble those seen in endemic areas of the world. This indicates that Ethiopia is still an unexplored region regarding nasopharyngeal carcinoma; large-scale studies need to be conducted to establish its endemicity and to identify the etiologic factors at work in the region.
"year": 2021,
"sha1": "caab34173acf8e574f30f5366dffb58b673bd4c9",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-021-08311-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ceb37af4587a1e9f3b6ec8d82163206b0531cb98",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Macroeconomic Information and the Implied Volatility: Evidence from India VIX
The present study examines the effects of scheduled macroeconomic announcements on the India VIX using an OLS regression model and the EGARCH model. The empirical results show that the information content of macroeconomic news on the report day and the day after does not significantly influence the India VIX, with the exception of MCIR. Moreover, the findings reveal no significant response of the India VIX during the day before the scheduled news announcements. This is because the India VIX market is more uncertain before the declaration of the results of the MCIR convention of the RBI, Export, Import, Fiscal Deficit, GDP, IIP and Inflation (CPI/WPI). The study suggests that investors do not need to consider the scheduled macroeconomic announcements, except the MCIR meeting day and one day after, in option pricing or financial planning.
Introduction
The impact of information releases on stock prices has been well documented in the financial economics literature, much more so than the impact on option prices. One of the main challenges in option pricing is to understand the information content that determines the volatility of asset prices. The renowned Black-Scholes-Merton option pricing model states that call and put option prices depend on the price of the underlying, the strike price, the risk-free rate of interest, the time to expiry and volatility. Market traders know all of these factors, except future volatility, at the time of shorting an option. The future volatility is unknown and subject to the personal anxiety or fear of the option seller. This volatility is known as implied volatility, and it reflects the sentiment of the option seller. If the option seller believes that future volatility will be high, a higher premium is demanded for shorting an option, which makes option prices higher, and vice versa. A low value of the Volatility Index (VIX) indicates stability in the market, while a high value indicates strain, fear and concern. The VIX measures the implied volatility in the market using the price levels of index options. Appraising the fair level of implied volatilities, i.e. evaluating whether options are traded at too low or too high a cost, is necessary for successful options trading.
As option prices reflect investors' sentiments and future cash flows, option implied volatility can incorporate a wider information set that includes scheduled macroeconomic news announcements, scheduled earnings announcements, etc. Market participants know that some information will be provided to the market on a precise date, but the content of the release is unknown. Due to the ambiguity linked to the informational content of the announcement, investors anticipate higher volatility on the release or reporting day. Accordingly, it should be observed that implied volatility gradually rises during the announcement period, peaks on the day before the news disseminates, and returns to its normal level afterwards [1]. Academics, market participants and market analysts take great interest in the behaviour of implied volatility, as it provides a superior forecast of future volatility. This is because, when new information disseminates in the market, under the market efficiency hypothesis it is reflected in the underlying stock prices, and thus the VIX is the market participants' expectation of future volatility. Besides, the information content of macroeconomic indicators holds great interest, since investors are aware of important scheduled news and consider this news in their potential investment and portfolio risk management. A clearer conception of the macroeconomic indicator announcements that influence the slope is important for developing new option pricing models and devising appropriate hedging and investment strategies. Option traders and financial analysts closely monitor the behaviour of the implied volatility index, as they believe it carries important information regarding the economic structure and the risk aversion of market participants. In this study, an attempt has been made to investigate the behavior of the India VIX during, before and after scheduled macroeconomic news releases.
Literature Review
Earlier studies such as Ederington and Lee [2] [3], Thorbecke [4], Bomfim [5] and Kearney and Lombra [6] showed that asset price volatility and implied volatility increase significantly prior to macroeconomic announcements and return to normal on the day of the news release. Graham et al. [7], Nikkinen and Sahlstrom [1], Nikkinen et al. [8] and Chen and Clements [9] studied the impact of macroeconomic announcements on stock returns and implied volatility indices and found that these markets respond significantly to scheduled news; they also revealed that the VIX increases prior to a scheduled announcement and remains more stable on the day of the announcement. Gospodinov and Jamali [10] analysed the impact of Federal Open Market Committee (FOMC) statement releases on the US VIX index and found that the volatility index responds positively and significantly to federal funds rate surprises. Their results also showed that macroeconomic variables, viz. industrial production, the employment rate, the GDP growth rate and inflation, significantly affect the implied volatility index. Marshall et al. [11] concluded that implied volatility is significantly affected by macroeconomic news releases and also showed that scheduled announcements make implied volatility fall on the announcement day, with no considerable change observed pre- and post-announcement. On the other hand, Aijo [12] explained that good (bad) news causes implied volatility to fall (rise).
The empirical evidence on the subject is thus contradictory. To the best of our knowledge, Shaikh and Padhi [13] is the only study that has attempted to investigate the response of an implied volatility index to scheduled macroeconomic announcements in the Indian context. Using Ordinary Least Squares (OLS) and Generalised Autoregressive Conditional Heteroscedasticity (GARCH) models, they found that the IVIX rises significantly prior to a scheduled announcement and remains more stable on the news release day.
However, the GARCH models used in their study assume symmetric market reactions to positive and negative news and therefore cannot capture the leverage effect. Meanwhile, in another study, Shaikh and Padhi [14] investigated the asymmetric contemporaneous relationship between the India VIX and the NIFTY index and showed that changes in the India VIX are larger for negative return shocks than for positive return shocks. In this situation, the GARCH-type models applied by Shaikh and Padhi [13] are mis-specified and lead to biased volatility estimates as well as inaccurate forecast intervals. To model the asymmetry in the implied volatility index and allow the impacts of bad and good news on the conditional variance to differ, the Exponential Generalised Autoregressive Conditional Heteroscedasticity (EGARCH) model of Nelson [15] and Engle et al. [16] is used in the present study to investigate the behavior of the India VIX during, before and after scheduled macroeconomic news releases. Besides, the EGARCH model, unlike linear GARCH models, uses the natural logarithm of the conditional variance, which relaxes the non-negativity constraints on the model's coefficients while allowing for persistence of shocks to the conditional variance.
Methodology
In order to investigate the behaviour of the India VIX around scheduled macroeconomic announcements, the present study employs Ordinary Least Squares (OLS) and Exponential Generalized Autoregressive Conditional Heteroscedasticity (EGARCH) models based on the following specifications. The analysis was carried out with the EVIEWS 9 econometric software package.
The βs capture the impact of the scheduled macroeconomic announcements on the VIX; the macroeconomic variables associated with the βs are dummy variables that take the value one on the relevant announcement days and zero otherwise. α0 captures the behaviour of the India VIX during non-announcement periods. t, t − 1 and t + 1 represent the reporting day, one day before and one day after the announcement of macroeconomic news, respectively. Based on the studies of Ederington and Lee [3], Nikkinen and Sahlstrom [1] and Chen and Clements [9], we expect that during non-announcement days the VIX rises, hence the intercept (α0) should be non-zero, positive and statistically significant. On the scheduled macroeconomic news release day, the VIX should fall back to its normal level as it impounds the information disclosure; hence all the slope coefficients (βs) of the scheduled macroeconomic news should be negative and statistically significant. Before the scheduled news announcements the VIX rises significantly; hence all the slope coefficients (βs) should be positive and statistically significant. After the announcements of scheduled news, the VIX returns to its normal level, i.e. it declines over the following days; hence all the slope coefficients (βs) should be negative and statistically significant. Under the EGARCH models, h_t represents the conditional variance; δi shows the GARCH effect, i.e. persistence — a large GARCH coefficient implies higher persistence of volatility shocks; λ captures the leverage effect, i.e. whether the conditional variance responds more to a large negative return caused by bad news than to a large positive return of the same magnitude due to good news. The exponential nature of the EGARCH model guarantees that the conditional variance is always positive even if the parameter values are negative; thus no parameter restrictions are needed to impose non-negativity.
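The equations referenced above do not survive in this extraction. A standard form consistent with the description (announcement dummies D, intercept α0, GARCH coefficient δ and leverage term λ) might be written as follows; the exact specification used in the paper may differ:

```latex
% Mean equation: announcement-day dummy regression on IVIX log-returns
\Delta \ln \mathrm{IVIX}_t = \alpha_0 + \sum_{i} \beta_i D_{i,t} + \varepsilon_t

% EGARCH(1,1) conditional-variance equation (after Nelson, 1991)
\ln h_t = \omega + \delta \ln h_{t-1}
  + \gamma \left( \frac{\lvert \varepsilon_{t-1} \rvert}{\sqrt{h_{t-1}}}
      - \mathbb{E}\!\left[ \frac{\lvert \varepsilon_{t-1} \rvert}{\sqrt{h_{t-1}}} \right] \right)
  + \lambda \, \frac{\varepsilon_{t-1}}{\sqrt{h_{t-1}}}
```

Here each dummy D_{i,t} equals one on the relevant announcement day and zero otherwise, and the log form of the variance equation is what removes the need for non-negativity restrictions on the coefficients.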
Empirical Results
Table 1 shows the descriptive statistics of the IVIX for the sample period 2nd March 2009 to 31st August 2016. The average IVIX over the sample period is 21.36 points, with an average negative return of 0.06 percent. The highest VIX level observed over the period is 56.07 and the lowest is 11.56. The results show that the log-return series of the IVIX is positively skewed and leptokurtic. The Jarque-Bera statistic rejects, at the one percent significance level, the null hypothesis that the return series is normal in favour of the alternative that it is non-normal. The modified Dickey-Fuller test statistic (the DF-GLS test) is statistically significant at the one percent level, signifying that the log-return series is stationary. Moreover, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test statistic accepts the null hypothesis that the log-return series is stationary. Besides, Figure 1 shows the time-series plot of the IVIX index, implying that there is no trend problem in the IVIX series. To test for ARCH effects in the log-return series of the IVIX during the study period, the Engle [17] ARCH-LM test was employed; it reveals significant ARCH effects at the one percent level, and the results therefore warrant the estimation of GARCH-family models.
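Engle's ARCH-LM test mentioned above can be illustrated on simulated data (a minimal sketch with invented ARCH(1) parameters, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate n returns with ARCH(1) dynamics: h_t = a0 + a1 * e_{t-1}^2
n, a0, a1 = 2000, 1e-4, 0.5  # invented parameters for illustration
e = np.zeros(n)
for t in range(1, n):
    h_t = a0 + a1 * e[t - 1] ** 2
    e[t] = np.sqrt(h_t) * rng.standard_normal()

# Engle's LM test (one lag): regress e_t^2 on a constant and e_{t-1}^2; LM = n * R^2
y, x = e[1:] ** 2, e[:-1] ** 2
X = np.column_stack([np.ones(n - 1), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - ((y - X @ b) ** 2).sum() / ((y - y.mean()) ** 2).sum()
lm = (n - 1) * r2
print(lm > 3.84)  # compare against the 5% chi-square(1) critical value
```

A large LM statistic relative to the chi-square critical value rejects the null of no ARCH effects, which is the condition the paper cites to justify GARCH-family estimation.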
To examine the effects of scheduled monetary and macroeconomic announcements on the India VIX, the OLS and EGARCH models were estimated and the results are reported in Table 2. Based on the propositions developed in previous studies such as Ederington and Lee [3], Nikkinen and Sahlstrom [1] and Chen and Clements [9], it is expected that the IVIX rises during non-announcement days, so the intercept should be non-zero (positive) and statistically significant. On a scheduled macroeconomic news release day, the IVIX should fall to its normal level as it impounds the information disclosure; hence, the slope coefficients of EXPORT, FISCAL DEFICIT, GDP, IIP, IMPORT, INFLATION and MCIR are expected to be negative and statistically significant. Before the scheduled news announcements the IVIX rises significantly, and thus the slope coefficients of these variables are expected to be positive and statistically significant. After the announcements of scheduled news, the IVIX returns to its normal level, i.e. it diminishes over the following days; hence, the slope coefficients are expected to be negative and statistically significant.
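The announcement-dummy regression underlying these models can be sketched on synthetic data (a minimal illustration, not the paper's estimation; the dummy frequencies and effect sizes below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of trading days

# Announcement-day dummies: 1 on a scheduled release day, 0 otherwise
d_mcir = (rng.random(n) < 0.03).astype(float)  # invented release frequency
d_gdp = (rng.random(n) < 0.02).astype(float)   # invented release frequency

# Synthetic IVIX log-returns: fall on MCIR report days, no GDP effect
eps = rng.normal(0.0, 0.01, n)
y = 0.001 - 0.05 * d_mcir + 0.0 * d_gdp + eps

# OLS: y_t = a0 + b1 * d_mcir_t + b2 * d_gdp_t + e_t
X = np.column_stack([np.ones(n), d_mcir, d_gdp])
(a0, b_mcir, b_gdp), *_ = np.linalg.lstsq(X, y, rcond=None)

print(round(b_mcir, 3))  # recovered slope, near the true -0.05
```

A significantly negative recovered slope on an announcement dummy corresponds to the "VIX falls on the report day" prediction described above; in the sketch only the MCIR dummy carries a true effect.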
Models 1 and 2 study the behavior of the IVIX on the report day of the announcements using the OLS and EGARCH models, respectively, under the assumption that investors consider scheduled macroeconomic announcements in their financial planning. The intercept is non-zero and positive for the report day but statistically insignificant. This shows that on non-announcement days there is uncertainty in the market and investors overreact, but this does not significantly affect the IVIX. Besides, the slope coefficients of EXPORT, FISCAL DEFICIT, GDP, IIP, IMPORT and INFLATION on the report day are insignificant. Only the slope of MCIR is negative and statistically significant on the announcement day in both Models 1 and 2. This implies that the IVIX falls on the report day of MCIR announcements, suggesting that the information content of MCIR has the greatest impact on the valuation of financial assets.
Models 3 and 4 study the behavior of the IVIX on the day before the macroeconomic announcements. The intercepts in both models are positive but statistically insignificant for the day before the scheduled announcements. In addition, no slope coefficients are statistically significant in either model except for the MCIR information content. Models 5 and 6 show the behavior of the IVIX on the day after the macroeconomic announcements. The intercept coefficients in both models are positive and statistically insignificant. The slope of the MCIR announcements is positive and statistically significant in both models, implying that the IVIX rises after the scheduled MCIR announcements. It was expected that the IVIX would decrease after the scheduled announcements of macroeconomic information; however, the scheduled macroeconomic news releases are not found to be important in influencing the behavior of the India VIX, except for MCIR. In the EGARCH estimations, the GARCH coefficients are statistically significant, suggesting that once a shock has occurred, volatility tends to persist for long periods. Besides, the leverage coefficients in all the models are positive and statistically significant, signifying a leverage effect in which positive shocks (good news) have a greater impact on the conditional volatility of the India VIX than negative shocks (bad news) of equal magnitude. Moreover, the ARCH-LM test statistics are insignificant in all the estimated EGARCH models, confirming the absence of any further ARCH effects.
Table 3 provides the empirical results of the scheduled monetary and macroeconomic announcement effects on the India VIX, estimated with all the variables for the report day, one day before and one day after the scheduled announcements included. From Models 7 and 8, it is clearly seen that, except for MCIR, the slope coefficients of EXPORT, FISCAL DEFICIT, GDP, IIP, IMPORT and INFLATION on the report day are insignificant. The MCIR report-day coefficient is negative and statistically significant in both Models 7 and 8, confirming that the IVIX falls on the report day of MCIR announcements. It is also seen that no slope coefficients for the day before the scheduled announcements are statistically significant in either the OLS or the EGARCH model. Moreover, the findings reveal that the slope of the MCIR announcements is positive and statistically significant in both models, implying that the IVIX rises after the scheduled MCIR announcements. In the EGARCH estimate, the GARCH effect is statistically significant, suggesting that once a shock has occurred, volatility tends to persist for long periods. Besides, the leverage coefficient is positive and statistically significant, indicating that positive shocks (good news) have a greater impact on the conditional volatility of the India VIX than negative shocks (bad news) of equal magnitude.
Conclusions
The present study examined the effects of scheduled monetary and macroeconomic announcements on the India VIX using an OLS regression model and the EGARCH model. The study is based on the behavior of the India VIX around a series of scheduled macroeconomic news releases, viz. EXPORT, FISCAL DEFICIT, GDP, IIP, IMPORT, INFLATION (CPI/WPI) and MCIR, for the period 2nd March 2009 to 31st August 2016. It was anticipated that the IVIX would decrease on the day of the scheduled macroeconomic announcements and on the day after the information release. However, the empirical results show that the information content of macroeconomic news on the report day and the day after does not significantly influence the India VIX, except for MCIR. Besides, the findings reveal no significant response of the India VIX during the day before the scheduled news announcements. This is because the India VIX market is more uncertain before the declaration of the results of the MCIR convention of the RBI, Export, Import, Fiscal Deficit, GDP, IIP and Inflation (CPI/WPI). Based on previous studies, it was postulated that before the announcement of macroeconomic news the implied volatility index increases significantly, and that on the day of the announcement, the uncertainty about the news is resolved and the VIX returns to its customary level. Moreover, the efficient market hypothesis holds that if the market is efficient, it responds to important macroeconomic information releases; accordingly, investors should take these news releases into account in their portfolio selection. However, the major findings of the study suggest that investors need not consider the scheduled macroeconomic announcements, except the MCIR meeting day and one day after, in option pricing or financial planning, owing to the inefficiency of the Indian options market. These results contradict the findings of Shaikh and Padhi [13] in the Indian context. The study can be extended to examine the response of the India VIX to economic news from Europe, the US and other emerging economies. The present study is based on an index-level investigation; thus, there is scope for examining the behaviour of India's implied volatility in response to firm-specific announcements. Further studies may examine the predictive power of the volatility index and the co-movements of the IVIX with other global volatility indices.
The macroeconomic scheduled announcements considered for the study are Gross Domestic Product (GDP), Index of Industrial Production (IIP), Fiscal Deficit, Export, Import, Inflation (represented by CPI/WPI) and MCIR. As per the IMF (International Monetary Fund) guidelines, every country has to maintain the Special Data Dissemination Standard (SDDS); under this provision the RBI disseminates the Monetary and Credit Information Review (MCIR), which mainly consists of announcements on the Reverse Repo Rate, the Repurchase Rate and the Cash Reserve Ratio. The sample period for the study is 2nd March 2009 to 31st August 2016. The macroeconomic indicators considered in the study are released according to their schedule after the normal market opens, around 11.00 am (IST). Accordingly, the content of the news enters the market on the day of the actual announcement. Daily closing prices of the India VIX are taken from the website of the National Stock Exchange of India (NSE), and the scheduled announcement dates for each indicator, viz. Gross Domestic Product (GDP), Index of Industrial Production (IIP), Fiscal Deficit, Export, Import, Inflation (represented by CPI/WPI) and MCIR, are collected from the Bloomberg database.
Figures in parentheses ( ) indicate p-values. * denotes significance at the one percent level.
Figure 1. Time series plot of the IVIX index.
Table 2. Impact of macroeconomic announcements on the implied volatility index (dependent variable: log-returns of IVIX).
* denotes significance at the one percent level. Parentheses ( ), brackets [ ] and braces { } indicate t-values, z-values and probability values, respectively.
Table 3. Results based on the report day, one day before and after scheduled announcements (dependent variable: log-returns of IVIX).
"year": 2017,
"sha1": "3bba5962c8e48ac1e1c59935aeee98006611151a",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=75420",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3bba5962c8e48ac1e1c59935aeee98006611151a",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
An Updated Model of Chronic Ankle Instability
Lateral ankle sprains (LASs) are among the most common injuries incurred during participation in sport and physical activity, and it is estimated that up to 40% of individuals who experience a first-time LAS will develop chronic ankle instability (CAI). Chronic ankle instability is characterized by a patient’s being more than 12 months removed from the initial LAS and exhibiting a propensity for recurrent ankle sprains, frequent episodes or perceptions of the ankle giving way, and persistent symptoms such as pain, swelling, limited motion, weakness, and diminished self-reported function. We present an updated model of CAI that aims to synthesize the current understanding of its causes and serves as a framework for the clinical assessment and rehabilitation of patients with LASs or CAI. Our goal was to describe how primary injury to the lateral ankle ligaments from
Ankle sprains are among the most common injuries in the general population, and the injury reported most frequently by competitive athletes. [1][2][3] Damage to the lateral ligaments of the ankle accounts for the majority of ankle sprains, regardless of patient demographics. 4,5 The prevalence of lateral ankle sprains (LASs), coupled with high rates of reinjury, persistent symptoms, and reduced self-reported ankle function, makes LASs and their sequelae a public health concern. 6,7 Chronic ankle instability (CAI) is a condition characterized by repetitive episodes or perceptions of the ankle giving way; ongoing symptoms such as pain, weakness, or reduced ankle range of motion (ROM); diminished self-reported function; and recurrent ankle sprains that persist for more than 1 year after the initial injury. Specific diagnostic criteria for CAI have been recommended by the International Ankle Consortium. 8 Doherty et al 9 performed a prospective study of patients with first-time ankle sprains who sought treatment in a hospital emergency department and found that 40% had developed CAI, as defined by these criteria, at 12-month follow-up.
Our understanding of the factors contributing to CAI has evolved over the past 6 decades. Freeman et al [10][11][12] presented the first comprehensive theory of ankle instability in 1965. They coined the term functional instability, which they operationally defined as ''the disability to which patients refer when they say that their foot tends to 'give way' in the months and years after initial ankle sprain.'' 12(p678) It must also be noted that the ankle giving way was not the patients' only complaint: Every patient whose foot gave way stated that such incidents occasionally caused the ankle to be painful or swollen, sometimes to such an extent that the ankle could be said to have been sprained. . . . For this reason no patient complained only of a tendency for the foot to ''give way.'' 10(p666) Freeman 11 was adamant that mechanical instability due to pathologic laxity of the ankle was only rarely the initial cause of the functional instability of the foot. Mechanical instability was specifically defined as increased varus tilt of the talus under inversion stress. Instead, Freeman et al 12 asserted that (1) the afferent nerve fibres in the capsule and ligaments of the foot and ankle subserve reflexes which help to stabilise the foot during locomotion, and (2) when the foot or ankle is ''sprained'' partial deafferentiation of the injured joints occurs, so that (3) reflex stabilisation of the foot is impaired and the foot tends to ''give way.'' 12(p678) Additionally, Freeman et al 12 provided evidence that patients who performed coordination exercises during their recovery from ankle sprains demonstrated a lower incidence of functional instability.
Tropp et al [13][14][15][16][17] conducted a series of studies in the 1980s that aimed to further the understanding of the causes of CAI. Using the mechanical instability-functional instability dichotomy as a starting point, 15 they concluded that functional instability could not be due to proprioceptive (ie, sensory) deficits alone, as originally hypothesized by Freeman et al, [10][11][12] but was also due to changes in the motor component of sensorimotor control, particularly impaired postural control, [13][14][15] diminished ankle-eversion strength, 16 and alterations in motor control of the muscles proximal to the injured ankle. 17 This led to a shift in the literature from describing functional instability as a persistent symptom after LAS, as originally described by Freeman et al, [10][11][12] to the idea that functional instability represented the sensorimotor cause of persistent injury. Functional instability was viewed as contrasting with mechanical instability, due to pathologic ankle-joint laxity, as the cause of recurrent and persistent instability after LAS. 15 Hertel 18 published a comprehensive literature review on functional ankle instability in 2000 that summarized the evidence of sensorimotor deficits related to ankle instability, including impairments in balance, joint position sense, peroneal muscle reaction time to inversion perturbation, peripheral nerve-conduction properties, muscle strength, and ROM. Two years later, Hertel 19 presented an expanded model consisting of a Venn diagram with 2 overlapping circles representing the potential mechanical and functional (sensorimotor) contributions to CAI. In this model, the condition was explicitly labeled CAI in an effort to avoid the confusion over whether functional instability was the involved deficit or a potential cause of the involved deficit. 
19 Additionally, the terms mechanical instability and functional instability were not used in the model; instead, mechanical insufficiencies and functional insufficiencies were described as specific contributors to the development of CAI. 19 Mechanical insufficiencies in the model included pathologic laxity, arthrokinematic restrictions, degenerative changes, and synovial changes, whereas functional insufficiencies included impairments in proprioception, neuromuscular control, strength, and postural control. 19 The components of both mechanical and functional instability could now be named, described, and studied to show the relationships within and between the interrelated causes. This model suggested that when insufficiencies were identified clinically in individual patients, treatments to address the specific insufficiencies could be developed in an effort to improve patient outcomes. 19 In 2011, Hiller et al 20 proposed an extension of the Hertel 19 model in the form of multiple clinical subgroups for classifying patients with CAI: mechanical instability, perceived instability, and recurrent sprains, or combinations of these 3 conditions. 20 Importantly, the authors validated their model by fitting patients with CAI into the predetermined subgroups. Evaluating 108 ankles with CAI, they found that 56% fit into 1 of the 3 primary categories and 44% did not. 20 However, all of the ankles did fit into 1 of the 7 subgroupings when the primary categories were combined. 20 Although the Hiller et al model 20 evolved the understanding of CAI, the advent of new evidence in areas such as self-reported function, health-related quality of life, kinesiophobia, altered movement patterns, and physical activity levels, as well as contemporary injury paradigms such as the biopsychosocial model, 21,22 dynamic systems theory, [23][24][25][26] and neuromatrix of pain theory 27,28 emphasizes the need for an updated model. 
The purpose of our article was to describe an updated model that provides a theoretical framework for the contemporary understanding of the causes of CAI while simultaneously offering a framework for clinicians evaluating and treating patients with LASs and CAI.
UPDATED MODEL OF CAI
The updated model of CAI has 8 primary components: (1) primary tissue injury, (2) pathomechanical impairments, (3) sensory-perceptual impairments, (4) motor-behavioral impairments, (5) personal factors, (6) environmental factors, (7) component interactions, and (8) the spectrum of clinical outcomes (Figure 1). All patients with CAI will have had a primary injury to the anterior talofibular ligament (ATFL) and possibly the calcaneofibular ligament (CFL) at the time of their index LAS. Each specific impairment listed under the categories of pathomechanical, sensory-perceptual, and motor-behavioral impairments is a factor that has been identified in the literature as being different between patients with CAI and healthy participants without a history of LAS. The list of many specific impairments in the model is not meant to imply that every patient with CAI will present with each individual impairment; instead, these are characteristics that the patients as a group are likely to demonstrate. Patient-specific personal and environmental factors play critical roles in how an individual responds to injury and its consequences. 21,22 The component interactions are drawn from dynamic systems theory [23][24][25][26] and the Melzack neuromatrix theory of pain 27,28 and are used to hypothesize how the primary tissue injury, the 3 categories of impairments, and personal and environmental factors may interrelate to produce a patient's clinical outcome. Lastly, the spectrum of outcomes ranges from a fully successful recovery (coper) to an indisputably unsatisfactory outcome (CAI).
Primary Tissue Injury
For CAI to develop, a patient must first sustain an index LAS. Lateral ankle sprains are typically caused by excessive supination of the rearfoot on an externally rotated tibia. These injuries are often referred to as inversion ankle sprains, but this term represents a reductionist approach to describing the mechanism of injury and ignores the oblique axes of rotation of the talocrural and subtalar joints. 19 Through robust analysis of several LASs that occurred in athletes and were captured on video, the kinematics of the injury mechanism were shown to consist of both excessive inversion and internal rotation of the rearfoot on the tibia. 29,30 Interestingly, this work has also challenged the dogma of LASs as plantar flexion-inversion injuries by demonstrating that in some athletes, the peak angles and angular velocities of inversion and internal rotation occurred not while the ankle was in plantar flexion but when it was in sagittal-plane neutral or dorsiflexed. 30 Perhaps the term inversion-internal-rotation sprain would be a more apt kinematic description of the mechanism of injury for LAS.
The ATFL is the ligament injured most commonly during an LAS. 1 Concurrent injury of the CFL is present in many more severe ankle sprains. 1 Clinicians must be cognizant of other potential injuries when evaluating patients who have experienced an inversion-internal-rotation mechanism of injury, including but not limited to fibular fracture, fifth metatarsal fracture, osteochondral lesion of the talus, high ankle sprain (injury to the anterior inferior tibiofibular ligament and tibiofibular syndesmosis), subtalar-joint sprain, bifurcate ligament sprain, fibularis tendon and retinacular lesions, and injury to the superficial fibular, tibial, or sural nerve.
An initial LAS results in stretching or disruption of the collagen fibers of the lateral ligaments, causing structural tissue damage. After an LAS, patients quickly develop the clinical signs and symptoms of pain, swelling, and inflammation. Simultaneously, but often less obviously, alterations in sensorimotor function also occur. Together, the injured tissues, accompanying inflammatory responses, and the patient's psychological and emotional responses to the injury (eg, pain and mechanical and sensorimotor alterations in response to ligamentous injury) drive the specific impairments that can cause an individual to deviate from successful healing toward CAI.
Pathomechanical Impairments
Pathomechanical impairments are operationally defined as structural abnormalities to the ankle joint and surrounding tissues, secondary to an index LAS, that contribute to ankle dysfunction and CAI. The impairments in this category represent the biological component of the biopsychosocial model.
Pathologic Laxity. Loss of the structural integrity of the lateral ankle ligaments results in pathologic laxity of the talocrural joint and possibly the subtalar joint. This laxity represents the mechanical instability described in earlier models of CAI. Disruption of the ATFL is associated with increased anterior drawer, or translation, of the talus within the tibiofibular mortise. Although most often evaluated with a common physical examination test, increased anterior translation of the talus has also been consistently demonstrated among patients with CAI using objective measurements such as instrumented arthrometry [31][32][33] and stress radiographic 34 and ultrasound imaging. 35,36 Excessive internal rotation of the talus on the tibia has also been described in relation to lateral ankle instability. [37][38][39] The anterolateral drawer test is performed by passively internally rotating the rearfoot while stabilizing the tibia. [37][38][39] The absence of a firm end feel at maximal internal rotation indicates a rupture of the ATFL. In some patients with extensive laxity, a ''clunk'' of the talus may be felt, similar to that found with a positive pivot shift test in a patient with an anterior cruciate ligament-deficient knee. Although this test is popular in some orthopaedic circles, [37][38][39] further research is needed to validate the diagnostic properties of the anterolateral drawer test.
Integrity of the CFL is most often assessed using the inversion stress test. This test is performed by passively inverting the rearfoot to its end ROM. Similar to the anterior drawer test, the inversion stress test has also been quantified using arthrometry. [31][32][33]40 The CFL may be better isolated by conducting the inversion stress test in a dorsiflexed position, whereas the integrity of both the CFL and ATFL can be evaluated by performing the test in a plantar-flexed position. 41 Clinicians must also be cognizant of the potential for increased laxity in adjacent joints, including the distal tibiofibular and subtalar joints, as a subset of patients with LAS and CAI presents with instability of these joints. 1 Evidence of an initial increase in laxity after acute LAS and a subsequent return toward preinjury laxity in the weeks and months afterward has been reported in a few prospective studies 42,43 ; some residual laxity is likely to remain in most patients who incur an LAS.
Arthrokinematic Restrictions. In contrast to pathologic laxity, particular accessory joint motions may be limited after LAS or with CAI. Over the past 2 decades, substantial advances in our understanding of arthrokinematic restrictions in the ankle and foot complex have emerged in the manual therapy literature. [44][45][46][47] Restrictions in anterior-to-posterior glide of the talus on the tibia have been well documented as being associated with limited osteokinematic dorsiflexion of the talocrural joint in patients with lateral ankle instability. 44,45,48 Also, small amounts of anterior displacement of the talus on the distal tibia may be associated with restricted glide of the talus. 49 Furthermore, many patients have demonstrated anterior displacement of the distal fibula relative to the tibia and associated restriction of anterior-to-posterior glide of the distal fibula. 50,51 Lastly, the potential for arthrokinematic restrictions at the subtalar, midtarsal, and tarsometatarsal joints has also been described. 46,47

Osteokinematic Restrictions. Patients recovering from LAS or with CAI often demonstrate restricted dorsiflexion ROM. Possible causes of this deficit include the previously mentioned restriction of anterior-to-posterior talar glide and soft tissue restrictions in the triceps surae. These soft tissue restrictions may be due to inflexibility of the musculotendinous structures, neuromuscular spasm mediated by the γ motor-neuron system, myofascial constraints, or a combination of these. 45 Patients with longstanding CAI may also exhibit limitations in foot and ankle motion in multiple planes as a consequence of osteoarthritis in the ankle complex. 52,53

Secondary Tissue Injury. As mentioned earlier, clinicians must be vigilant in assessing concomitant injuries to structures other than the lateral ligaments in patients who have sustained LASs.
Similarly, repetitive bouts of excessive inversion-internal rotation, which may result in recurrent ankle sprains or less severe giving-way episodes, can result in further insult to the ATFL and CFL as well as secondary tissue damage about the ankle complex. Of particular concern are lesions of the fibularis longus and brevis tendons, the osteochondral surfaces of the talus and tibia, the synovial membrane of the talocrural and subtalar joints, and the ligaments of adjacent joints on the medial side of the ankle. 54 Ultimately, ankle osteoarthritis can be a serious sequela of CAI. 55

Tissue Adaptations. Injured tissues will adapt to the demands placed on them over time and may develop alterations that are not identifiable on routine physical examination. For example, the involved ATFL of both CAI and coper groups has been demonstrated to be substantially thicker than in healthy controls who have never incurred an LAS. 56 Additionally, subclinical alterations in the osteochondral surface of the talus, as identified by higher T1ρ 57 and T2 58 relaxation times during advanced magnetic resonance imaging, have been identified in patients with CAI compared with controls. Also, volume alterations have been seen in the intrinsic and extrinsic foot muscles of patients with CAI. 59 Clinicians should be mindful that such ''hidden'' structural changes may be contributing to specific impairments identified during the physical examination and functional testing of these patients.
Sensory-Perceptual Impairments
Sensory-perceptual impairments are operationally defined as conditions that the patient senses or feels about the body, the injury, or the self. These impairments represent physiological constructs such as somatosensation (bio in the biopsychosocial model), psychophysiological constructs such as pain (biopsycho), and psychosocial constructs such as kinesiophobia. These latter 2 constructs represent the patient's perceptions of the injury and the effects they have on his or her well-being. This grouping purposely includes impairments that involve both conscious and unconscious sensation and perception.
Diminished Somatosensation. Several domains of somatosensation have been noted to be impaired in patients with CAI. These impairments are hypothesized to occur because of damage to the ligamentous and articular proprioceptors during injury and possible nerve injury secondary to ligament injury. Deficits have been reported in both the active and passive joint position sense of frontal- and sagittal-plane ankle motion, with CAI groups demonstrating more proprioceptive errors. 60,61 The inability of patients with CAI to accurately sense the position of their ankle joint before initial contact during gait or landing has been theorized to increase the risk of recurrent ankle sprain because the foot is likely to contact the ground in a position that predisposes the ankle to move into supination rather than pronation during the loading response. 62 Measures of force sense in all directions of ankle motion among patients with CAI have indicated that the ability to sense and regulate muscle-contraction output is impaired after joint injury, even in the absence of musculotendinous injury. [63][64][65][66][67][68] Interestingly, weak and nonsignificant correlations were found between measures of active position sense and force sense in patients with CAI, suggesting that these measures assess different constructs of somatosensation. 69 Differences in cutaneous sensation have also been demonstrated between CAI and control groups. The CAI groups have displayed poorer plantar sensation as evaluated with both vibrotactile stimuli 70 and Semmes-Weinstein monofilaments 71,72 at the heel, base of the fifth metatarsal, and head of the first metatarsal. Burcal and Wikstrom 72 observed impaired sensation over the sinus tarsi in both CAI and coper groups versus a healthy control group. Interestingly, the sinus tarsi was the only site at which the coper group exhibited sensory deficits, whereas the CAI group had deficits in plantar sensation in addition. 72

The ability to integrate different sensory inputs appears to be compromised in CAI. Song et al 73 performed a meta-analysis to investigate postural control in eyes-open and eyes-closed positions. Compared with healthy controls, patients with CAI relied more heavily on visual information than somatosensory information during unipedal-stance balance tasks. Additionally, those with CAI appeared to be unable to dynamically reweight sensory inputs to the same extent as healthy controls. 74 The physiological mechanism of these differences is currently unknown. To date, only 1 group 75 has evaluated somatosensory cortex activity in patients with ankle instability; they found no differences in electroencephalography-derived somatosensory cortex activity during a controlled ankle-joint-loading task among CAI, coper, and healthy groups.
Pain. Pain is a hallmark of most chronic musculoskeletal conditions. Surprisingly, quantification of pain has received relatively little attention in the CAI literature, 76 although clinical experience tells us that persistent pain is a common reason for patients with CAI to seek health care. The Melzack neuromatrix theory of pain 27,28 indicates that in chronic pain conditions, the pain is generated not exclusively from the sensory input evoked by injury, inflammation, or other damage at the site of symptoms but is instead produced by the output of the neuromatrix, a widely distributed neural network in the brain. Chronic psychological and physical stress associated with chronic pain can further diminish a patient's ability and willingness to participate in functional activities. 27,28 The influence of pain on other impairments commonly seen among patients with CAI is likely to be clinically important, but currently these relationships are poorly understood.
Perceived Instability. A common complaint of those with CAI is the perception that the ankle is unstable or that it is at risk of giving way during functional activities. Patients reporting perceived instability may or may not actually experience episodes of excessive ankle inversion; however, the perception of instability represents a clinically important impairment. 20 The Cumberland Ankle Instability Tool (CAIT) 77 and Identification of Functional Ankle Instability (IdFAI) 78 questionnaire have both been widely used in the CAI literature as screening tools. Both survey instruments ask individuals to self-report the frequency and circumstances of the perceived instability episodes. The CAIT consists of 9 questions, 1 about pain and 8 about perceived instability. A score of <27 points out of a possible 30 points was originally considered the threshold for identifying functional ankle instability. 77 However, a CAIT score of ≤24 is now considered a diagnostic criterion of CAI. 79 The IdFAI consists of 10 questions about ankle-sprain history and perceived instability. A score of ≥11 out of a possible 37 points is necessary for a diagnosis of CAI. 78

Kinesiophobia. Fears of movement and reinjury during functional activities have been reported in patients with CAI. 80 Kinesiophobia is most often assessed with the Fear-Avoidance Beliefs Questionnaire 81 and the Tampa Scale for Kinesiophobia (TSK-11). 82 The Fear-Avoidance Beliefs Questionnaire is a 16-item survey that addresses the fear of movement during physical activity and work. 81 The TSK-11 is an 11-item questionnaire that assesses fears of movement and reinjury. 82 The perception that movement of the involved ankle will be harmful runs counter to the emphasis on therapeutic exercise as a primary treatment for CAI and represents an important obstacle to be managed when treating this condition.
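The questionnaire cutoffs cited above can be expressed as a simple screening rule. This is an illustrative sketch only: the IdFAI cutoff direction (higher scores indicating more instability) is assumed here, and a positive screen is not by itself a diagnosis, since the International Ankle Consortium criteria also require injury history and a timeline beyond 12 months:

```python
def flags_cai(cait=None, idfai=None):
    """Screen for possible chronic ankle instability (CAI) from
    questionnaire scores, using the cutoffs cited in the text:
    CAIT <= 24 (of 30 possible) or IdFAI >= 11 (of 37 possible).
    The IdFAI cutoff direction is assumed (higher = more instability).
    A positive screen is not a diagnosis: the International Ankle
    Consortium criteria also require sprain history and a >12-month
    timeline."""
    cait_positive = cait is not None and cait <= 24
    idfai_positive = idfai is not None and idfai >= 11
    return cait_positive or idfai_positive

print(flags_cai(cait=22))           # True: 22 <= 24
print(flags_cai(cait=28, idfai=8))  # False: neither cutoff crossed
```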
Self-Reported Function. Reduced self-reported function has been consistently demonstrated in patients with CAI. 80,[83][84][85] These deficits have most often been identified using a region-specific questionnaire such as the Foot and Ankle Ability Measure (FAAM). 86 The FAAM consists of a 21-item Activities of Daily Living (ADL) scale and an 8-item Sports scale; it requires patients to rate their difficulty when performing specific ADL or sport activities due to their involved ankle. 86 Measures of self-reported function provide insight into the types of actions and activities these patients are able to perform.
Health-Related Quality of Life. Measures of health-related quality of life (HRQOL) were diminished in patients with CAI. 80,85,87 Global, or generic, HRQOL focuses on broader concerns, such as mood, vitality, and social interactions, that are not as directly linked to ankle function as are the items on region-specific function scales. The most commonly used HRQOL scales in medicine are the Short Form-36 and Short Form-12 questionnaires. These scales are particularly adept at tracking HRQOL in patients with chronic conditions. Both have physical health and mental health subscales. Patients with CAI have displayed deficits in physical HRQOL but not in mental HRQOL. 87 A criticism of the Short Form scales is that they may not be appropriate for athletic or otherwise highly physically active populations because of a ceiling effect in their psychometric properties. 88 In response to this weakness, the Disability in the Physically Active Scale was developed to more accurately assess HRQOL in this population. 89,90 Using the Disability in the Physically Active Scale, Houston et al 80 demonstrated a large deficit in HRQOL among patients with CAI.
Motor-Behavioral Impairments
Motor-behavioral impairments among patients with CAI constitute deficiencies and alterations in muscle contractility, motion patterns, and physical activities that they choose to partake in or avoid. These factors constitute the motor aspect of sensorimotor function. All impairments in this category fall into the bio construct of the biopsychosocial model except for the reduced physical activity impairment, which includes both a bio component related to the physiological costs and benefits related to exercise and physical activity and a psychosocial component representing intentional behavior.
Altered Reflexes. A large body of literature has examined muscle-contraction timing and amplitude in response to inversion perturbations of the ankle. 91 The most common measures were the electromyographic, force, and kinematic responses to inversion of a platform with a trapdoor mechanism that caused the ankle to be suddenly inverted or, in some designs, concomitantly plantar flexed and inverted. Participants were typically in bipedal stance when 1 foot was perturbed, although some researchers [92][93][94] have tested participants during walking. In a meta-analysis, Hoch and McKeon 91 found delayed reaction time of the fibularis longus and brevis muscles in response to sudden-inversion perturbations in patients with CAI. The delayed motor response may be due to alterations in somatosensation, nerve conduction velocity, or central processing of the monosynaptic stretch reflex. Regardless of the physiological source, delayed contraction of the fibularis muscles results in an electromechanical delay in the ability to create an eversion force to counteract the ankle moving quickly into inversion. 91

Neuromuscular Inhibition. Arthrogenic muscle inhibition has been well documented in chronically unstable ankles, 95 most often by assessing the H-reflex response in the fibularis longus muscle. The H-reflex is an electrically induced surrogate of the monosynaptic stretch reflex and represents spinal-level motor control. Participants receive transdermal electrical stimulation of a motor nerve, and the H-reflex output is measured via surface electromyography of the muscle of interest. Several groups have reported diminished H-reflex amplitude in the fibularis longus 92,96 and soleus. 97 Kim et al 98 also found that the ability to modulate the H-reflex in the fibularis longus and soleus muscles across different postural positions (ie, moving from lying prone to bipedal stance or from bipedal stance to unipedal stance) was impaired in patients with CAI.
Additionally, they were unable to modulate paired reflex depression of the soleus during positional changes similarly to healthy participants and demonstrated greater levels of recurrent inhibition of the soleus. 99 The inhibition of muscles proximal to the ankle has also been reported in patients with CAI. Using measures of central activation, investigators 100 observed that patients with unilateral CAI had bilateral inhibition of the hamstrings muscles and ipsilateral facilitation of the quadriceps muscles compared with healthy controls. Remarkably, impaired contractility of the diaphragm muscle has also been reported in patients with CAI, indicating that proximal muscle function was affected not only in the lower extremity musculature but also in the trunk. 101 In recent years, the influence of supraspinal motor control in patients with CAI has been studied using measures of motor-cortex excitability and inhibition. Electromyographic measures are taken from peripheral muscles immediately after transcranial magnetic stimulation of the motor cortex in areas of the homunculus specific to the muscles of interest. Higher resting 102 and lower active 103 motor thresholds of the fibularis longus were present bilaterally in patients with unilateral CAI. Kosik et al 104 identified less fibularis longus recruitment map volume and area in the motor cortex among patients with CAI than healthy individuals, suggesting that the former had a more concentrated and restricted area of neurons able to recruit the fibularis longus muscle. Altered balance between corticospinal inhibition and excitability of the soleus among patients with CAI compared with healthy controls has also been suggested. 105 Correlations between measures of cortical excitability and ankle laxity 106 and self-reported function 107 have been reported among patients with CAI.
Muscle Weakness. The clinical assessment of muscle function among patients with CAI most often relies on measures of strength using manual muscle tests. Using a handheld dynamometer, Fraser et al 108 recently reported that patients with CAI were weaker than healthy controls in isometric eversion, inversion, and plantar flexion but not in dorsiflexion.
Donnelly et al 109 also demonstrated deficits in isometric eversion strength but no differences in corresponding surface electromyography amplitude of the fibularis longus and brevis muscles. Interestingly, eversion force and electromyographic amplitude were significantly correlated in the healthy group but not the CAI group, indicating an uncoupling of muscle contractility and force production among patients. Additionally, Terrier et al 110 described a weight-bearing test of eversion strength that discriminated between CAI and healthy groups.
Ankle strength among patients with CAI has been studied extensively using isokinetic dynamometry. Meta-analyses 111,112 have shown consistent concentric eversion-strength deficits in patients with CAI. Deficits have also been reported in concentric inversion 113,114 and plantar-flexion 115,116 strength and eccentric eversion, 117,118 inversion, 118,119 plantar-flexion, 120 and dorsiflexion 121 strength.
Weakness of the muscles proximal to the unstable ankle, including deficits in concentric knee flexion and extension, 116 isometric hip abduction, 115,121 extension, 115,121 external rotation, 121,122 and eccentric hip flexion, 123 has also been identified among patients with CAI. Distally, weakness in hallux and lesser toe-flexion strength 108 and diminished volume of the flexor hallucis brevis and adductor hallucis oblique muscles 59 have been reported in patients.
Balance Deficits. The relationship between ankle instability and balance deficits was first noted by Freeman et al 10-12 more than 50 years ago. In the ensuing decades, dozens of researchers have described balance, or postural-control, deficits in patients with CAI. The most common balance tasks reported in the literature were maintenance of quiet unipedal stance 124 and the Star Excursion Balance Test (SEBT). 125 The former represents static balance, or the ability to remain as still as possible while standing on 1 leg, whereas the latter represents dynamic balance, which requires the participant to reach as far as possible in a prescribed direction with 1 leg while maintaining balance on the other limb. Balance deficits among patients with CAI may be due to somatosensory impairments, motor impairments, or both.
Static balance is typically assessed with a participant performing trials in eyes-open and then in eyes-closed conditions. Assessment of static balance may consist of no-, low-, or high-technology methods. No-technology assessment relies on patient or clinician judgment to subjectively identify impairment while the patient with unilateral CAI balances on the involved limb compared with the uninvolved limb. 12 A low-technology approach to measuring static balance assesses the amount of time a patient can maintain unipedal stance. 126 Performing the unipedal components of the Balance Error Scoring System on firm and foam surfaces by counting the number of predefined errors during a 20-second trial is another low-technology approach that has been used to quantify balance deficits among patients with CAI. 126,127 The most common high-technology approach to measuring balance is to have a participant maintain single-limb stance while standing on a force plate that measures 3-dimensional forces and moments. 124 Although dozens of force-plate measures have been reported in the CAI literature, 60,124,128 the key conclusion is that balance deficits have been consistently demonstrated in these patients. Generally, the measures evaluate the magnitude, velocity, or variability of postural sway. Song et al 73 postulated that patients with CAI did not use somatosensory information to the same extent as healthy controls but instead relied more heavily on visual input to maintain unipedal stance.
The SEBT, originally conceived as a "no-tech" measure of dynamic balance, has been used extensively to identify deficits in patients with CAI, who are unable to reach as far as healthy controls. 125 Surprisingly, the reach deficits have been shown to be more strongly related to diminished knee and hip flexion than to limited ankle dorsiflexion. 129 Similarly, diminished hip-abduction and external-rotation strength has also been correlated with reduced reach distances in patients with CAI. 121 In addition, patients exhibited more trunk and pelvis rotation when executing select SEBT reach tasks. 130
Altered Movement Patterns. Individuals with CAI displayed altered movement patterns in a spectrum of functional activities, including walking, running, cutting, and landing, compared with control participants. Such alterations have been demonstrated using biomechanical measures of kinematics, kinetics, plantar pressure, and electromyography.
During walking, patients with CAI tend to exhibit greater inversion and plantar flexion of the foot relative to the tibia, a more laterally deviated center of pressure throughout stance, and alterations in fibularis muscle activation. 131 Biomechanical alterations during jogging and running tend to mimic those seen during walking. 131 The kinematic changes were amplified using a dual-task paradigm in which participants performed a cognitive task while ambulating. 132 A more inverted foot is likely to lead to an LAS. Investigators 133 have speculated that because the foot tends to be more inverted during midswing in patients with CAI, the fibularis muscles must activate during late swing to actively move the foot into eversion in preparation for initial contact. This is in contrast to healthy individuals, who typically contract the fibularis longus muscle after initial contact, as part of the loading response, in which the muscle contracts to plantar flex the first ray. 133 This contraction would be associated with a medial displacement of the center of pressure as the foot also everts. If the fibularis longus is already contracted before initial contact, as it is in these patients, it cannot be contracted again to plantar flex the first ray during the loading response. This likely explains why the foot remains more inverted and the center of pressure stays more lateral throughout the stance phase among patients with CAI. 133 Although gait alterations in CAI are often described in terms of greater inversion, an alternative view associates CAI with less eversion and, hence, less pronation during the stance phase. This may be why patients with CAI produce greater impact force and a faster loading rate of the vertical ground reaction force during the loading response. 134 Alterations in the stride-to-stride variability of various gait factors have also been reported in patients with CAI.
However, increases and decreases in variability have both been seen. These discrepancies likely depend on gait speed (walking, running), task constraints (fatigue, dual tasking), the specific biomechanical measure being analyzed, and the method used to calculate variability (linear, nonlinear). Increased variability in frontal-plane ankle kinematics during running among patients with CAI has been reported using linear variability calculations based on intraindividual standard deviations across multiple steps. 135,136 During walking, ankle frontal-plane kinematic variability was amplified in patients with CAI during a dual-task paradigm. 132 Conversely, Terada et al 137 reported less stride-to-stride variability in frontal-plane ankle kinematics among patients using measures of sample entropy, a nonlinear variability estimate, during single-task walking.
Another approach to analyzing stride-to-stride variability is to assess the kinematic coupling behavior, or coordinated movement, of different segments of the lower extremity using vector-coding techniques. Patients with CAI have less variability in coupling between transverse-plane shank and frontal-plane rearfoot motion during walking and jogging. 138,139 Differences in coupling variability have also been examined between ankle motion and more proximal joints. During walking, patients with CAI have demonstrated less variability in frontal-plane ankle-hip coupling 140 and greater variability in coupling between ankle frontal-plane and knee sagittal-plane motions. 141 During jogging, patients have exhibited less coupling variability between the ankle-hip and the ankle-knee in both frontal- and sagittal-plane motions. 141 Patients with CAI have also been reported to require a higher level of gait disturbance, defined by alterations in walking speed and dual tasks, to reduce stride-time variability, a spatiotemporal gait measure, compared with healthy controls. 142 This change was hypothesized to be due to less adaptability of the sensorimotor system in response to task constraints. 142 Koldenhoven et al 143 observed that patients with CAI had greater stride-to-stride variability in the location of their center of pressure during the first 10% of stance phase during walking compared with controls but no differences later in the stance phase, despite their center of pressure staying more lateral. 144 Interestingly, the patients with CAI also exhibited less variability in electromyographic amplitude of the fibularis longus muscle throughout the swing phase and the beginning of the stance phase. 143
During cutting tasks requiring rapid lateral movement, patients with CAI activated the fibularis longus earlier than healthy controls, in a manner similar to that seen while walking. 145 Reduced amplitude of fibularis longus surface electromyographic activity both before and after initial contact has been noted, 145,146 as has activation of other ankle and hip muscles. 146 Patients have exhibited greater ankle-inversion 147 and less dorsiflexion 146 motion, as well as pronounced changes in knee and hip motion, 146,147 during cutting tasks. In terms of kinetics, patients with CAI have shown greater peak vertical ground reaction force, less time to peak force, 148 and increased external knee- and hip-extensor moments 149 during cutting tasks.
Alterations in single-limb landing tasks have also occurred in patients with CAI. In a recent systematic review, Simpson et al 150 concluded that patients with CAI tended to display altered kinematic, kinetic, and muscle-activation patterns during single-limb landings. They consistently landed in a more dorsiflexed position and underwent less sagittal-plane motion during the absorption phase of landing. 150 Higher peak vertical ground reaction forces and faster loading rates have also been reported in patients with CAI, indicating a stiffer landing strategy. 150 These landing strategies were associated with proximal kinematic and kinetic changes at the knee and hip. 149,151,152 Reduced fibularis longus muscle activation among patients with CAI has been seen in some studies, 150 but conflicting results 152 showed increased fibularis activation. Increased activation of the gluteus maximus muscle before initial contact has also been demonstrated in patients with CAI. 153
Reduced Physical Activity. Patients with CAI may avoid physical activity because of their ankle instability. College students with CAI took more than 2100 fewer steps per day than healthy counterparts with no history of ankle injury. 154 The long-term health consequences of reduced physical activity in patients with CAI are a concern that requires further study. It is also possible that these patients change the type of physical activities in which they choose to participate; however, this area has not been widely studied. Toward this purpose, Halasi et al 155 modified the Tegner Activity Scale, 156 a survey instrument that assesses changes in physical activity of patients with knee injuries, to be appropriate for patients with ankle injuries; however, this instrument has not been used widely in the CAI literature.
We chose to include reduced physical activity in the category of motor-behavioral impairments because participating or not participating in specific physical activities is a motor behavior that is distinct from the sensory and perceptual impairments described elsewhere in the model. Yet the specific impairments clearly interact, as described in the model.
Personal Factors
Individual patients will respond to injury in unique ways based on their own distinctive characteristics. Such characteristics are referred to as personal factors in the International Classification of Functioning model. 157 In our CAI model, we identify the personal factors of patient demographics, medical history, physical attributes, and psychological profile. Still, additional personal factors may influence a patient's response to injury. Demographic factors such as age, body mass index, and sex may have important biological influences on healing and other physiological processes after injury. A patient's medical history, including the presence of comorbidities, structural deficits due to past injury, and how an individual has recovered from previous injuries and illnesses, can affect the response to a new or recurrent injury. A patient's physical attributes, such as the level of strength and conditioning (ie, strength and flexibility) or skeletal alignment (ie, foot morphotype), can influence the response to and recovery from injury. Finally, an individual's psychological profile, including characteristics such as self-efficacy and anxiety, can play important roles in the response to injury. Our decision to exclude other potential personal factors is not meant to deny the importance of those factors but was an effort to simplify the presentation of the CAI model. Clinicians should be cognizant of how patient-specific personal factors may influence an individual's response to and recovery from acute and chronic ankle injury. 158
Environmental Factors
Factors outside of a patient's organism that may affect the response to injury are termed environmental factors in the International Classification of Functioning model 157 and are included in our CAI model. These factors include societal expectations the individual perceives regarding physical activity and sports participation as well as expectations for his or her role in home, family, work, and transportation activities. Social support networks can also play an important role in the response to and recovery from injury. Finally, a patient's access to health care facilities and providers can have a large influence on the type and frequency of health care received. Similar to how personal factors are portrayed in the CAI model, other environmental factors may be important to an individual patient; excluding any of these potential factors from our model is not meant to imply that such factors do not exist. In an effort to provide holistic care to each patient we evaluate and treat, clinicians should seek to identify and address any environmental factors that may influence a patient's recovery from injury. 158
From Impairments to CAI Manifestation
Chronic ankle instability is a heterogeneous injury in which individual patients present with unique combinations of pathomechanical, sensory-perceptual, and motor-behavioral impairments. Rather than positing multiple subgroups of patients in an effort to identify homogeneity among CAI patients, the updated model accounts for the heterogeneity of impairment presentation through the interactions of 3 conjectural constructs: self-organization, perception-action cycles, and neurosignature. The first 2 constructs are derived from the dynamic systems theory of motor control, [23][24][25][26] and the third stems from the Melzack neuromatrix theory of pain. 27,28
Self-Organization. Dynamic systems theory is a universal theory of science used to describe complex phenomena in a diverse array of disciplines. 26 Multilevel components influence human movement, including but not limited to cells, tissues, systems, organisms, and social constructs. At the crux of dynamic systems theory are principles indicating that the component levels are not equivalent to each other because the influence of 1 level on another level is typically nonlinear (eg, a small change at the tissue level can cause a large effect at the systems level); circular causality exists among levels, indicating the role of both feedback and feedforward relationships; relationships among levels change over time; and no predefined motor programs are directing system interactions. 26 The generation and control of specific movements are dictated by a process of self-organization that weighs the potential movement strategies available, given the relevant constraints, to achieve the desired movement goal. [23][24][25] Types of constraints include task, environmental, and organismic. Task constraints represent the limitations that govern how a movement may occur (eg, track athletes always race in a counterclockwise direction).
Environmental constraints are external to the organism and are due to the surroundings in which movement is being executed (eg, uneven grass surface or a flat, paved surface). The primary tissue injury and accompanying pathomechanical, sensory-perceptual, and motor-behavioral impairments in the model represent organismic constraints, which can influence how a patient with CAI moves and engages in physical activity. 25 The unique organismic constraints in an individual patient, coupled with the task and environmental constraints specific to a given situation, influence how a patient behaves and moves. During rehabilitation, a clinician may manipulate task and environmental constraints in an effort to generate a specific motor output aimed at addressing a specific impairment (eg, designing a balance exercise that specifically requires a large amount of fibularis muscle activation). 25
Perception-Action Cycles. A cyclical relationship exists between perception and action, meaning that perception (sensory input) influences action (motor output), and action affects perception, and the cycle repeats in perpetuity. 159 In the CAI model, the perception-action cycle represents the circular causality between sensory-perceptual impairments and motor-behavioral impairments. Understanding the inherent linkage between the sensory and motor contributions to CAI is essential to successful assessment and treatment of patients with this complex condition. An intervention that addresses a sensory-perceptual impairment alters motor behavior and vice versa.
Neurosignature. In his neuromatrix of pain theory, 27,28 Melzack proposed 4 core components that contribute to chronic pain conditions: (1) the body-self neuromatrix; (2) cyclical processing and synthesis producing a continuous neurosignature outflow; (3) sentient neural hubs in the brain where the flow of neurosignature is integrated into the flow of sensory inputs; and (4) an action neuromatrix, also influenced by the neurosignature, that produces movements aimed at achieving a desired goal. The neuromatrix comprises a series of neural networks throughout the brain that process sensory information and generate a stream of neurosignature output that contributes both to the body's perceptual and emotional sense of itself and to the production of movement. 27,28 The neuromatrix, and thus the neurosignature, is influenced by genetics and modified by lived experiences. 27,28 These lived experiences are incorporated as personal and environmental factors in our proposed model. Persistent pain and stress are posited to substantially alter the neurosignature in a negative manner, 27,28 whereas targeted therapies such as manual therapy and therapeutic exercise can alter it in a positive manner. In the CAI model, the neurosignature represents the neural patterns unique to the individual patient that influence sensory and emotional perception and motor function. A patient's neurosignature acts as a continuous modifier of the perception-action cycle.
How Do the Component Interactions Work Together in Patients With Ankle Sprain or CAI?
Acute injury to the lateral ankle ligaments produces specific pathomechanical impairments related to ligamentous and, potentially, other tissue damage around the ankle. The injury also initially triggers sensorimotor changes via inflammatory and pain mediators that result in specific sensory-perceptual and motor-behavioral impairments. How a patient responds to these impairments influences his or her perception of the injury and behavior, including motor output, in the presence and aftermath of the injury. An individual's personal factors, such as a history of musculoskeletal injury and level of self-efficacy, will affect perceptions and behaviors. Environmental factors, such as social support and expectations for the patient to fulfill defined roles relative to home, family, work, or sport, further influence the individual's perceptions and behaviors in response to injury. Physiological responses to injury mediated by inflammatory, neurologic, and hormonal processes produce local changes at the site of injury, such as edema, and in the central nervous system, such as neuromuscular inhibition in the injured limb. Neuroendocrine responses to injury, including the release of stress hormones, further influence the patient's perception of injury and movement. Together, these factors and processes affect the flow of afferent and efferent neural signals that constitute the patient's neurosignature.
Before injury, a person's neurosignature is in a state of homeostasis. Injury, such as an acute ankle sprain, leads to an immediate change in the neurosignature in response to tissue damage, inflammation, and stress. This initial change in the neurosignature is protective in nature. Patients who recover quickly after an acute ankle sprain are able to restore their neurosignature to preinjury homeostasis as injury symptoms are eliminated and sensorimotor function is restored. In contrast, patients who are unable to reset their neurosignature soon after injury may develop chronic symptoms and altered movement patterns.
Relative to the neuromusculoskeletal system, perception-action cycles are at the crux of an individual's neurosignature. Action in the form of motor output is a product of self-organization. Acute injury and subsequent manifestations of that injury create impairments that impose organismic constraints on movement strategies. Movement, however, is endemic to the human condition, and the body will self-organize to find a motor strategy that circumvents organismic constraints to accomplish the tasks that one deems necessary. For example, a patient who lacks 10° of ankle dorsiflexion is still able to walk but must use motor strategies that bypass the organismic constraint of restricted ankle dorsiflexion. This movement solution introduces unfamiliar signals into the nervous system, thereby producing unaccustomed perception-action cycles. Left unabated, this movement strategy can become the preferred motor output. Failure to address specific impairments postinjury can lead to longstanding constraints that normalize altered movement patterns, resulting in chronically altered perception-action cycles and a neurosignature that predisposes an individual to recurrent episodes of the ankle giving way and ankle sprains. Clinicians are thus encouraged to not only address patient-specific impairments during rehabilitation but to also emphasize perception-action processes in an effort to return the patient's neurosignature to a condition of healthy homeostasis.
Spectrum of Clinical Outcomes
We propose a spectrum of clinical outcomes that ranges from copers on the positive end to CAI on the negative end. (Figure 1 displays negative outcomes to the left and a positive outcome to the right of the clinical outcome spectrum.) The outcome is meant to be determined more than 12 months after the initial ankle sprain, as deficits during the first year would not be deemed chronic.
A coper is defined as an individual who is more than 12 months removed from the index ankle sprain, has incurred no recurrent ankle sprains, reports no or very minimal symptoms or deficits in self-reported function, and perceives a full recovery. 160 The goal of clinicians in treating a patient with a first-time ankle sprain should be to produce an outcome in which the patient becomes a coper. We assert that empirical measures to define a coper should include no ankle pain at rest or during physical activity; self-reported function scores greater than 95% on both the FAAM-ADL and -Sports subscales; a CAIT score of 28 or higher; an IdFAI score of 10 or lower; and no recurrent ankle sprains or perceptions of the ankle giving way. It should be noted that copers may have some identifiable residual impairments, such as increased laxity 35; however, these impairments do not adversely affect the function or perception of the patient's ankle. The long-term consequences of these residual impairments are unknown at this time.
Ideally, an ankle-sprain patient becomes a coper without changing the type or volume of physical activities that he or she participated in preinjury. If a patient is asymptomatic but has altered physical activities because of the ankle, that cannot be considered a full recovery and the patient is not a true coper. Some patients choose to alter their physical activity to avoid symptoms or recurrent sprains. Although the outcomes of these patients are on the more positive side of the spectrum, a full recovery has not occurred because of the patient's failure to return to the preinjury level of physical activity. Moving in a negative direction on the outcome spectrum, the increasing frequency of ankle giving-way episodes and the frequency and severity of symptoms such as pain, swelling, and weakness are associated with poorer outcomes, as are recurrent ankle sprains. Repeated episodes of giving way and recurrent ankle sprains are likely to produce further secondary tissue damage, thus resulting in additional pathomechanical impairment. This is represented on the model by the dashed arrow between the outcome and the pathomechanical impairment circle. This "new" secondary tissue damage can then further exacerbate sensory-perceptual and motor-behavioral impairments, creating a cyclical condition associated with a poorer outcome.
On the most negative end of the outcome spectrum is the clinical designation of CAI, which is characterized by a patient who is more than 12 months removed from the initial ankle sprain; has a propensity for recurrent ankle sprains; and experiences frequent episodes or perception of the ankle giving way, as well as persistent symptoms such as pain, swelling, diminished ROM, weakness, and reduced self-reported function. We recommend that empirical measures to define CAI should include a CAIT score of 24 or lower, an IdFAI score of 11 or higher, and self-reported function scores of less than 90% on the FAAM-ADL and less than 80% on the FAAM-Sport. At present, we are unable to recommend specific diagnostic thresholds for other impairment categories.
APPLYING THE MODEL TO RESEARCH AND CLINICAL PRACTICE
The aims of the updated CAI model are to serve as (1) a paradigm for the current state of the science regarding the causes of CAI and (2) a framework to aid clinicians in managing patients with LASs or CAI. With respect to the first aim, we acknowledge that the updated model of CAI, while based on our synthesis of the current research, is theoretical. Like previous models of CAI, this model needs validation and refinement through continued research. In particular, little is known about the relationships between specific impairments and how these relationships affect clinical outcomes.
To accomplish the second aim, we recommend the application of the Donovan and Hertel assess-treat-reassess paradigm 161 and the International Ankle Consortium rehabilitation-oriented-assessment approach, 162 which explicitly link the identification of specific impairments during clinical assessment with corresponding treatment goals for rehabilitation. During the assessment of patients with ankle injuries, clinicians should routinely try to identify the source of the primary tissue injury and evaluate specific pathomechanical, sensory-perceptual, and motor-behavioral impairments by taking a thorough injury history and performing a comprehensive physical examination. Clinicians are also encouraged to look not just at the composite scores of questionnaires used to assess perceived ankle instability, pain, kinesiophobia, self-reported function, and HRQOL but also at the individual item responses on these survey instruments to identify patient-specific complaints and impairments. These findings should then be used to guide the development of rehabilitation goals and treatment decisions.
Not all patients will exhibit evidence of each specific impairment in the model. Each patient will present with a unique combination of impairments. As such, rather than applying a uniform rehabilitation protocol to all patients with LASs or CAI, clinicians should tailor a specific rehabilitation plan for each person based on the unique set of impairments identified during assessment. The targeted rehabilitation plan should address the patient's unique collection of impairments in an effort to modify the neurosignature that is driving the cyclical nature of the condition and shift the patient's outcome toward the positive (coper) side of the outcome spectrum. As illustrations, we have created 3 hypothetical patients, each representing a unique collection of impairments within the CAI model and requiring a uniquely targeted rehabilitation approach.
Patient 1 is a 15-year-old female high school basketball player who has sustained 3 LASs in the past 12 months (Figure 2). Her outcome is CAI, as evidenced by multiple recurrent ankle sprains. Her specific impairments, as identified on clinical examination and represented by enlarged circles and text in the figure, include the pathomechanical impairments of secondary tissue damage and pathologic laxity; the sensory-perceptual impairments of diminished somatosensation and heightened kinesiophobia; and the motor-behavioral impairments of neuromuscular inhibition, muscle weakness, and altered movement patterns. The repetitive ankle sprains and subsequent impairments have negatively affected her neurosignature, resulting in substantial neuromuscular dysfunction. This patient is likely to respond favorably to a rehabilitation approach that includes ankle taping or bracing during physical activity to address her ankle laxity and a therapeutic exercise program aimed at improving somatosensation, muscle activation, and strength; restoring functional movement patterns; and reducing her kinesiophobia.
Figure 3. Adaptation of the model to illustrate the specific impairments of a 35-year-old male construction worker who has chronic ankle instability (CAI). The enlarged circles and text indicate specific impairments that are contributing to his condition and health status. Abbreviations: BMI, body mass index; HRQOL, health-related quality of life.
Patient 2 is a 35-year-old male construction worker who incurred a severe ankle sprain 2 years ago and now has a primary complaint of his ankle giving way several times per week (Figure 3). His outcome is CAI as characterized by repeated episodes of giving way and considerable perceived instability. Upon examination, his specific impairments include arthrokinematic restrictions and perceived instability and deficits in somatosensation, reflex responses to unexpected inversion, and static and dynamic balance. The repeated episodes of the ankle giving way and subsequent impairments have negatively affected his neurosignature, resulting in neuromuscular dysfunction. An important environmental factor that may influence the patient's perception of the injury is that it was work related and subject to Workers' Compensation. This patient is likely to respond favorably to a rehabilitation program that includes manual therapy focused on passive accessory joint mobilizations to address specific arthrokinematic restrictions and a therapeutic exercise program aimed at improving somatosensation, reflexive control of the ankle, and postural control in an effort to lessen his perceived ankle instability.
Patient 3 is a 22-year-old graduating collegiate student-athlete who had a severe ankle sprain 4 years ago and a mild recurrent ankle sprain 9 months ago (Figure 4). She is no longer playing competitive sports and has no plans to do so in the future, partly because of her history of ankle and knee injuries. Because her ankle is not symptomatic when she does not participate in sport, she has dramatically reduced the amount of physical activity in which she participates. Although her symptoms do not warrant a diagnosis of CAI, she clearly has not had a full recovery and cannot be classified as a coper.
As such, her outcome has moved away from the most positive end of the outcome spectrum to indicate that she is asymptomatic because she has substantially altered her physical activity level. The figure depicts a few specific impairments identified by larger circles and text, but these are not of the same magnitude as those seen in patients 1 and 2. Patient 3's outcome could be characterized as a subclinical condition, and she would benefit from addressing her specific impairments to increase her overall level of physical activity.
These 3 examples are presented for illustrative purposes only. Clinicians must be vigilant in assessing each individual patient and developing a holistic plan of care that addresses the primary condition and identified impairments along with the relevant component interactions, personal factors, and environmental factors. Validation of the clinical application of the CAI model is also needed. At this time, an understanding of the interrelationships among specific impairment categories is lacking. Lastly, we assert that the model could serve as the framework for developing a clinical predictor rule to aid clinicians by identifying the characteristics of patients who are most likely to respond favorably (or unfavorably) to specific treatment approaches.
CONCLUSIONS
We have presented an updated model of CAI that aims to both synthesize the current understanding of the causes of CAI and serve as a framework for the clinical assessment and rehabilitation of patients with LASs or CAI. The model describes how primary tissue injury to the lateral ankle ligaments after an acute ankle sprain may lead to a collection of interrelated pathomechanical, sensory-perceptual, and motor-behavioral impairments that influence a patient's clinical outcome. Using the biopsychosocial model of health care as a foundation, the concepts of self-organization and perception-action cycles, derived from dynamic systems theory, and a patient-specific neurosignature, stemming from the Melzack neuromatrix of pain theory, are incorporated to describe these interrelationships.
"year": 2019,
"sha1": "e77722bbbd9c7d2951b31d99910baf260029e836",
"oa_license": null,
"oa_url": "https://meridian.allenpress.com/jat/article-pdf/54/6/572/2369325/1062-6050-344-18.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "61944ff5452ffd5757e0f2db4dde35d5523cb607",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232313388 | pes2o/s2orc | v3-fos-license | Opportunities and Challenges for Improving the Productivity of Swamp Buffaloes in Southeastern Asia
The swamp buffalo is a domesticated animal commonly found in Southeast Asia. It is a highly valued agricultural animal for smallholders, but the production of this species has unfortunately declined in recent decades due to rising farm mechanization. While the swamp buffalo still plays a role in farmland cultivation, its purpose has shifted from draft power to meat, milk, and hide production. The current status of swamp buffaloes in Southeast Asia is still understudied compared to that of counterparts such as riverine buffaloes and cattle. This review discusses the background of the swamp buffalo, with an emphasis on recent work on this species in Southeast Asia and associated genetics and genomics work such as cytogenetic studies, phylogeny, domestication and migration, and genetic sequences and resources. Recent challenges to realizing the potential of this species in the agriculture industry are also discussed. The limited genetic resources for swamp buffalo have called for more genomics work to be done on this species, including decoding its genome. As the economy progresses and farm mechanization increases, research and development for swamp buffaloes is focused on enhancing productivity through understanding the genetics of agriculturally important traits. The use of genomic markers is a powerful tool to efficiently utilize the potential of this animal for food security and animal conservation. Understanding its genetics while retaining and maximizing its adaptability to harsher environments is a strategic move for food security in poorer nations in Southeast Asia in the face of climate change.
INTRODUCTION
The majority (~97%) of the world's 207 million buffaloes are found in Asia, of which about 20.51% are swamp buffaloes (FAOSTAT, 2018). There are two types of water buffalo: swamp buffaloes and river buffaloes. Swamp buffaloes are mainly found in China and Southeast Asian countries, and river buffalo populations are larger than swamp buffalo populations. The two types differ in chromosome number, phenotypic characteristics, and the geographical locations where they are usually found (Degrandi et al., 2014; Colli et al., 2018; Zhang et al., 2020).
Swamp buffaloes in Southeast Asia are raised by smallhold farmers because of their powerful draft capacity (OECD, 2017). This animal is utilized mostly for land cultivation, though it also provides milk, meat, hide, and horn, which are additional income sources for farmers. However, due to increased farm mechanization, the swamp buffalo has declined in value and its production has decreased by 4.92% over the last two decades (FAOSTAT, 2018). While the swamp buffalo still holds a significant role in farmland cultivation, the purpose of this animal has shifted from draft power to meat and milk production. One way to address the decline in production of swamp buffalo is to use genomic markers to selectively breed this animal for food security and conservation. Many countries in Southeast Asia have only started their breeding programs for swamp buffaloes in recent decades. Genetic improvement for buffalo in Thailand started in 1979 through its Department of Livestock Development. Genetic evaluation procedures, such as the use of estimated breeding values (EBVs), were conducted as part of the selection criteria for superior swamp buffaloes (Sanghuayphrai et al., 2013). Although genetic evaluation procedures are used in Thailand, breeding improvement and disease prevention are still lacking in some buffalo herds, leading to low productivity and highlighting the need for upgraded buffalo management (Koobkaew et al., 2013; Sapapanan et al., 2013; Suphachavalit et al., 2013).
In the Philippines, a centralized research agency, the Philippine Carabao Center (PCC), was established in 1992 to strengthen research and development on the Philippine carabaos. The PCC has several programs, such as the nationwide dispersal of semen for artificial insemination and bull loan programs, to upgrade buffaloes (Cruz, 2015). Crossbreeding of the two types of water buffalo was carried out to improve the efficiency of the animal, as the progeny showed increased body weight and milk production compared to local swamp buffaloes. However, the crossbred progeny showed a decline in reproductivity, and hence backcrossing with a purebred swamp- or river-type was done to produce a ¾ Philippine swamp-type for draft power or a ¾ river-type for dairy, respectively (Salas et al., 2000; Cruz, 2015). Genetic evaluation has also been done to select elite animals to improve milk traits in Philippine dairy buffaloes (Herrera et al., 2018).
While there is no centralized agency exclusively for the development of water buffaloes in Malaysia, Indonesia, and Vietnam, regional efforts have been carried out to increase the performance of buffaloes in terms of reproductive performance, weight gain, and meat and milk production (Suryanto et al., 2002; Othman, 2014; Ariff et al., 2015). Buffalo management in Indonesia still follows the traditional approach, and poor breeding plans have led to inbreeding within the population and low productivity (Komariah et al., 2020). Despite breeding inefficiency, buffalo rearing by smallhold farmers is expected to contribute to the development of the dairy industry in Indonesia. Vietnam produced and consumed more buffalo meat than beef; however, limited research resources have hampered its intensified breeding program and buffalo development (Nguyen, 2000).
CYTOGENETICS, PHYLOGENY, DOMESTICATION, AND MIGRATION
River and swamp buffaloes have 50 and 48 chromosomes, respectively. Although their chromosome numbers are dissimilar, these two sub-species can produce fertile offspring when crossed, which inherit 49 chromosomes due to the preserved characteristics of the chromosome arms (Degrandi et al., 2014). However, reproductivity is decreased in the hybrid progeny (Harisah et al., 1989; Borghese, 2011). This difference in chromosome number between the swamp and river buffalo is due to a tandem fusion translocation between river buffalo chromosomes 4 and 9 and swamp buffalo chromosome 1 (Di Berardino and Iannuzzi, 1981; Harisah et al., 1989), which was later confirmed when the swamp buffalo genome assembly was made available (Luo et al., 2020). Studies on the karyotypes of swamp buffaloes originating from the Philippines, Thailand, Malaysia, and Brazil showed conflicting results on the positions of the centromeres, but all agreed that the species has 48 chromosomes (Bondoc et al., 2002; Supanuam et al., 2012; Degrandi et al., 2014; Shaari et al., 2019). There are at least two possible reasons for the differences in centromere positions: (1) different methods were used in the cytogenetic studies (e.g., the addition of alcohol might have affected the arrangement of the chromosomes) and (2) determination of each chromosome's centromere location is subjective. Further investigation using a standardized method is needed to confirm the typical karyotype of swamp buffaloes.
Both the river- and swamp-type have the same ancestral origin in the wild Asiatic buffalo, Bubalus arnee (Cockrill, 1981). There is genetic separation between the two types of water buffalo (Figure 1), and the divergence between them is higher than that observed between cattle subspecies (Yindee et al., 2010). Interestingly, comparison between river- and swamp-type buffaloes showed higher genetic variation within swamp populations despite the homogeneous characteristics of their phenotypes and the small number of breeds (Zhang et al., 2016; Paraguas et al., 2018; Sun et al., 2020b).
Divergence of the water buffalo into river- and swamp-type is estimated to have happened between 10 Kya and 1.7 Mya, with the most probable periods being around 230 Kya or 900-860 Kya, based on overlapping events such as geographical changes and concurrence among multiple studies (Tanaka et al., 1996; Wang et al., 2017; Sun et al., 2020a).
Swamp buffalo during the post-domestication period followed two separate migration events from about 3,000 to 6,000 years ago in Asia. One was from the Indochina border spreading around mainland China to the Philippines, and the other was from the mainland Southeast Asia and Southwest China border disseminating down to Indonesia (Zhang et al., 2016; Wang et al., 2017; Colli et al., 2018; Sun et al., 2020b). There is a genetically distinct population of swamp buffaloes in Southeast Asia that is thought to have arisen from the founder effect (Zhang et al., 2016; Colli et al., 2018; Sun et al., 2020b). A rare haplogroup was found in Thailand by Sun et al. (2020b) using mtDNA D-loop sequences, which supported the hypothesis that the Thai buffalo population may have come from an ancestral lineage (Colli et al., 2018). Considering that the wild Asiatic buffalo still exists in some parts of Thailand (Sarataphan et al., 2017), the ancestor of the water buffalo may have also originated in mainland Southeast Asia (Lau et al., 1998).
GENETIC SEQUENCE AND RESOURCE AVAILABILITY
The whole genome sequence of a Mediterranean breed river buffalo (UMD_CASPUR_WB_2.0) was released in the NCBI in 2013 and published 4 years later (Williams et al., 2017; Table 1). A 90K SNP Buffalo Genotyping Array (Iamartino et al., 2013) has been available for use by researchers in the past few years; however, the SNP panel was created using a cattle reference genome (UMD3.1). The disadvantage of using this SNP panel for water buffalo is that it represents only 75% and 24.5% of the high-quality, known polymorphic SNPs of river- and swamp-type buffaloes, respectively. The majority of the samples used in the SNP validation belonged to river buffalo, and hence a specific SNP panel for swamp buffaloes is recommended, since they are underrepresented in the 90K SNP panel (Iamartino et al., 2013; Colli et al., 2018). Despite the limitation of missing some water buffalo-specific SNPs, the genotyping array is still useful for genomic studies in river buffaloes, but its usefulness remains limited in swamp buffalo (Herrera et al., 2016). The river buffalo assembly based on the same animal used to create UMD_CASPUR_WB_2.0 was recently upgraded using long-read sequencing for contig assembly and chromatin conformation capture technologies for scaffolding. The final assembly is called UOA_WB_1 (Low et al., 2019) and is the best representative assembly of the river buffalo based on contiguity metrics such as contig N50 (Table 1). The next assembly upgrade for the river buffalo will be a completely haplotype-resolved genome, as demonstrated in cattle (Low et al., 2020). There are eight river buffalo assemblies but only one swamp genome assembly (Luo et al., 2020) in the literature and public databases.
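Contig N50, the contiguity metric cited above, is the contig length L at which contigs of length at least L together cover at least half of the total assembly length. A minimal sketch (the toy contig lengths below are hypothetical):

```python
def contig_n50(lengths):
    """Contig N50: the length of the contig at which the cumulative
    length of contigs, sorted longest first, reaches half the total."""
    total = sum(lengths)
    covered = 0
    for length in sorted(lengths, reverse=True):
        covered += length
        if covered >= total / 2:
            return length
    return 0

# Toy assembly of six contigs (lengths in bp, hypothetical)
print(contig_n50([10, 8, 5, 3, 2, 1]))  # 8
```

Longer contigs reach the halfway point sooner, so a more contiguous assembly has a larger N50, which is why the metric is used to compare assemblies such as UOA_WB_1 and GWHAAJZ00000000.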
Besides genome assemblies and the SNP panel, there are transcriptome resources that were used to create a large-scale gene expression atlas for the river buffalo, as well as a catalog of 3 million intestinal microbial genes from both buffalo and cattle (Williams et al., 2017; Zhang et al., 2017; Young et al., 2019).
COMPARISONS BETWEEN RIVER AND SWAMP BUFFALOES
The latest river buffalo reference assembly (UOA_WB_1) is approximately 2.5 times more contiguous than the best swamp buffalo assembly (GWHAAJZ00000000) based on contig N50. Both of these assemblies benefited from long-read PacBio sequencing to preserve assembly continuity, and scaffolding with Hi-C reads helped to produce chromosome-scale scaffolds. However, despite the availability of an impressive genome assembly, only about 0.76% of the submitted water buffalo nucleotide sequences in GenBank (https://www.ncbi.nlm.nih.gov) were from swamp buffaloes as of January 2021. River buffalo sequences represent the majority of water buffalo sequences in the public database. Additionally, there were only 17 genes submitted for the swamp-type, excluding the annotation from the recent swamp genome (Luo et al., 2020), which is a few orders of magnitude lower than the ~35,000 genes submitted for river-type buffaloes. Genomic regions that may be under selection have been analyzed in both swamp and river buffaloes. Interestingly, swamp buffaloes showed signs of selection for docile behavior, muscle development, and fatigue resistance (Luo et al., 2020; Sun et al., 2020a). Among the genes under selection, HDAC9 was found to be associated with muscle development in other species (Mei et al., 2019; Sun et al., 2020a). The Luo et al. (2020) study on the swamp buffalo genome also showed expansion of the AMD1 gene, which promotes muscle growth. This suggests the possibility of prospecting swamp buffaloes as a meat resource. Two critical starch digestion-enzyme genes, AMY2B and SI, were also identified that make this species unique among ruminants, which may suggest a new mechanism for adapting to rumen acidosis (Luo et al., 2020).
Signatures of selection in river buffaloes showed overrepresentation of genes associated with immune response, milk production, growth, and feed efficiency, which may be due to selection for milk production (Luo et al., 2020; Sun et al., 2020a). Among the genes identified, the thyroglobulin gene was associated with milk and meat quality traits and was found to be a good candidate gene marker for meat marbling and milk fat percentage (Gan et al., 2008; Dubey et al., 2015).
Genetic variations in DGAT1, MUC1, INSIG2, and GHR in both river and swamp buffaloes were also associated with milk components, milk yield, and mastitis resistance, which are potential candidates for genetic selection (Deng et al., 2016;Li et al., 2018;da Rosa et al., 2020;El-Komy et al., 2020).
CHALLENGES AND OPPORTUNITIES
While Southeast Asian countries are experiencing improvements in agricultural productivity, productivity still remains relatively low (OECD, 2017). Considering the limited number of available genetic sequences and studies of swamp buffalo, research funding allocation for this animal appears low compared to that for other bovine species. Countries in Southeast Asia should take a more progressive approach to studying the animal through genome science. Given the limited budget for research and development, this may be challenging, as the costs of genomic research are high. Nevertheless, the trend of smaller farm sizes, increases in population and the effects of climate change, as well as agricultural innovations and developments, will likely push swamp buffalo farming toward intensified, profitable, and efficient farming (OECD, 2017).
Incorporation of genomic selection in genetic improvement programs has proven successful in dairy cattle and other livestock species, but it is usually carried out in large-scale breeding programs with intensive breeding selection (Sonstegard et al., 2001; Miller, 2010; Dekkers, 2012; Xu et al., 2020). In contrast, local breeds are usually farmed in smaller populations and remain inferior in terms of productivity. Although the incorporation of genome science will maximize genetic gains, and hence increase productivity and income, the costs are relatively higher on a per-animal basis (Iamartino et al., 2013; Biscarini et al., 2015). Despite the opportunities in breeding swamp buffaloes, economic constraints in smallhold farming remain a challenge for large-scale and cost-effective genetic improvement programs (Biscarini et al., 2015; El Debaky et al., 2019). Nonetheless, the improvement of breeding stock through EBVs and proper management has shown a significant increase in milk production in the Philippines, which demonstrates the value of systematic breeding programs for dairy buffalo (Flores et al., 2007). Rural farmers have seen buffalo rearing as a less risky source of income compared to recurrent crop failures due to calamities such as typhoons and droughts (Escarcha et al., 2020). For example, through support from government and organized groups, buffalo rearing holds the promise of enabling sustainable living for smallhold farmers in the Philippines (Del Rosario and Vargas, 2013).
Genome editing (GE) technologies use zinc-finger nucleases, transcription activator-like effector nucleases, and clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 to produce animals with economically important traits. GE has been used in livestock species to produce polled (i.e., hornless) cattle (Young et al., 2020), mastitis-resistant cows through insertion of a lysozyme gene (Liu et al., 2014), and enhanced wool quality in goats and sheep by altering their FGF5 gene (Li et al., 2017, 2019). The GE system has also been used to edit the swamp buffalo GDF8 gene in a cell line; GDF8 encodes myostatin, which inhibits muscle development and differentiation (Su et al., 2018; Lee et al., 2020). Gene knockout of GDF8 can increase meat production in cattle, goats, and sheep, as double muscling was observed in experimental animals (Proudfoot et al., 2015; He et al., 2018; Wu et al., 2018; Ding et al., 2020). Examples of GE in water buffalo are limited, but the opportunity to use this technology to enhance their economic traits remains to be explored. The applications of GE in livestock need to adhere to ethical standards and regulatory policies (McFarlane et al., 2019) that vary between countries. For example, the hornless cattle created using GE tools by the company Recombinetics were meant to proceed further in Brazil, but the plan was abandoned when unintended integration of plasmid was found in the edited animals (Molteni, 2019; Norris et al., 2020). AquAdvantage salmon and GalSafe pigs are the only genetically modified animals approved for food, specifically in the United States and Canada (FDA, 2020). In the Asia-Pacific region, it is unclear whether livestock made using GE technologies will be acceptable in the near future (FAO, 2019).
Precision livestock farming (PLF) incorporates artificial intelligence technology to automatically monitor and manage animal production, predicts solutions for problems that may arise on the farm, and uses deep learning for genomic prediction (Banhazi et al., 2012; Pérez-Enciso and Zingaretti, 2019; Tullo et al., 2019). PLF assists large farms to be economically and environmentally sustainable; however, the cost of PLF still outweighs its efficiency for smallhold farmers (Hostiou et al., 2017; Carillo and Abeni, 2020). Genomic prediction using deep learning requires large datasets that are currently unavailable for the swamp buffalo. While PLF should be embraced in Southeast Asia, its high cost means that its application to swamp buffalo farming remains infeasible in the near future. Microbiome analysis of swamp buffaloes showed intrinsic differences from cattle microbiota that might explain the buffalo's efficiency in digesting fibers (Iqbal et al., 2018). Rumen manipulation to reduce methane emission is also of interest in livestock management, as it decreases the environmental impact of livestock production (Ungerfeld, 2018). In large-scale farmed populations, besides rumen-related measurements, there are other low-cost proxies, such as body weights and high-throughput milk mid-infrared spectra, that are also suitable for monitoring methane emission (Negussie et al., 2017). Management and genetic improvement of swamp buffalo based on a combination of these proxies may lead to production animals with a smaller environmental footprint (Negussie et al., 2017; Ungerfeld, 2018).
With the increasing demand for food and mechanization in farming, swamp buffalo should be bred for meat and milk production through wide-scale or institutionalized development programs (Palacpac, 2010; Cruz, 2013). Buffaloes are well suited to the tropical climate of Southeast Asia, and thus there is potential in upgrading local buffaloes to maximize milk production, which cannot easily be done with species maladapted to hotter and more humid climates. Although swamp buffaloes are still susceptible to heat stress (Upadhyay et al., 2007; Rojas-Downing et al., 2017), their wallowing behavior and adaptability to warm conditions give them an advantage in hotter climates (Nardone et al., 2010).
CONCLUSION
The potential of swamp buffaloes in food production is still untapped, and genome research to increase their production is still limited. Understanding the capabilities of this species through a genomic approach can increase its productivity and benefit farmers in the long run. The availability of a high-quality swamp buffalo assembly is a leap forward in swamp buffalo genome science, because it opens up opportunities for technological advancement such as the creation of SNP panels specific to swamp buffalo for genetic improvement, diagnosis of diseases, and the study of genetic diversity. Although the cost of genomics is high and remains a challenge for developing countries in Southeast Asia, the opportunities to improve this animal for milk and meat production and animal conservation remain to be explored. With the rapid progress of technology and changing climates, rearing swamp buffaloes is a strategic option to increase smallhold farmers' income. Breeding the animals through genomic selection is a good strategy to select meat- and milk-type swamp buffaloes while retaining their adaptation to hotter, humid climates.
AUTHOR CONTRIBUTIONS
All authors contributed to the conception of the study and manuscript revision, and all read and approved the submitted version. PP wrote the first draft of the manuscript.
FUNDING
Publishing fee for this article review is funded by the Research and Development Division, Philippine Carabao Center. | 2021-03-23T13:19:56.699Z | 2021-03-22T00:00:00.000 | {
"year": 2021,
"sha1": "6eda94c9cb79330453d40e69ae280e13f8eb7a83",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2021.629861/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6eda94c9cb79330453d40e69ae280e13f8eb7a83",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
79938784 | pes2o/s2orc | v3-fos-license | The method for glomerulations detection in histological images of prostate
In the work presented, a method for detecting glomeruli in pictures of histological preparations of the prostate gland is described, the presence of which indicates a malignant neoplasm. Pathological structures at the level of microimages are investigated. The developed method is the result of joint activity of the National Research Nuclear University "MEPhI" and the Moscow State Medical and Stomatological University named after A.I. Evdokimova.
Introduction
Globally, prostate cancer ranks sixth in oncological morbidity, and third among men. Crucial challenges today are the development and implementation of automated software systems to diagnose prostate cancer. Solutions to these challenges could allow us to automate the diagnosis of cancer, increase its accuracy, and therefore accelerate decision-making regarding methods of medical intervention. This pathology is diagnosed annually in more than half a million people, or about one tenth of all oncological diseases in men [1]. At present, there is no existing counterpart to the method developed here for detecting glomeruloid structures.
The task of the research stage is to extract and describe characteristic features of the pathological structures, called glomeruli, and distinguish them from other objects in the histological specimens. Figure 1 presents examples of glomeruli. The formation of glomeruloid bodies is a rare pattern of cancer differentiation. They are made up of solidifying malignant cells, surrounded by crescent-shaped spaces, and then by the wall of an enlarged acinus or duct with a lining of cancerous epithelium. According to the Gleason grading system, a tumor with such differentiation generally corresponds to grade 4. In contrast, healthy cells produce glandular formations containing spaces of round or ellipsoidal shape. An example of benign glandular structures is shown in Figure 2.
Figure 2. Histological images of benign glandular structures
The purpose of the work is to develop and automate the procedure for detecting glomeruloid structures in the images of histological specimens of the prostate gland.
Materials and methods
Histological specimens of the prostate gland are subjected to detailed analysis. The average size of such images is 1300 x 900 pixels; the color depth is 24 bits.
A binarization method is used to extract the ducts of glandular structures (white spaces) contained both in areas of benign cellular formations and in glomeruli. In the binarization process, the original halftone image, having a certain number of brightness levels, is converted to a black-and-white image whose pixels take only two values, 0 and 1 [2]. Threshold processing of the image can be carried out in different ways. To use a binarization method, we convert the original three-channel (RGB) image into a halftone image with a color depth of 8 bits (256 semitones). Further, we apply the binarization method with an upper threshold: f'(m,n) = 0 if f(m,n) ≥ t, and f'(m,n) = 1 otherwise, where f(m,n) is the halftone level of the pixel with coordinates (m,n), f'(m,n) is the value (0/1) of the pixel as a result of binarization, and t is the threshold value. The threshold value t was determined empirically from the study of the histological specimens. Figure 3 shows the outcome of applying the binarization method. The black channels are the ducts of the prostate. Further, we extract the contours of the gaps for subsequent analysis of their shapes. In order to operate with contours, we represent them in the form of a Freeman chain code [3]. Figure 4 shows the result of the contour extraction operation.
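The paper's implementation is in C++ with OpenCV (see Results); for illustration, a minimal NumPy sketch of the halftone conversion and an upper-threshold binarization is shown below. The luminance weights, the sample threshold, and the convention that bright pixels (the white duct spaces) map to 0 (rendered black, matching the figure description) are assumptions, not details taken from the paper:

```python
import numpy as np

def to_halftone(rgb):
    """Convert a 24-bit RGB image to an 8-bit halftone (grayscale)
    image using standard luminance weights (an assumption; the paper
    does not specify its conversion)."""
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def binarize_upper(halftone, t):
    """Upper-threshold binarization: pixels at or above t (the bright
    duct spaces) map to 0, the rest to 1."""
    return np.where(halftone >= t, 0, 1).astype(np.uint8)

# Toy 1x2 image: one white pixel (a duct) and one dark pixel
rgb = np.array([[[255, 255, 255], [10, 10, 10]]], dtype=np.uint8)
print(binarize_upper(to_halftone(rgb), 200).tolist())  # [[0, 1]]
```

In the paper's C++ pipeline, the same effect could be obtained with OpenCV's grayscale conversion and inverted binary thresholding.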
Figure 4. Result of the contour extraction
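The Freeman chain code mentioned above stores a contour as a start pixel plus a sequence of 8-connected direction codes. A minimal decoder sketch follows; the direction convention used here (code 0 = east, proceeding counter-clockwise) is an assumption, since the paper does not specify one:

```python
# One common 8-connectivity convention (code 0 = east, counter-clockwise);
# the paper does not state its convention, so this mapping is an assumption.
FREEMAN_MOVES = {
    0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
    4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1),
}

def decode_chain(start, code):
    """Reconstruct contour pixel coordinates from a start point
    and a Freeman chain code sequence."""
    x, y = start
    points = [(x, y)]
    for k in code:
        dx, dy = FREEMAN_MOVES[k]
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

# A closed 2x2 square contour: two steps east, north, west, then south
square = decode_chain((0, 0), [0, 0, 2, 2, 4, 4, 6, 6])
print(square[0] == square[-1])  # True: the chain closes on itself
```

A closed contour is one whose decoded path returns to its starting pixel, which is the form needed for the shape analysis in the next step.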
Then we extract the crescent-shaped contours. Within the framework of the task set, the criterion for detecting such contours is as follows: if the center of mass of a contour does not belong to its inner area, then the contour is considered to be of crescent shape. To determine whether or not a point belongs to the contour-bounded area, the ray tracing method can be used. Assume that it is necessary to determine whether point A lies inside contour K. To do this, we draw a straight line to point A from some remote point. Along that direction there can occur zero or several intersections with the boundary of the contour: we enter the contour at the first intersection and exit from it at the second one. If we reach point A having crossed the contour boundary an odd number of times, then point A lies inside the contour; if an even number of intersections has occurred, then the point is outside contour K. In this case, point A is the contour's center of mass. As shown in Figure 5, straight lines to point A are drawn along four directions. Figure 6 shows the result of extracting crescent-shaped contours. The green rectangle contains a contour satisfying the above criterion; the blue rectangles contain the remaining contours.
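This crescent criterion (centroid outside the contour interior, tested by ray casting) can be sketched on polygonal contours. The single-ray even-odd rule and the vertex-mean centroid below are simplifications: the paper traces rays in four directions and works on pixel contours.

```python
def point_in_polygon(point, polygon):
    """Even-odd ray casting: cast a horizontal ray from `point` to the
    right and count boundary crossings; an odd count means inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def centroid(polygon):
    """Vertex-mean center of mass (a simplification of an area centroid)."""
    return (sum(p[0] for p in polygon) / len(polygon),
            sum(p[1] for p in polygon) / len(polygon))

def is_crescent(polygon):
    """The paper's criterion: a contour is crescent-shaped if its
    center of mass lies outside the contour's interior."""
    return not point_in_polygon(centroid(polygon), polygon)

# A convex square is not crescent-shaped; a C-shaped contour is,
# because its centroid falls in the notch, outside the interior
print(is_crescent([(0, 0), (4, 0), (4, 4), (0, 4)]))  # False
print(is_crescent([(0, 0), (3, 0), (3, 1), (1, 1),
                   (1, 3), (3, 3), (3, 4), (0, 4)]))  # True
```

For any convex contour the centroid is guaranteed to be inside, so the criterion only fires on strongly concave, crescent-like shapes.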
Results
For computer image processing, a software module with a minimal graphical user interface has been developed in C++ using the Qt library and the OpenCV computer vision library [4]; it allows loading images, applying the automatic analysis, and displaying the results (Figure 7).
Precision and recall are the metrics used in evaluating the majority of information extraction algorithms. The precision of the system within a class is the proportion of documents actually belonging to a given class among all documents that the system has assigned to this class. Recall is the proportion of documents assigned by the classifier to the class among all documents of this class in the test set.
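In detection terms, these metrics reduce to counts of true positives (TP), false positives (FP), and false negatives (FN). A minimal sketch; the counts below are hypothetical, chosen only to illustrate ratios close to those reported in the next paragraph:

```python
def precision_recall(tp, fp, fn):
    """Precision: fraction of detected objects that are correct.
    Recall: fraction of true objects that were detected."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts giving precision 63/70 = 0.9 and recall 63/90 = 0.7
p, r = precision_recall(tp=63, fp=7, fn=27)
print(round(p, 2), round(r, 2))  # 0.9 0.7
```

High precision with lower recall, as reported for this detector, means that most flagged contours really are glomeruli, but a noticeable share of true glomeruli are missed.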
Three-channel images of histological specimens of the prostate, sized 1,300 x 900 pixels with a color depth of 24 bits (8 bits per color component), were used as the test set. These images were produced by splitting the original image of 60,000 x 130,000 pixels into tiles. The number of images in the test set is 40. As a result of testing, the following values for precision and recall were obtained: Precision ≈ 90%, Recall ≈ 70%. | 2019-03-17T13:11:18.819Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "68dc2662b75414e1740e36adf9322b735c39cbfd",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/945/1/012010",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f9dc0b302f79ef009a1200ee2278ae8e9557fc1d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
215746234 | pes2o/s2orc | v3-fos-license | Research in Pregnant Subjects: Increasingly Important, but Challenging
Background: The number of pregnant women with medical comorbidities continues to increase. A large proportion of pregnant women are exposed to medications during pregnancy, but only a fraction of the medications used have been investigated during pregnancy with regard to benefits, risks, and doses. Methods: This article includes a review of potential deterrents and barriers to pregnant women enrolling in clinical research studies and the federal regulations governing enrollment of pregnant women in research. Results: Research in pregnant women has been hampered by concerns for liability, the complex physiology of pregnancy with changes related to stage of pregnancy, and federal regulations that deemed pregnant women a vulnerable population. While recent revisions to federal regulations have removed pregnant women from the classification of vulnerable population, regulations regarding consent requirements still limit women's ability to decide on participation in clinical trials. The Department of Health and Human Services established the Task Force on Research Specific to Pregnant Women and Lactating Women to help identify and reduce these barriers. Conclusion: While recognition of the need for more scientific knowledge on the effects of medications and other interventions in pregnancy is widespread, a number of barriers that hinder enrollment of pregnant women in clinical trials remain.
INTRODUCTION
In the United States each year, more than 6 million pregnancies and approximately 4 million births occur. 1 Up to 90% of women are estimated to be exposed to at least one medication during pregnancy, and at least 60% of women who ultimately give birth take one or more medications to treat a chronic medical condition or a condition that arises during the pregnancy. 2,3 Among the most common medical conditions for which women take medications during pregnancy are hypertension, diabetes, mental health disorders, and autoimmune conditions. Underlying chronic hypertension affects approximately 5% to 10% of the obstetric population, while diabetes, either pregestational or occurring during pregnancy, affects up to 10% of pregnant women. Because of the paucity of information on risks of medication exposure in pregnancy, nearly 50% of women are estimated to use an agent that has little information on its risk during pregnancy or in which animal data suggest the possibility of adverse human effect. 2,3 A disproportionately small number of the medications currently on the market have been investigated during pregnancy. Of medications introduced to the market between 1980 and 2000, 91% had undetermined fetal effects, 3% had some known fetal risk, and only 6% had no known fetal risks. 4 Very few medications on the market have specific indications for use during pregnancy or specific information about dosing in pregnancy. Further, many medications on the market with declared information on risks to a developing fetus are based on toxicity and teratology studies in animals that may not be predictive of human effects. [4][5][6]
CLINICAL TRIALS EXCLUSION AND LIABILITY CONCERNS
While the underlying reasons for the dearth of information on drug effects on the developing fetus are multifaceted, the primary explanation for the lack of information is the systematic exclusion of pregnant women from clinical trials. Although potential risks are the typical reason for excluding pregnant women, multiple women's health advocates have pointed out that without adequate information on the use of therapeutic agents during pregnancy, it is not possible to make an informed decision about the risk-benefit analysis, and the use of untested agents may ultimately expose far more women and their fetuses to potential harm than would be encountered with a well-designed clinical trial. [5][6][7] In the absence of pregnancy-specific data on medication dosing, the typical doses used for treatment in nonpregnant individuals may, when used in a pregnant woman, be insufficient for maternal benefit while still posing fetal risk.
The National Institutes of Health Office of Research on Women's Health (ORWH) was established in 1990 to address the overall lack of systematic inclusion of women in clinical research. When the Code of Federal Regulations governing human subjects research (45 CFR §46) was originally established, there was a presumptive exclusion of women from clinical research, largely because of the potential concern regarding pregnancy and inadvertent fetal exposure as the stories of thalidomide and diethylstilbestrol exposure were still prominent in public memory. The desire to prevent other tragedies like these led to the categorization of pregnant women as a vulnerable population. 8 The vulnerable classification was historically applied to populations that were considered unable to act as autonomous agents or whose voluntariness might be compromised, such as children, prisoners, and those with diminished mental capacity. 9,10 While the ORWH was established with a charter to address sex disparity in scientific knowledge in all fields of medicine, issues surrounding the lack of data on medication use in pregnancy, as well as medical comorbidities overall, have become increasingly apparent and warrant additional attention. In 1994, the Institute of Medicine issued a report stating that pregnant women should be presumed eligible for participation in clinical trials and should be excluded only if the trial offered no prospect of medical benefit to the pregnant woman or if the trial involved potential risk of significant harm to the fetus, either known or plausible. 11 Despite this report, little change was made to federal regulations, and the number of pregnant women involved in clinical trials, other than those addressing specific pregnancy issues, increased only minimally. 5,12,13
PHYSIOLOGY OF PREGNANCY
Although federal regulations and liability concerns have been major factors limiting inclusion of pregnant women in clinical research, the complex physiology of pregnancy has also been a deterrent. 14 A number of significant physiologic changes occur during pregnancy that can impact drug metabolism and action. The effective volume of distribution of pharmacologic agents is altered significantly by the expansion of plasma volume, typically 50% to 60%, during pregnancy. 15 Most pregnant women develop some degree of hypoproteinemia as gestation advances that alters free drug concentration and the potential therapeutic window. 16 Further, both absorption and clearance of pharmacologic agents are altered. Changes in gastrointestinal motility and gastric acidity affect rates of drug absorption depending on where a particular agent is absorbed. Increases in the glomerular filtration rate and hepatic enzyme activity, stimulated by hormonal effects, result in more rapid clearance of many pharmacologic agents. 15 All of these factors, as well as the fact that these physiologic changes evolve as pregnancy advances, combine to potentially alter the pharmacokinetics and pharmacodynamics of drugs during pregnancy. 16 These potential contributors to variations in drug effect make investigation of therapeutics during pregnancy not only more difficult, but also more expensive, leading many pharmaceutical companies and investigators to exclude or minimize the involvement of pregnant women in research.
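The pharmacokinetic consequences described above can be illustrated with simple one-compartment arithmetic. The sketch below uses entirely hypothetical numbers (the dose, volume of distribution, and clearance values are illustrative, not clinical data) to show why the same dose can yield a lower plasma concentration in pregnancy.

```python
import math

def peak_concentration(dose_mg, vd_liters):
    """Peak plasma concentration (mg/L) in a one-compartment model: C = dose / Vd."""
    return dose_mg / vd_liters

def half_life_hours(vd_liters, clearance_l_per_hr):
    """Elimination half-life: t1/2 = ln(2) * Vd / CL."""
    return math.log(2) * vd_liters / clearance_l_per_hr

# Hypothetical drug: 100 mg dose, baseline Vd = 40 L, CL = 5 L/h.
c_nonpregnant = peak_concentration(100, 40)      # 2.5 mg/L
# Plasma volume expansion (~50%) raises the effective Vd;
# increased glomerular filtration and hepatic enzyme activity
# raise clearance (an illustrative 40% increase is assumed here).
c_pregnant = peak_concentration(100, 40 * 1.5)   # ~1.67 mg/L

t_nonpregnant = half_life_hours(40, 5.0)
t_pregnant = half_life_hours(40 * 1.5, 5.0 * 1.4)

print(round(c_nonpregnant, 2), round(c_pregnant, 2))
print(round(t_nonpregnant, 2), round(t_pregnant, 2))
```

Note that because volume expansion and increased clearance rise together, the half-life may change little even as the peak concentration falls, which is one reason dose requirements in pregnancy cannot simply be inferred from nonpregnant data.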
FEDERAL REGULATIONS
Much of the concern regarding liability issues with research in pregnancy centers around the historic classification of women as a vulnerable population. The vague and restrictive wording of the federal regulations, with variable interpretation by both local institutional review boards and government agencies, contributes to liability concerns. [5][6][7] While historic events involving congenital anomalies associated with the use of medications have led to considerable liability concerns, more recent data on various medications in pregnancy, albeit limited, have not replicated such experiences, especially when appropriate animal models and pregnancy registries have been utilized. 17,18 And, as noted earlier, the cumulative risk to society is postulated to be lower with scientifically rigorous, carefully monitored clinical studies than with off-label or poorly informed use of medications as often happens in modern obstetric practice. [5][6][7]19 In 2001, the Department of Health and Human Services revised Subpart B of the Code of Federal Regulations, known as the Common Rule, to state that pregnant women and their fetuses could be included in research if specific criteria were met. 20 While the changes were intended to create a more inclusive approach, pregnant women were still often excluded from clinical research trials because of liability concerns, not only associated with fetal risk but also with meeting the necessary criteria for this vulnerable group of study participants for whom the federal regulations required special protections. The changes failed to create a presumption that pregnant women should be included in research.
In 2010, the ORWH held a scientific forum to begin to address some of the ethical and recruitment challenges associated with conducting clinical research in pregnant women. This meeting resulted in the development of a research agenda focused on expanding the knowledge base of medication use in pregnancy and a road map to begin to address ethical and liability concerns regarding research in pregnancy. 14 One of the discussion points arising from this meeting revolved around the classification of pregnant women as a vulnerable population. As defined in the Common Rule, vulnerability means "vulnerable to coercion and undue influence, in recognition that coercion or undue influence refers to the ability to make an informed decision about participating in research." 12 Using this definition, many argued that modern principles of medical ethics do not justify such classification because a pregnant woman should have the capacity to decide for herself whether or not to participate in research, as well as the capacity to protect the interests of the fetus. Further, because maternal benefit often results in fetal benefit by either improving overall maternal health or allowing pregnancy prolongation, the capability of the woman to decide is even more paramount. 3,5,6,11,14,[21][22][23][24][25] Some have posited that the only reason pregnant women should be considered a vulnerable population is because the systematic exclusion of pregnant women from clinical research has rendered them vulnerable to the inability to make informed decisions about medical therapies because of a lack of high-quality evidence. 25,26 In 2016, the American College of Obstetricians and Gynecologists (ACOG) published a document stating that (1) pregnant women have similar capacity for autonomy as nonpregnant women, (2) inclusion is in accordance with the ethical principle of justice after disclosure of all appropriate risks, (3) proscriptive contraceptive requirements for participation reduce patient autonomy, and (4) partner consent is unwarranted and ethically unjustified as it infringes on maternal autonomy. 21 With this background, ACOG asserted that pregnant women should no longer be considered vulnerable but should instead be considered scientifically complex because of the associated ethical and physiologic complexities.
Table. Code of Federal Regulations Requirements for Inclusion of Pregnant Women and Their Fetuses in Clinical Research
a) Where scientifically appropriate, preclinical studies, including studies on pregnant animals, and clinical studies, including studies on nonpregnant women, have been conducted and provide data for assessing potential risks to pregnant women and fetuses.
b) The risk to the fetus is caused solely by interventions or procedures that hold out the prospect of direct benefit for the woman or the fetus; or, if there is no such prospect of benefit, the risk to the fetus is not greater than minimal and the purpose of the research is the development of important biomedical knowledge which cannot be obtained by any other means.
c) Any risk is the least possible for achieving the objectives of the research.
h) No inducements will be offered to terminate a pregnancy.
i) Individuals engaged in the research will have no part in any decisions as to the timing, method, or procedures used to terminate a pregnancy.
j) Individuals engaged in the research will have no part in determining the viability of a neonate.
Note: Text is adapted from 45 CFR 46 Subpart B §46.204. 12
With momentum building to address these issues, Congress passed the 21st Century Cures Act in 2016 that directed the Secretary of Health and Human Services to establish the Task Force on Research Specific to Pregnant Women and Lactating Women. 27 The task force was charged with identifying gaps in knowledge of safe and effective therapies in pregnant and lactating women and advising the secretary on how to best address those deficiencies. The summary report identified a number of barriers and challenges to research in pregnant women, including regulatory, liability, resource, and investigator challenges. 13 A prominent point of discussion at early meetings focused on classification of pregnant women as a vulnerable population and potential changes to the Common Rule. As a result of the growing recognition of the need for better scientific knowledge regarding medical treatments during pregnancy, the Office for Human Research Protections changed the Common Rule to remove pregnant women as a vulnerable population.
The revised Common Rule did not, however, change the criteria, originally specified in 2001, that must be fulfilled for pregnant women and their fetuses to be included in clinical research (Table). The first three requirements endeavor to ensure that the risk-benefit analysis is considered and that research design focuses on minimizing potential risk. Prior to initiation of a clinical trial in pregnant women, the requirement of data in pregnant animals and nonpregnant women provides insight into the potential safety of the agent under study and helps gauge the degree of risk and putative benefit that might be recognized. These data inform decisions regarding the "prospect of direct benefit for the woman or the fetus" and help to determine if the risk is greater than minimal. These determinations of direct benefit and minimal risk are critical, as they form the foundation around which many of the additional requirements are framed.
For research with a potential direct benefit to the pregnant woman or to both the woman and her fetus, only consent from the pregnant woman is required. Similarly, if the research has no prospect of direct benefit to the pregnant woman or the fetus and the risk to the fetus is no more than minimal, research that will generate important medical knowledge can be conducted with the consent of the pregnant woman. In contrast, if the research has potential direct benefit only to the fetus, consent of both the mother and the father is required unless he is not available, he is incompetent, or the pregnancy resulted from rape or incest. This requirement for consent from both the mother and the father in the setting of pregnancy is in contradistinction from the consent requirement in pediatric research where consent of only a single parent is required when the research has no potential direct benefit to the child. The rationale for this discrepancy has been questioned as it raises the question of why a woman has capacity to consent for her child after birth but lacks such capacity prior to birth. 13 The final requirements focus on avoiding potential risk and conflicts associated with pregnancy termination and determination of viability.
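The consent requirements just described can be summarized as a small decision rule. The following sketch is an illustrative paraphrase of the requirements as characterized in this article, not a rendering of the regulatory text; the function name and flags are our own, and the regulatory exceptions (an unavailable or incompetent father, or a pregnancy resulting from rape or incest) are deliberately not modeled.

```python
def required_consents(benefit_to_woman: bool,
                      benefit_to_fetus: bool,
                      fetal_risk_minimal: bool) -> set:
    """Return who must consent under the scheme described in the text.

    - Direct benefit to the pregnant woman (alone or with the fetus):
      the woman's consent suffices.
    - Direct benefit only to the fetus: both mother and father must
      consent (regulatory exceptions not modeled here).
    - No direct benefit to either, but fetal risk is no more than
      minimal: the woman's consent suffices.
    """
    if benefit_to_woman:
        return {"mother"}
    if benefit_to_fetus:
        return {"mother", "father"}
    if fetal_risk_minimal:
        return {"mother"}
    raise ValueError("research not permissible under these criteria")

print(required_consents(benefit_to_woman=True, benefit_to_fetus=True,
                        fetal_risk_minimal=False))  # {'mother'}
print(required_consents(benefit_to_woman=False, benefit_to_fetus=True,
                        fetal_risk_minimal=False))
```

Encoding the rule this way makes the asymmetry the text questions easy to see: the father's consent is required only on the fetal-benefit-only branch, which has no analogue in pediatric research.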
A key point of contention for application of the Common Rule requirements revolves around the definitions of minimal risk and direct benefit. 28 Per 45 CFR §46.102.i, minimal risk is defined as "the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests." 12 Applying this standard to the fetus is problematic. What constitutes ordinary daily risk is highly variable, especially in the setting of a complex medical condition that places the fetus at risk every day, and is subject to considerable variability in interpretation. Similar challenges arise when attempting to apply the criterion of prospect of benefit to the mother or fetus, leading to variations in interpretation of how much benefit and how high a prospect is required to meet the standard.
Originally scheduled for implementation in 2018, the modified Common Rule did not go into effect until 2019, so the extent to which this change will increase the number of pregnant women involved in clinical research and augment our knowledge base is unknown. At later meetings of the Task Force on Research Specific to Pregnant Women and Lactating Women, task force members expressed that removal of the vulnerable population classification and required special protections would hopefully signal the desire for more research in pregnant women to local institutional review boards, other regulatory agencies, and investigators. 13 However, because other criteria specified in Subpart B of the Common Rule were not modified, concern remains about whether the above change will be enough to stimulate increased research in pregnant women and expand scientific knowledge regarding medication and other interventions in pregnancy.
CONCLUSION
With the increasing number of pregnant women with comorbid medical conditions, the need to expand our scientific knowledge and ensure participation of pregnant women in research is critical. Enhanced knowledge is required to better understand the risks and benefits of treatment to the mother and the fetus. The traditional approach to enrollment of pregnant women in research has been one of exclusion because of liability concerns and federal regulations subject to variable interpretations. Recent changes in federal regulation have tried to encourage a paradigm shift toward inclusion, but whether these changes will be enough to appreciably increase the number of pregnant women enrolled in clinical trials is unclear.
Correlation between Transient Hypotension and Exclusively Exercise-induced Symptoms of Two-to-One Atrioventricular Block
A 62-year-old woman with activity-dependent two-to-one atrioventricular block (2:1AVB) and a normal left ventricular ejection fraction was referred to our department for the evaluation of exclusively exercise-induced marked symptoms. The treadmill test helped establish a clear correlation between 2:1AVB and symptoms. The test results demonstrated that exercise-induced marked symptoms were attributed to abrupt transient hypotension combined with relative bradycardia, probably due to increased diastolic mitral and tricuspid regurgitation because of 2:1AVB during moderate-to-heavy exercise. After pacemaker implantation for 2:1AVB, the symptoms and transient hypotension disappeared, and her exercise capacity improved.
Introduction
Exercise-induced second-degree atrioventricular (AV) blocks are rare. However, they can cause profound exercise intolerance (1,2). Two-to-one atrioventricular block (2:1AVB) shows the lowest AV conduction rates among second-degree AV blocks, except for advanced AV block. This suggests that exercise-induced 2:1AVB, even with a normal left ventricular (LV) function, is rare and can cause profound exercise intolerance.
A 62-year-old woman with activity- and rate-dependent 2:1AVB and a normal LV function was referred to our hospital for the evaluation of exclusively exercise-induced marked symptoms. According to the current Japanese guidelines, if second-degree AV block, including 2:1AVB, is accompanied by clearly correlated symptoms, pacemaker implantation (PMI) is recommended as a Class I indication; under the current American guidelines, however, it is considered reasonable as at least a Class IIa indication (3, 4).
However, 2:1AVB is sometimes activity-dependent or exercise-induced, and its symptoms are also exercise-induced but do not always occur in the setting of 2:1AVB. Therefore, it is difficult to establish a clear correlation between 2:1AVB and symptoms based on daily clinical practice using a 12-lead electrocardiogram (ECG) or Holter monitoring. Furthermore, there are very few reports concerning the detailed mechanism underlying exclusively exercise-induced symptoms of 2:1AVB with a normal LV function.
These issues raise the concern that exercise-induced serious symptoms may not be resolved, even with PMI therapy, in 2:1AVB patients with changeable and transient symptoms. It is important to clearly correlate exercise-induced 2:1AVB with the changeable symptoms and to consider, prior to PMI, the detailed mechanism underlying the transient symptoms.
Case Report
A 62-year-old woman was referred to our hospital for the evaluation of shortness of breath on effort and 2:1AVB. There was neither a significant personal nor family history. She had been treated with long-term 2.5 mg amlodipine for hypertension. One month prior to the referral, she developed shortness of breath and easy fatigability upon ascending stairs. Thereafter, she developed various marked symptoms. These symptoms included shortness of breath and tension in both shoulders throughout a brisk daily walk, dizziness and dimmed vision while walking a distance of around 1 kilometer, and fatigue after her daily walk. Her previous ECGs at annual health examinations had shown complete right bundle branch block (CRBBB) for the past nine years. A resting 12-lead ECG at a neighboring hospital showed a normal axis, CRBBB, first-degree AV block, and 2:1AVB at a sinus rate of 94 beats per minute (bpm) (Fig. 1A). She had no symptoms during 2:1AVB on the resting ECG.
A physical examination revealed no abnormal findings at the first visit to our hospital. Laboratory data that included C-reactive protein and high-sensitive cardiac troponin I were within normal ranges, except for the serum levels of total cholesterol [229 mg/dL; (128-219 mg/dL)] and brain natriuretic peptide [71.18 pg/mL; (<18.4 pg/mL)]. Although a chest X-ray film showed mild cardiomegaly with a cardiothoracic ratio of 53.0%, echocardiography during sinus rhythm revealed a left ventricular ejection fraction (LVEF) of 69% with no LV dilatation or any other abnormal echocardiographic findings that included wall thinning or thickness, focal area of akinesia, and aneurysm, suggesting cardiac sarcoidosis or other myocardial diseases.
Holter monitoring in daily activities demonstrated 90,913 total heart beats during 22 hours and 2 minutes of recording, first-degree AV block, and two kinds of second-degree AV block: Wenckebach second-degree AV block and 2:1AVB (Fig. 1B-a, b). After starting 24-hour Holter monitoring, the sinus rate was around 100 bpm, and 2:1AVB continued in the daytime. The episodes of 2:1AVB that occurred during the daytime were rate- and activity-dependent. When the sinus rate increased to over 100 bpm during daily activities, the sinus rhythm changed transitionally to Wenckebach second-degree AV block and then to 2:1AVB. Conversely, when the sinus rate dropped below 100 bpm, 2:1AVB returned to Wenckebach second-degree AV block and then to sinus rhythm again. However, after 8:00 PM on the day Holter monitoring began, neither of the second-degree AV blocks developed again, even though the sinus rate transiently increased to about 110 bpm in the nighttime and up to 120 bpm the next morning. She manifested no symptoms during the 2:1AVB on Holter monitoring as well.
We then conducted a modified Bruce protocol treadmill test to investigate her symptoms for a more advanced AV block or ischemic heart disease and examine the correlation between the symptoms and the ECG findings.
When the sinus rate reached 107 bpm at 2 minutes and 30 seconds of exercise, Wenckebach second-degree AV block occurred (Fig. 2A-a). When the sinus rate went on to exceed 120 bpm at 4 minutes of exercise, it subsequently led to 2:1AVB (Fig. 2A-b), which continued until the endpoint of 7 minutes and 42 seconds of exercise due to the reproduced symptoms of dizziness and shortness of breath (Fig. 2B-a).
Figure 2 caption: As soon as Wenckebach second-degree AV block transitionally changed to 2:1AVB, abrupt hypotension and relative bradycardia appeared and continued to the endpoint. The exercise was terminated at the endpoint of 7 minutes and 42 seconds of exercise due to the reproduced marked symptoms, which were attributed to the simultaneous 2:1AVB and abrupt hypotension during moderate-to-heavy exercise before the endpoint. BP: blood pressure, HR: heart rate, min: minute, bpm: beats per minute, AV: atrioventricular, 2:1AVB: two-to-one atrioventricular block
From the moment that the Wenckebach second-degree AV block occurred during exercise, the ventricular rate began to decrease (Fig. 3). When the sinus rate exceeded 120 bpm and 2:1AVB developed, the ventricular rate rapidly decreased to nearly 60 to 70 bpm and remained around the same rate until the endpoint. While the 2:1AVB was occurring, the systolic blood pressure (BP) also decreased to nearly 90 mmHg and remained around the same level. During the exercise-induced bradycardia and transient hypotension due to 2:1AVB, more marked symptoms than usually expected from 2:1AVB alone were reproduced simultaneously without any more advanced AV block, and the exercise was terminated at the endpoint.
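The rate arithmetic here is straightforward: under 2:1 conduction, only every other P wave reaches the ventricles, so the ventricular rate is half the sinus rate. A minimal sketch (the Wenckebach 3:2 ratio below is illustrative only, since actual Wenckebach conduction ratios vary beat to beat):

```python
def ventricular_rate(sinus_rate_bpm: float, conducted: int, p_waves: int) -> float:
    """Ventricular rate when `conducted` of every `p_waves` P waves
    are conducted through the AV node (e.g., 2:1 block conducts 1 of 2)."""
    return sinus_rate_bpm * conducted / p_waves

# Sinus rate just over 120 bpm with 2:1 AV block, as in this case:
print(ventricular_rate(120, 1, 2))           # 60.0 bpm
# An illustrative Wenckebach 3:2 sequence at 107 bpm:
print(round(ventricular_rate(107, 2, 3), 1))
```

This matches the tracing described above: once the sinus rate exceeded 120 bpm, the ventricular rate fell abruptly into the 60 to 70 bpm range.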
Ischemic heart disease might be considered in the differential diagnosis or as a comorbid disease in patients with exercise-induced AV block (4). Our patient, however, did not have a history of coronary heart disease, and her treadmill test did not reveal any axis deviation (5), significant ST-T change (5, 6), or typical anginal symptoms except for shortness of breath and dizziness (Fig. 4). Furthermore, multidetector-row computed tomography showed an Agatston score of 0 in all coronary arteries, and coronary computed tomography angiography revealed no significant stenosis in the three major coronary arteries.
The 2:1AVB during moderate-to-heavy exercise clearly correlated with the simultaneous marked symptoms and transient hypotension. We thus implanted a DDD-mode pacemaker in this patient according to the PMI indication in the guidelines as well as from a hemodynamic viewpoint. The programmed pacemaker parameters were as follows: pacing mode, DDD without rate-response mode; lower rate limit, 55 bpm; maximum tracking rate, 130 bpm; both paced and sensed atrioventricular interval ranges, 120-350 ms; postventricular atrial refractory period range, 240-270 ms; and 2:1AVB response rate, ≥167 bpm.
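As a cross-check on these settings, the upper-rate behavior of a DDD pacemaker follows standard timing arithmetic: atrial events falling within the total atrial refractory period (TARP = AV interval + PVARP) are not tracked, so 2:1 tracking begins at roughly 60000/TARP bpm. The sketch below assumes the shortest programmed values from this case (AV interval 120 ms, PVARP 240 ms); it is a generic calculation, not device-specific documentation.

```python
def two_to_one_point_bpm(av_interval_ms: float, pvarp_ms: float) -> float:
    """Atrial rate above which a DDD pacemaker exhibits 2:1 tracking:
    60000 / TARP, where TARP = AV interval + PVARP (both in ms)."""
    tarp_ms = av_interval_ms + pvarp_ms
    return 60000 / tarp_ms

# Shortest programmed values in this case: AV interval 120 ms, PVARP 240 ms.
print(round(two_to_one_point_bpm(120, 240)))  # 167
```

The result is consistent with the programmed 2:1AVB response rate of 167 bpm reported above.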
A re-examination of the modified Bruce protocol treadmill test after PMI confirmed that no bradycardia, hypotension, or marked symptoms were reproduced during exercise because of the elimination of 2:1AVB. In addition, her exercise capacity improved to 10 minutes and 30 seconds of exercise until termination due to leg fatigue (Fig. 5). Her marked symptoms resolved and have not recurred to date after PMI.
Discussion
Isolated exercise-induced 2:1AVB without any other underlying diseases, including more advanced AV block, myocardial ischemia, and ventricular dysfunction, is a rare entity among uncommon exercise-induced second-degree AV blocks and can cause profound exercise intolerance (1, 2). Nevertheless, there are few detailed reports concerning the underlying mechanism: it remains unclear how the exclusively exercise-induced symptoms of 2:1AVB with a normal ventricular function develop. Our patient, who had activity- and rate-dependent 2:1AVB and a normal LVEF, had exclusively exercise-induced marked symptoms.
If 2:1AVB is clearly accompanied by symptoms, PMI is recommended as a Class I indication in the current Japanese guidelines (3), whereas it is considered at least reasonable as a Class IIa indication in the current American guidelines (4). However, 2:1AVB itself is not usually accompanied by a long pause, nor does it always provoke symptoms at rest or in daily activities. In our case, at first, we were unable to establish a clear correlation between the symptoms and 2:1AVB because no accompanying symptoms were noted during the 2:1AVB on either 12-lead ECG at rest or Holter monitoring in daily activities. Therefore, there was some concern that her marked symptoms might not be resolved after PMI therapy for just 2:1AVB.
We then conducted a treadmill test on our patient to examine the response of the bradyarrhythmia to exercise, confirm the correlation between the symptoms and the 2:1AVB or resulting bradyarrhythmia, and diagnose the presence of more advanced AV block or myocardial ischemia. In the treadmill test, there was neither more advanced AV block nor myocardial ischemia. However, 2:1AVB and transient hypotension developed simultaneously, and real-time marked symptoms were reproduced. This clear correlation between 2:1AVB and symptoms was a Class I indication for PMI in the current Japanese guidelines and at least a Class IIa indication in the current American guidelines (3, 4).
However, we were unable to identify the level of AV block, despite performing a treadmill test (Class IIa test in the American guidelines) (4). While this patient manifested exercise-induced 2:1AVB, which, along with CRBBB, might suggest infranodal block (4), the AV conduction changed transitionally from Wenckebach second-degree AV block to 2:1AVB during the treadmill test, which implied intranodal AV block (7). These were ambiguous results.
As the next step, an electrophysiological study (Class IIb test in the American guidelines) was considered able to accurately identify the anatomic site of AV block (4). If the infranodal block could be identified on an electrophysiological study, PMI would then have been recommended as a Class I indication in the current American guidelines (4). However, an electrophysiological study might not have been able to reproduce activity-dependent symptoms in the supine position on an examining table. To make a confident decision concerning PMI therapy for the elimination of the exercise-induced marked symptoms in our patient, we needed to establish a clear correlation between the symptoms and 2:1AVB (Class I PMI indication in the Japanese guidelines) (3) on a treadmill test (Class IIa test in the American guidelines) rather than identifying the site of AV block (Class I PMI indication in the American guidelines) on a treadmill test (Class IIa test in the American guidelines) or even an electrophysiological study (Class IIb test in the American guidelines) (4).
Figure 5 caption: A re-examination of the modified Bruce protocol treadmill test after PMI. Neither bradycardia, hypotension, nor marked symptoms were reproduced during the exercise, and her exercise capacity improved to 10 minutes and 30 seconds of exercise. PMI: pacemaker implantation, bpm: beats per minute, BP: blood pressure, HR: heart rate, min: minute
There have been few detailed reports on how symptoms of 2:1AVB in cases with a normal ventricular function develop exclusively upon exercise, and there are even fewer reports suggesting that exclusively exercise-induced symptoms are related to transient hypotension. One case report of 2:1AVB with a normal LV function and the same symptoms as in our patient noted that resting echocardiography revealed moderate diastolic mitral and tricuspid regurgitation immediately following blocked P waves of 2:1AVB (8). These manifestations improved after PMI therapy for 2:1AVB.
In our patient with a normal LVEF, the symptoms developed during 2:1AVB exclusively on moderate-to-heavy exercise, such as the treadmill test, whereas no symptoms developed during 2:1AVB on a 12-lead ECG at rest or on Holter monitoring during mild daily activities. This might be because moderate-to-heavy exercise reduced the compliance of cardiac ventricles (9), thereby increasing diastolic mitral and tricuspid regurgitation immediately after blocked P waves of 2:1AVB (8), and thus this condition, along with the relative bradycardia, failed to maintain the BP (Fig. 3), resulting in hemodynamic collapse despite a normal LVEF.
A previous study found that, in an athlete with asymptomatic complete AV block, the stroke index increased, and the mean arterial BP was maintained at almost the same level despite bradycardia throughout a progressive cycle ergometer test, unlike in our case (10). This was likely achieved through an enlarged end-diastolic volume, as per the Frank-Starling mechanism (11). However, another recent article referred to hypotension as a sign of hemodynamic instability of second-degree AV block that should be urgently treated with atropine, sympathomimetic agents, or temporary cardiac pacing (7). Nevertheless, to our knowledge, there are no reports on the detailed mechanism underlying the exclusively exercise-induced symptoms of 2:1AVB patients with a normal ventricular function. In our patient with a normal LVEF, it was thought that the compensatory maintenance of arterial BP was absent, but that the abrupt transient hypotension occurred exclusively during the treadmill test or moderate-to-heavy exercise in daily activities, resulting in her marked symptoms (Fig. 3).
These factors noted above were suggestive of the hemodynamic advantages of DDD-mode PMI therapy (8,9). After PMI for 2:1AVB, transient hypotension disappeared, her exercise-induced marked symptoms resolved and the exercise capacity also improved in the re-examination of the treadmill test (Fig. 5).
In addition, in the present case, 2:1AVB developed at a sinus rate of about 100 bpm on a 12-lead ECG at rest and Holter monitoring in mild daily activities, whereas 2:1AVB developed at a sinus rate of only over 120 bpm on moderate-to-heavy exercise, such as the treadmill test. Furthermore, symptoms of 2:1AVB developed exclusively on moderate-to-heavy exercise, whereas no symptoms developed even during 2:1AVB in mild daily activities. Namely, symptoms accompanied by 2:1AVB occurred depending on the extent of exercise rather than the development of 2:1AVB alone.
Although echocardiography at our hospital revealed neither underlying disease nor LV systolic dysfunction, moderate-to-heavy exercise was able to reduce the cardiac ventricle compliance (9), probably due to the Frank-Starling mechanism (10). Therefore, at any age, the development of symptoms as well as transient hypotension correlated with 2:1AVB on moderate-to-heavy exercise might depend directly on the balance between exercise strength and the extent of cardiac ventricular diastolic dysfunction, both of which are individually and independently affected by aging.
Several limitations associated with the present study warrant mention. First, we did not perform cardiac magnetic resonance imaging or positron emission tomography/computed tomography, as there were no abnormal echocardiographic findings; we were therefore unable to evaluate the pathology or anatomy in greater detail. Second, we did not conduct an electrophysiological study, as the correlation between the symptoms and 2:1AVB was evident; this may nevertheless be a limitation from an anatomical viewpoint.
Conclusions
A 62-year-old woman with activity-dependent 2:1AVB and normal LVEF was referred to our department for the evaluation of exclusively exercise-induced marked symptoms. In this case, the treadmill test helped reproduce the marked symptoms and 2:1AVB simultaneously, thereby establishing a clear correlation between the two.
The exercise-induced marked symptoms of 2:1AVB may have been caused by abrupt transient hypotension along with relative bradycardia exclusively on moderate-to-heavy exercise, probably due to increased diastolic mitral and tricuspid regurgitation. DDD-mode PMI was thought to have simultaneously corrected the 2:1AVB and the abrupt hypotension and restored hemodynamic stability, particularly during moderate-to-heavy exercise, thereby eliminating the exclusively exercise-induced marked symptoms of our presenile 2:1AVB patient.

Oxidative Stress, Mitochondrial Dysfunction, and Aging
Aging is an intricate phenomenon characterized by progressive decline in physiological functions and increase in mortality that is often accompanied by many pathological diseases. Although aging is almost universally conserved among all organisms, the underlying molecular mechanisms of aging remain largely elusive. Many theories of aging have been proposed, including the free-radical and mitochondrial theories of aging. Both theories speculate that cumulative damage to mitochondria and mitochondrial DNA (mtDNA) caused by reactive oxygen species (ROS) is one of the causes of aging. Oxidative damage affects replication and transcription of mtDNA and results in a decline in mitochondrial function which in turn leads to enhanced ROS production and further damage to mtDNA. In this paper, we will present the current understanding of the interplay between ROS and mitochondria and will discuss their potential impact on aging and age-related diseases.
Introduction
The fundamental manifestation of the aging process is a progressive decline in the functional maintenance of tissue homeostasis and an increasing propensity to degenerative diseases and death [1]. The underlying mechanisms of aging have attracted significant interest, and many theories have been put forward to explain the phenomenon of aging. There is an emerging consensus that aging is a multifactorial process that is genetically determined and influenced epigenetically by the environment [2]. Most aging theories postulate a single physiological cause of aging, and these theories are likely correct to a certain degree and in certain aspects of aging.
Reactive oxygen species (ROS) are highly reactive molecules that consist of a number of diverse chemical species including superoxide anion (O 2 − ), hydroxyl radical ( · OH), and hydrogen peroxide (H 2 O 2 ). Because of their potential to cause oxidative deterioration of DNA, protein, and lipid, ROS have been implicated as one of the causative factors of aging [3]. As ROS are generated mainly as byproducts of mitochondrial respiration, mitochondria are thought to be the primary target of oxidative damage and play an important role in aging. Emerging evidence has linked mitochondrial dysfunction to a variety of age-related diseases, including neurodegenerative diseases and cancer. Details of the precise relationship between ROS-induced damage, mitochondrial dysfunction, and aging remain to be elucidated.
ROS, Oxidative Damage, and Cellular Signaling.
There are several sources of ROS within a cell. ROS are generated as by-products of aerobic respiration and various other catabolic and anabolic processes [4]. Mitochondria are the major producer of ROS in cells, and the bulk of mitochondrial ROS is generated at the electron transport chain [5,6]. Electrons leak from the electron transport chain directly to oxygen, producing short-lived free radicals such as superoxide anion (O 2 − ) [7,8]. O 2 − can be converted to nonradical derivatives such as hydrogen peroxide (H 2 O 2 ) either spontaneously or catalyzed by superoxide dismutase (SOD) [9][10][11][12][13]. H 2 O 2 is relatively stable and membrane permeable. It can be diffused within the cell and be removed by cytosolic antioxidant systems such as catalase, glutathione peroxidase, and thioredoxin peroxidase [14,15]. In addition to being generated during cellular metabolism in mitochondria, ROS can be produced in response to different environmental stimuli such as growth factors, inflammatory cytokines, ionizing radiation, UV, chemical oxidants, chemotherapeutics, hyperoxia, toxins, and transition metals [16][17][18][19][20][21][22][23][24][25][26]. Other than mitochondrial respiration, a number of cytosolic enzymes are able to generate ROS [27]. The nicotinamide adenine dinucleotide phosphate (NADPH) oxidases are a group of plasma membrane-associated enzymes found in a variety of cell types [28]. The function of NADPH oxidases is to produce superoxide from oxygen using electrons from NADPH [29].
Once they are produced, ROS react with lipids, proteins, and nucleic acids causing oxidative damage to these macromolecules [30][31][32][33][34]. ROS readily attack DNA and generate a variety of DNA lesions, such as oxidized DNA bases, abasic sites, and DNA strand breaks, which ultimately lead to genomic instability [35]. 7,8-dihydro-8-oxodeoxyguanosine (8-oxo-dG) is one of the most abundant and well-characterized DNA lesions caused by ROS [36]. It is a highly mutagenic lesion that results in G : C to T : A transversions [37]. To limit the cellular damage caused by ROS, mammalian cells have evolved a number of sophisticated defense mechanisms. ROS-generated DNA lesions are repaired mainly by base excision repair as well as other DNA repair pathways including nucleotide excision repair, double-strand break repair, and mismatch repair [38][39][40]. In addition, the damaging effects of ROS can be neutralized via elevated antioxidant defense, which includes superoxide dismutase, catalase, and glutathione peroxidase to scavenge ROS to nontoxic forms [41].
Intracellular ROS are normally maintained at a low but measurable level within a narrow range, which is regulated by the balance between the rate of production and the rate of scavenging by various antioxidants [42]. At low levels under normal conditions, ROS act as signaling molecules in many physiological processes, including redox homeostasis and cellular signal transduction [7]. By activating proteins such as tyrosine kinases, mitogen-activated protein kinases, or Ras proteins, ROS are important mediators of signal transduction pathways [7]. Depending on the cell type, ROS have been found to function as signaling molecules in cell proliferation [43], cellular senescence [44], or cell death [45,46]. The divergent effects of ROS on many cellular processes suggest that ROS are not merely detrimental byproducts, but are also generated purposefully to mediate a variety of signaling pathways.
The Free Radical Theory of Aging.
The free radical theory of aging proposed by Denham Harman more than fifty years ago postulates that aging results from the accumulation of deleterious effects caused by free radicals, and the ability of an organism to cope with cellular damage induced by ROS plays an important role in determining organismal lifespan [3]. In agreement with this theory, increased ROS production by mitochondria and increased 8-oxo-dG content in the mtDNA are frequently detected in aged tissues [40,[47][48][49][50], suggesting that progressive accumulation of oxidative DNA damage is a contributory factor to the aging process. Consistently, many studies have found that increased oxidative damage in cells is associated with aging [51][52][53]. Furthermore, genetic studies in worm, fly, and mouse have linked enhanced stress resistance or reduced free radical production with increased lifespan [27]. Mutant strains of C. elegans that are resistant to oxidative stress have extended lifespan, whereas those more susceptible to free radicals have shortened lifespan [54,55]. Mice lacking the antioxidant enzyme superoxide dismutase 1 (SOD1) exhibit a 30% decrease in life expectancy [56]. Conversely, simultaneous overexpression of SOD1 and catalase extends lifespan in Drosophila [57]. Small synthetic mimetics of SOD/catalase increase lifespan in C. elegans [58], while treatment of antioxidant drugs in mice increases the median lifespan up to 25% [59,60]. Further supporting this hypothesis, mice lacking Ogg1 and Myh, two enzymes of the base excision repair pathway that repairs oxidative DNA damage, show a 50% reduction in life expectancy [61]. Collectively, these studies demonstrate that interplay between ROS and protective antioxidant responses is an important factor in determining aging and lifespan.
Despite a large body of evidence supporting the role of ROS in aging, the free radical theory of aging faces some challenges [62]. Mice heterozygous for superoxide dismutase 2 (Sod2 +/− ) have reduced manganese SOD (MnSOD) activity and increased oxidative damage, but a normal lifespan [63]. Overexpression of antioxidant enzymes in mice, such as SOD1 or catalase, does not extend lifespan [64,65]. The median lifespan of mice heterozygous for glutathione peroxidase 4 (Gpx4 +/− ), an antioxidant defense enzyme that plays an important role in detoxifying oxidative damage to membrane lipids, is significantly longer than that of wild-type mice, even though Gpx4 +/− mice show increased sensitivity to oxidative stress-induced apoptosis [66]. Studies of long-lived rodents also do not find a convincing correlation between the level of oxidative damage and aging [67]. Furthermore, pharmacologic intervention with antioxidants in humans and mice has little effect on prolonging lifespan [68][69][70]. More investigations are clearly needed to clarify the discrepancy in the role of ROS and antioxidant enzymes in aging among different species and to understand the precise role that free radicals play in aging.
ROS and Senescence.
Senescence, a process in which normal somatic cells enter an irreversible growth arrest after a finite number of cell divisions [71], is thought to contribute to organismal aging [72][73][74]. Senescent cells are associated with high level of intracellular ROS and accumulated oxidative damage to DNA and protein [75][76][77]. In contrast, immortal cells suffer less oxidative damage and are more resistant to the deleterious effects of H 2 O 2 than primary cells [78]. Increasing intracellular oxidants by altering ambient oxygen concentrations or lowering antioxidant levels accelerates the onset of senescence, while lowering ambient oxygen or increasing ROS scavenging delays senescence [76,[78][79][80][81].
Telomere shortening is considered the major cause of replicative senescence [82,83]. It has been reported that the rate of telomere shortening is directly related to the cellular level of oxidative stress [84]. Telomere shortening is significantly increased under mild oxidative stress as compared to that observed under normal conditions, whereas overexpression of the extracellular SOD in human fibroblasts decreases the peroxide content and the rate of telomere shortening [79]. ROS can affect telomere maintenance at multiple levels. The presence of 8-oxoguanine (8-oxoG), an oxidative derivative of guanine, in telomeric repeat-containing DNA oligonucleotides has been shown to impair the formation of intramolecular G quadruplexes and reduces the affinity of telomeric DNA for telomerase, thereby interfering with telomerase-mediated extension of single-stranded telomeric DNA [85]. ROS also affect telomeres indirectly through their interaction with the catalytic subunit of telomerase, telomerase reverse transcriptase (TERT). Increased intracellular ROS lead to loss of TERT activity, whereas ROS scavengers such as N-acetylcysteine (NAC) block ROS-mediated reduction of TERT activity and delay the onset of cellular senescence [86]. Furthermore, the presence of 8-oxoG in the telomeric sequence reduces the binding affinity of TRF1 and TRF2 to telomeres [87]. TRF1 and TRF2 are components of the telomere-capping shelterin complex that protects the integrity of telomeres [88]. In addition, ROS-induced DNA damage elicits a DNA damage response, leading to the activation of p53 [89], a critical regulator of senescence. It has been shown that p53 transactivates the E3 ubiquitin ligase Siah1, which in turn mediates ubiquitination and degradation of TRF2. Consequently, knockdown of Siah1 expression stabilizes TRF2 and delays the onset of replicative senescence [90]. 
The p53-Siah1-TRF2 regulatory axis places p53 both downstream and upstream of DNA damage signaling initiated by telomere dysfunction. By regulating telomere maintenance or integrity directly or indirectly, ROS plays a critical role in senescence.
ROS and Stem Cell Aging.
Tissue-specific or adult stem cells, which are capable of self-renewal and differentiation, are essential for the normal homeostatic maintenance and regenerative repair of tissues throughout the lifetime of an organism. The self-renewal ability of stem cells is known to decline with advancing age [91][92][93][94], suggesting that decline in stem cell function plays a central role in aging. Increasing evidence suggests that dysregulated formation of ROS may drive stem and progenitor cells into premature senescence and therefore impede normal tissue homeostasis.
Genetic studies of mice deficient in genes implicated in ROS regulation indicate that elevated level of ROS within the stem cell compartments leads to a rapid decline in stem cell self-renewal [95][96][97][98]. Deletion of Ataxia telangiectasia mutated (ATM) kinase results in increased ROS level in hematopoietic stem cell (HSC) population in aged mice, which correlates with a rapid decline in HSC number and function [95]. When Atm −/− mice are treated with antioxidants, the defect in stem cell self-renewal is rescued [95], suggesting that high level of ROS causes the decline in stem cell function. Furthermore, deficiency in telomerase reverse transcriptase (TERT) accelerates the progression of aging, resulting in an even shorter lifespan in Atm −/− mice accompanied by increased senescence in hematopoietic tissues and decreased stem cell activity [99]. These TERT-deficient HSCs are also sensitive to ROS-induced apoptosis, suggesting another possible cause of stem cell impairment during aging [99]. Similarly, defect in HSC number and activity accompanied by increased accumulation of ROS is observed in mice lacking three members of Forkhead box O-class (FoxO) [96][97][98]. Increased level of ROS in FoxO3-null myeloid progenitors leads to hyperproliferation through activation of the AKT/mTOR signaling pathway, and ultimately premature exhaustion of progenitors [100]. Mice carrying a mutation in inner mitochondrial membrane peptidase 2-like (Immp2l) gene, which is required to process signal peptide of mitochondrial cytochrome c1 and glycerol phosphate dehydrogenase 2, exhibit an early onset of aging phenotypes, including premature loss of fat [101]. Elevated mitochondrial ROS level in the Immp2l mutant mice leads to impaired self-renewal of adipose progenitor cells, suggesting that ROS-induced damage to adult stem cells is the driving force of accelerated aging in these mice [101]. 
Further supporting this notion, intracellular level of ROS is found to correlate with the long-term self-renewal ability of HSCs in mouse [102]. HSCs with high level of ROS show a decreased ability of long-term self-renewal, and treatment of antioxidant NAC is able to restore the functional activity of HSCs with high level of ROS [102]. Taken together, these studies suggest that ROS play an important role in stem cell aging.
ROS-generated DNA lesions are repaired by several DNA repair pathways including base excision repair, nucleotide excision repair, double-strand break repair, and mismatch repair [38][39][40]. Endogenous DNA damage accumulates with age in HSCs in mouse. HSCs in mice deficient in DNA repair pathways, including nucleotide excision repair, telomere maintenance, and nonhomologous end-joining, exhibit increased sensitivity to the detrimental effect of ROS, diminished self-renewal and functional exhaustion with age [103]. These data support the notion that accumulated DNA damage is one of the principal mechanisms underlying age-dependent stem cell decline.
Mitochondria and Aging
3.1. The Mitochondrial Theory of Aging.
Because mitochondria are the major producer of ROS in mammalian cells, the close proximity to ROS renders mitochondrial DNA (mtDNA) prone to oxidative damage [104]. Consistently, many studies have shown that 8-oxo-dG, one of the common oxidative lesions, is detected at higher levels in mtDNA than in nuclear DNA, suggesting that mtDNA is more susceptible to oxidative damage [52,[105][106][107][108][109][110][111][112][113]. As both the major producer and primary target of ROS, mitochondria are thought to play an important role in aging. The mitochondrial theory of aging, extended from the free radical theory, proposes that oxidative damage generated during oxidative phosphorylation of mitochondrial macromolecules such as mtDNA, proteins, or lipids is responsible for aging [114]. As mtDNA encodes essential components of oxidative phosphorylation and the protein synthesis machinery [115], oxidative damage-induced mtDNA mutations that impair either the assembly or the function of the respiratory chain will in turn trigger further accumulation of ROS, resulting in a vicious cycle that leads to energy depletion in the cell and ultimately cell death [104,114,[116][117][118].
As mitochondria play a critical role in the regulation of apoptosis, which is implicated in the aging process [119], age-related mitochondrial oxidative stress may contribute to apoptosis upon aging. The activation of the permeability transition pore in mitochondria, which is believed to play a critical role in cell necrosis and apoptosis, is enhanced in the spleen, brain, and liver of aged mice [120,121]. Moreover, mitochondrial adenine nucleotide translocase, a component of the permeability transition pore, exhibits an age-associated increase in oxidative modification in male houseflies [122]. Such selective oxidative modification may render cells more vulnerable to apoptotic inducers [123]. Thus, mitochondria appear to influence the aging process via modifying the regulatory machinery of apoptosis.
Mice expressing proofreading-deficient mitochondrial DNA polymerase show a consistent increase in mtDNA mutations, premature onset of aging phenotypes, and reduced lifespan [124,125], suggesting a critical link between mitochondria and aging. Interestingly, ROS production in these mice is not increased [124,125]. Similarly, mice expressing proofreading-deficient mitochondrial DNA polymerase specifically in the heart show accumulation of mutations in mtDNA and develop cardiomyopathy, but oxidative stress in the transgenic heart is not increased, indicating that oxidative stress is not an obligate mediator of diseases provoked by mtDNA mutations [126]. More studies are required to further clarify the consequences of oxidative stress and mitochondrial dysfunction in aging.
Age-Associated Changes of Mitochondria.
The mitochondrial genome encodes proteins required for oxidative phosphorylation and ATP synthesis, and RNAs needed for mitochondrial protein translation [115]. The mtDNA is densely packed with genes and contains only one noncoding region, called the displacement loop (D-loop) [127]. The D-loop is important for mtDNA replication and transcription and has been extensively studied for the presence of age-related mutations [115]. Age-dependent accumulation of point mutations within the D-loop has been reported in various types of cells and tissues, including skin and muscle [128][129][130][131][132]. In addition to point mutations, deletions of mtDNA are detected at higher frequency in aged human and animal tissues [133][134][135][136][137][138][139][140][141][142][143][144][145]. Replication is thought to be the likely mechanism leading to the formation of mtDNA deletions [146][147][148], but recent studies suggest that mtDNA deletions may be generated during repair of damaged mtDNA rather than during replication [149]. It is thought that repair of oxidative damage to mtDNA accumulated during aging leads to the generation of double-strand breaks [149], with single-strand regions free to anneal with microhomologous sequences on other single-stranded mtDNA or within the noncoding region [150]. Subsequent repair, ligation, and degradation of the remaining exposed single strands would result in the formation of an intact mitochondrial genome harboring a deletion [149]. Whether and how exactly mutations and deletions of mtDNA cause the aging phenotypes is not clear. Among mtDNA deletions during aging, especially in postmitotic tissues like muscle and brain, the most common one is a 4977-bp deletion [151][152][153]. The frequency of this deletion increases in brain, heart, and skeletal muscle with age, although the increase varies in different tissues of the same individual [154], or even in different regions of the same tissue [134,136,137]. 
This deletion occurs in a region encoding subunits of the NADH dehydrogenase, cytochrome c oxidase, and ATP synthase [155]. Whether deletion of these genes plays a causative role in the development of aging phenotypes remains to be determined.
In addition to age-associated increase of mtDNA mutations and deletions, the abundance of mtDNA also declines with age in various tissues of human and rodent [156][157][158]. For instance, in a large group of healthy men and women aged from 18 to 89 years, mtDNA and mRNA abundance is found to decline with advancing age in the vastus lateralis muscle. Furthermore, abundance of mtDNA correlates with the rate of mitochondrial ATP production [158], suggesting that age-related mitochondrial dysfunction in muscle is related to reduced mtDNA abundance. However, age-associated change in mtDNA abundance seems to be tissue specific, as several studies have reported no change in mtDNA abundance with age in other tissues in human and mouse [159][160][161]. It is possible that tissue-specific effect of aging on mtDNA abundance is related to the status of aerobic activity [156,158], as aerobic exercise has been shown to enhance muscle mtDNA abundance in both human and mouse [162][163][164]. Increased prevalence of mtDNA mutations/deletions and decreased mtDNA abundance offer attractive underlying causes of mitochondrial dysfunction in aging, which warrants further investigation.
Mitochondrial Malfunction in Age-Associated Human Diseases.
A heterogeneous class of disorders with a broad spectrum of complex clinical phenotypes has been linked to mitochondrial defect and oxidative stress [165,166]. Particularly, mitochondria are thought to play an important role in the pathogenesis of age-associated neurodegenerative diseases, such as Alzheimer's disease, Parkinson's disease, and Huntington's disease. This is not surprising as neurons are especially sensitive and vulnerable to any abnormality in mitochondrial function because of their high energy demand.
Alzheimer's disease (AD) is the most common form of dementia and often diagnosed in people over 65 years of age. AD is characterized by severe neurodegenerative changes, such as cerebral atrophy, loss of neurons and synapses, and selective depletion of neurotransmitter systems in cerebral cortex and certain subcortical region [167]. Mitochondria are significantly reduced in various types of cells obtained from patients with AD [168][169][170]. Dysfunction of mitochondrial electron transport chain has also been associated with the pathophysiology of AD [170]. The most consistent defect in mitochondrial electron transport enzymes in AD is a deficiency in cytochrome c oxidase [171,172], which leads to an increase in ROS production, a reduction in energy stores, and disturbance in energy metabolism [173].
Parkinson's disease (PD) is the second most common progressive disorder of the central nervous system, which is characterized prominently by loss of dopaminergic neurons in the substantia nigra and formation of intraneuronal protein aggregates [174]. The finding that exposure to environmental toxins, which inhibit mitochondrial respiration and increase production of ROS, causes loss of dopaminergic neurons in human and animal models has led to the hypothesis that oxidative stress and mitochondrial dysfunction are involved in PD pathogenesis [175]. Consistent with this notion, a significant decrease in the activity of complex I of the electron transport chain is observed in the substantia nigra of PD patients [176]. Furthermore, the neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine, which acts as an inhibitor of complex I, can induce parkinsonism in humans, monkeys, and rodents [177,178]. Genetic studies of PINK1 and PARKIN further support the role of mitochondrial dysfunction in the pathogenesis of PD [179,180]. Autosomal recessive mutations in PINK1 and PARKIN are associated with juvenile parkinsonism [181][182][183]. Studies in Drosophila have provided strong evidence that PINK1 and PARKIN act in the same genetic pathway to control mitochondrial morphology in tissues with high energy demand and a requirement for proper mitochondrial function, such as the indirect flight muscle and dopaminergic neurons [184][185][186]. Consistent with the findings in Drosophila, primary fibroblasts derived from patients with PINK1 mutations show similar abnormalities in mitochondrial morphology [187]. The morphologic changes of mitochondria can be rescued by expression of wild-type PARKIN but not pathogenic PARKIN mutants [187], suggesting that mitochondrial dynamics plays an important role in PD pathogenesis.
Huntington's disease (HD) is another hereditary neurodegenerative disorder that affects muscle coordination and leads to cognitive decline and dementia. HD is caused by an autosomal dominant mutation in the Huntingtin (HTT) gene [188]. Morphologic defects of mitochondria, such as reduced mitochondrial movement and alterations in mitochondrial ultrastructures, have been observed in patients with HD and in transgenic HD mouse models [189,190]. In addition, expression of mutant HTT leads to impaired energy metabolism, abnormal Ca 2+ signaling and mitochondrial membrane potential, and drastic changes in mitochondrial ultrastructures [191,192]. Although the underlying molecular mechanism remains to be determined, it has recently been proposed that mutant HTT conveys its neurotoxicity by evoking defects in mitochondrial dynamics, mitochondrial fission and fusion, and organelle trafficking, which in turn result in bioenergetic failure and HD-associated neuronal dysfunction [189].
Mitochondrial dysfunction and increased oxidative damage are often associated with AD, PD, and HD, suggesting that oxidative stress may play an important role in the pathophysiology of these diseases [193]. Increased production of cellular ROS and oxidative stress have been reported to induce autophagy, a homeostatic process that enables cells to degrade cytoplasmic proteins and organelles [194][195][196][197]. The observation of increased autophagy in the brains of patients with AD, PD, and HD suggests that autophagy contributes to the pathogenesis of these neurodegenerative diseases, possibly by causing cell death [170,[198][199][200][201][202]. Consistently, oxidative stress-induced autophagy of accumulated amyloid β-protein in AD causes permeabilization of lysosomal membrane and leads to neuronal cell death [203]. Mitochondria damaged by significantly increased oxidative stress in pyramidal neurons of AD are subjected to autophagic degradation, ultimately leading to neurodegeneration [204]. Furthermore, overexpression of wildtype PINK1 increases mitochondrial interconnectivity and suppresses toxin-induced autophagy, whereas knockdown of PINK1 expression potentiates mitochondrial fragmentation and induces autophagy [197], suggesting that induced autophagy as a consequence of loss of function of PINK1 may contribute to the pathogenesis of PD.
Interestingly, autophagy also serves as a protective mechanism in age-related neurodegenerative diseases. Several studies demonstrate that basal level of autophagy clears the deleterious protein aggregates that are associated with AD, PD, and HD [205][206][207], therefore playing a protective role in the maintenance of neural cells. For instance, autophagy is involved in degradation of HTT aggregates [198]. Administration of rapamycin induces autophagy and enhances the clearance of mutant HTT, improving cell viability and ameliorating HD phenotypes in cell and animal models [208]. Furthermore, PARKIN, whose loss of function mutation causes early onset PD, has been found to promote autophagy of depolarized mitochondria [209], suggesting that a failure to eliminate damaged mitochondria by mutant PARKIN is responsible for the pathogenesis of PD. It is not entirely clear why autophagy can exert protective or deleterious effects on pathogenesis of these neurodegenerative diseases. A better understanding of autophagy, mitochondrial dysfunction, and oxidative stress is necessary in order to dissect the pathogenesis of AD, PD, and HD.
Cancer is considered an age-associated disease, as the incidence of cancer increases exponentially with age. Warburg first discovered that cancer cells constitutively metabolize glucose and produce excessive lactic acid even in the presence of abundant oxygen, a phenomenon named "aerobic glycolysis" [210]. In contrast, normal cells generate energy mainly from the oxidative breakdown of pyruvate, an end product of glycolysis that is oxidized in mitochondria. Conversion of glucose to lactate takes place only in the absence of oxygen (termed the "Pasteur effect") in normal cells. He hypothesized that a defect in mitochondrial respiration in tumor cells is the cause of cancer, and that cancer should be interpreted as mitochondrial dysfunction [210]. A growing body of evidence has demonstrated the presence of both somatic and germline mutations in mtDNA in various types of human cancers [211][212][213]. The most direct evidence that mtDNA mutations may play an important role in neoplastic transformation comes from a study that introduced a known pathogenic mtDNA mutation, T8993G, into the prostate cancer cell line PC3 through transmitochondrial cybrids [214]. The T8993G mutation, derived from a mitochondrial disease patient, causes a 70% reduction in ATP synthase activity and a significant increase in mitochondrial ROS production [215]. Tumor growth in the T8993G mutant cybrids is much faster than that in the wild-type control cybrids [214]. Moreover, staining of tumor sections confirms a dramatic increase in ROS production in T8993G mutant tumors, suggesting that mitochondrial dysfunction and ROS elevation contribute to tumor progression. Consistent with this notion, Sod2 +/− mice exhibit increased oxidative damage and enhanced susceptibility to cancer as compared to wild-type mice [63]. 
Collectively, these studies suggest that mtDNA mutations could contribute to cancer progression by increasing mitochondrial oxidative damage and changing cellular energy capacities.
Mouse Models of Oxidative Stress and Mitochondrial Dysfunction in Aging
Genetically engineered mouse models provide great systems to directly dissect the complex relationship between oxidative damage, mitochondrial dysfunction, and aging. Although it is difficult to manipulate the mitochondrial genome, genetic engineering of nuclear genes that are involved in the oxidative stress response and mitochondrial function has been utilized to study mitochondrial biology and aging.
Mammalian cells scavenge ROS to nontoxic forms through a sophisticated antioxidant defense that includes superoxide dismutase (SOD), catalase, and glutathione peroxidase. Genetic ablation of SOD2, which encodes a mitochondrial manganese SOD (MnSOD), leads to early postnatal death in mice accompanied by a dilated cardiomyopathy, metabolic acidosis, accumulation of lipid in liver and skeletal muscle, increased oxidative damage, and enzymatic abnormalities in mitochondria [216,217]. Treatment of Sod2 −/− mice with a synthetic SOD mimetic not only rescues their mitochondrial defects in the liver, but also dramatically prolongs their survival [218]. Furthermore, heterozygous Sod2 +/− mice show evidence of decreased membrane potential, inhibition of respiration, and rapid accumulation of mitochondrial oxidative damage [219]. Mitochondrial oxidative stress induced by partial loss of SOD2 leads to an increase in proton leak, sensitization of the mitochondrial permeability transition pore and premature induction of apoptosis [219]. These studies clearly demonstrate that ROS generated in mitochondria play an important role in cell homeostasis and aging.
Conflicting results on the effect of increased SOD2 expression on aging have been obtained using different SOD2 transgenic mouse strains [220][221][222]. A transgenic line carrying a human SOD2 transgene under the control of a human β-actin promoter shows protection against hyperoxic lung injury [220], reduction in mitochondrial superoxide in hippocampal neurons, and extended lifespan as the result of increased activity of MnSOD [221]. Another transgenic line carrying a 13-kb mouse genomic fragment containing SOD2 [223] has a twofold increase in the activity of MnSOD [222].
Such a level of SOD2 overexpression does not alter either lifespan or age-related pathology, even though these mice exhibit decreased lipid peroxidation, increased resistance against paraquat-induced oxidative stress, and a decreased age-related decline in mitochondrial ATP production [222]. The reason behind the different outcomes of these two SOD2 transgenic mouse lines on lifespan is not clear, but may be related to different levels of SOD2 expression. The precise role of SOD2 in aging needs further investigation.
An important function of mitochondria is to produce ATP. Targeting genes involved in ATP production offers a great opportunity to study the role of mitochondrial function in aging. An example is a mouse model with targeted inactivation of adenine nucleotide translocator (ANT), a transporter protein that imports ADP into and exports ATP from the mitochondria. Ant1 −/− mice exhibit classical physiological features of mitochondrial myopathy and hypertrophic cardiomyopathy in humans, as evidenced by cardiac hypertrophy, an increase in succinate dehydrogenase and cytochrome c oxidase activities, degeneration of the contractile muscle fibers, and a massive proliferation of abnormal mitochondria in skeletal muscle [224]. The increase in mitochondrial abundance and volume in muscle of Ant1 −/− mice is accompanied by upregulation of genes that are known to be involved in oxidative phosphorylation [225]. Consistently, mitochondrial H 2 O 2 production increases in skeletal muscle and heart of Ant1 −/− mice [226]. The Ant1-deficient mouse model provides strong evidence that a defect in mitochondrial energy metabolism can result in pathological disease [224].
IMMP2L protein is a subunit of a heterodimeric inner mitochondrial membrane peptidase complex that cleaves signal peptides from precursor or intermediate polypeptides after they reach the inner membrane of mitochondria [227,228]. Mammalian IMMP2L has two known substrates, cytochrome c1 and glycerol phosphate dehydrogenase 2, both of which are involved in superoxide generation [229]. The Immp2l mutant mice have impaired processing of the signal peptides of cytochrome c1 and glycerol phosphate dehydrogenase 2 [230], and consequently show elevated levels of superoxide, hyperpolarization of mitochondria, and increased oxidative stress in multiple organs. Furthermore, these Immp2l mutant mice exhibit multiple aging-related phenotypes, including wasting, sarcopenia, loss of subcutaneous fat, kyphosis, and ataxia [101]. These data provide strong evidence that mitochondrial dysfunction is a driving force of accelerated aging.
Conclusion
Aging is a complex process involving a multitude of factors. Many studies have demonstrated that oxidative stress and mitochondrial dysfunction are two important factors contributing to the aging process. The importance of mitochondrial dynamics in aging is illustrated by its association with a growing number of age-associated pathologies. A better understanding of the response to oxidative stress and mitochondrial dynamics will lead to new therapeutic approaches for the prevention or amelioration of age-associated degenerative diseases. | 2014-10-01T00:00:00.000Z | 2011-10-02T00:00:00.000 | {
"year": 2011,
"sha1": "5fd7016cb70d2ca4fe7aa898e316a8d5abc4fdaa",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/archive/2012/646354.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7b4038bb2aca26f843b08e0fbb02083e6d25d3b0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
230597205 | pes2o/s2orc | v3-fos-license | Retinal Structural and Microvascular Alterations in Different Acute Ischemic Stroke Subtypes
Introduction Retinal structural and microvascular damage reflects damage to the cerebral microvasculature and neurons. We aimed to investigate neurovascular unit abnormalities among patients with large-artery atherosclerosis (LAA) or small-artery occlusion (SAA) and control subjects. Methods Twenty-eight LAA patients, forty-one SAA patients, and sixty-five age- and gender-matched controls were recruited. Based on optical coherence tomography angiography (OCTA), retinal capillary vessel density was assessed in the general and local sectors, and the thickness of each individual retinal layer was extracted from retinal structural images. Differences in structural and microvascular parameters were analyzed. Results The superior peripapillary retinal nerve fiber layer (pRNFL) thickness was significantly different among the three groups, and the LAA group had the thinnest values. Compared to the control group, the deep retinal capillary vessel densities in the two stroke subgroups were significantly reduced in all regions except the inferior region (P < 0.05), and the fractal dimension in the C2 and C4 regions of the deep retina was significantly lower in the LAA group (P < 0.05). Discussion. Compared with the superficial microvascular network, the deep microvascular network is more sensitive to ischemic stroke. In addition, we have demonstrated quadrant-specific pRNFL abnormalities in LAA and SAA patients. Superior-quadrant pRNFL thickness differences between stroke subgroups may suggest that changes in the retinal nerve fiber layer are more sensitive for subtype identification than changes in the retinal microvascular structure. All in all, the alterations in retinal structure and microvasculature may further elucidate the role of the neurovascular unit in ischemic stroke, suggesting that the combination of these two indicators could be used for subtype identification to guide prognosis and establish a risk prediction model.
Introduction
Stroke is the most common cause of serious disability in adults, and China bears the biggest burden globally [1]. As therapeutic options are limited, effective preventive strategies for early diagnosis are needed. The underlying subclinical pathologic process begins much earlier than the onset of clinical stroke, while current neuroimaging technologies may not be capable of directly observing such subtle subclinical changes because of limited resolution. Besides, current predictions of stroke are difficult to quantify. Therefore, there is an urgent need for additional surrogate techniques to detect these subtle changes in vivo. In addition to the importance of finding indicators to establish a model for stroke risk prediction and prognosis assessment, the identification of subtypes of ischemic stroke is vital for guiding clinical treatment and management. However, the current international classification, the Trial of Org 10172 in Acute Stroke Treatment (TOAST) [2], requires many auxiliary examinations, which are expensive and time-consuming. These shortcomings, together with possible examination contraindications, make it challenging for patients to complete the examinations in time when they are admitted to the hospital, limiting their early guiding role in clinical treatment. Therefore, finding an early, sensitive, and effective method is of great importance in disease prediction, treatment, and prognosis assessment. Because the retinal and cerebral vessels share similar anatomic, embryological, and physiological characteristics, the retina provides a unique "window" to assess the cerebral microvasculature and neurons in vivo noninvasively.
Previous studies based on fundus photography have revealed an independent correlation between retinal vascular parameters and stroke [3][4][5][6]. Additionally, several studies also detected that vascular changes vary according to stroke subtype, suggesting subtype-specific cerebral microvasculopathy [7,8]. Inconsistently, another study showed that vascular changes were similar between stroke subtypes [9]. The reason for the discordant results may be that fundus photography is only a planar picture, which reflects only the large blood vessels of the retina without microvascular or quantitative retinal structural parameters. With the advancement of the technique, optical coherence tomography angiography (OCTA) can reflect finer retinal capillary plexuses and choriocapillaris changes by generating three-dimensional images based on comparisons of the motion of circulating blood cells. Consequently, we can observe retinal microvascular changes in stroke patients. The concept of the neurovascular unit (NVU), which is composed of endothelial cells, neurons, astrocytes, and pericytes, was proposed in 2003 [10]. The NVU provides new insights into the pathogenesis and the diagnostic and treatment strategies of stroke [11,12]. Spectral-domain optical coherence tomography (SD-OCT) with high-resolution retinal imaging can provide cross-sectional images of biologic structures and quantify the thickness of each retinal layer. One previous study observed that the transneuronal retrograde degeneration (TRD) of retinal ganglion cells (RGCs) assessed by SD-OCT is associated with cerebral infarction [13]. Most of the studies, which included previous stroke patients with an increased risk of confounding factors, concentrated on the association between large retinal vessels or neural structure changes and stroke separately [3,13]. So far, there have been no in vivo studies on simultaneous microvascular and neural structures in stroke subjects.
In this study, we aimed to identify retinal microvascular and microstructural changes across subtypes of initial acute ischemic stroke.
Study Population.
In this study, a total of 85 patients with initial ischemic stroke within 14 days of the acute period were prospectively recruited from the neurology unit at the Second Affiliated Hospital & Yuying Children's Hospital of Wenzhou Medical University from Jan 2017 to December 2018. One neurologist (Zhao Han) assessed stroke severity with the National Institutes of Health Stroke Scale (NIHSS) [14] and classified large-artery atherosclerosis (LAA) and small-artery occlusion (lacunar) stroke (SAA) according to a modified TOAST classification [2]. Besides, 65 age- and gender-matched controls with no self-reported history of stroke, transient ischemic attack, or ophthalmic disease were enrolled consecutively from the relatives of patients or working staff at the Eye Hospital or the Second Affiliated Hospital & Yuying Children's Hospital of Wenzhou Medical University between Jan 2017 and Aug 2019. Considering the effect of the stroke site, parameters of ipsilateral eyes were selected for analysis. Additionally, random eyes were selected in nonunilateral stroke and control subjects. Written informed consent was obtained from patients or their next of kin, and the project was approved by the ethics committee of the Eye Hospital of Wenzhou Medical University.
Assessment of Cardiovascular Risk Factors.
Patients completed detailed questionnaires covering history of hypertension, diabetes mellitus, hypercholesterolemia, ischemic heart disease, cigarette smoking status, and medication use. All patients underwent the usual examinations for stroke, including brain imaging and fasting blood samples for glycosylated hemoglobin A1C (HbA1C), total cholesterol (TC), total triglycerides (TG), homocysteine (HCY), creatinine (Cr), high-density lipoprotein (HDL-C), and low-density lipoprotein (LDL-C), as well as body mass index (BMI). Besides, as part of clinical care for stroke, blood pressure was measured three times after participants had been seated for at least 10 minutes at the same sitting.
The mean of the three blood pressure measurements was taken as the final result. The mean arterial pressure (MAP) is equal to one-third of systolic blood pressure (SBP) plus two-thirds of diastolic blood pressure (DBP).
Hypertension was diagnosed as SBP ≥140 mm Hg or DBP ≥90 mm Hg at examination, a self-reported history of physician-diagnosed hypertension, or the use of antihypertensive medication. Diabetes mellitus was defined as fasting blood glucose ≥7.0 mmol/L and/or random blood glucose ≥11.1 mmol/L, hemoglobin A1C ≥7%, a self-reported history of physician-diagnosed diabetes mellitus, or the use of antihyperglycemic medication. Hypercholesterolemia was defined as fasting total cholesterol ≥5.2 mmol/L, a self-reported history of physician-diagnosed hypercholesterolemia, or the use of antilipemic medication. Current smokers were defined as people who currently smoke or who quit smoking less than one year before the examination.
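As a concrete illustration, the blood-pressure definitions above translate directly into code. The following is a minimal Python sketch (the function names are ours, not from the study):

```python
def mean_arterial_pressure(sbp, dbp):
    """MAP = one-third systolic plus two-thirds diastolic, as defined above."""
    return sbp / 3.0 + 2.0 * dbp / 3.0

def is_hypertensive(sbp, dbp, diagnosed=False, on_medication=False):
    """Hypertension: SBP >= 140 mmHg or DBP >= 90 mmHg at examination,
    or a physician diagnosis, or antihypertensive medication use."""
    return sbp >= 140 or dbp >= 90 or diagnosed or on_medication

# A participant averaging 120/80 mmHg over the three seated readings:
map_120_80 = mean_arterial_pressure(120, 80)  # about 93.3 mmHg, normotensive
```

Note that the MAP weighting reflects the fact that the heart spends roughly two-thirds of the cardiac cycle in diastole.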
Assessment of Ophthalmic Parameters and Microstructure of the Retina
All the subjects had detailed ophthalmologic examinations performed by two ophthalmologists (Ying Zhang and Ce Shi), including slit-lamp biomicroscopy, refraction diopter, best-corrected visual acuity (BCVA), and noncontact intraocular pressure (IOP). All patients were imaged by an OCT system (Optovue RTVue-XR Avanti; Optovue, Inc., Fremont, CA, USA) to obtain the OCTA images. Refraction data were converted to spherical equivalents (SEs), calculated as the spherical dioptric power plus one-half of the cylindrical dioptric power. The exclusion criteria were as follows: patients with contraindications to magnetic resonance imaging, hemorrhagic stroke, or recurrent stroke; those who were unable to complete the eye examinations; and those with a spherical equivalent (SE) over ±5.00 D, IOP >21 mm Hg, or previous ophthalmologic diseases (such as cataract, glaucoma, high myopia, and retinal diseases). Other exclusion criteria were systemic diseases that could affect the ocular structures, such as uncontrolled hypertension/diabetes and neurological diseases such as Parkinson's disease and multiple sclerosis.
MRI Analysis.
All patients underwent 3.0-T MRI (Signa HDxt, GE Healthcare), which included T1-weighted and T2-weighted imaging, diffusion weighted imaging (DWI), and fluid attenuated inversion recovery (FLAIR). The slice thickness was 5 mm with an interslice gap of 1 mm. A high signal on the DWI sequence of MRI indicates the presence of acute cerebral infarction. In addition, the size and the location of the lesion are conducive to the classification of stroke. In the control group, MRI was also used for homogeneity of management. For ethical reasons, not all controls were willing to undergo fasting blood tests and magnetic resonance imaging. Therefore, these two indicators are not shown in Table 1.
OCT and OCTA Acquisitions.
All subjects remained seated under the same conditions, and examinations were performed by an expert examiner. The OCTA system, which employs the split-spectrum amplitude-decorrelation angiography (SSADA) algorithm, operated at a rate of 70,000 A-scans per second with a scan area of 3 × 3 mm², and the results were obtained by orthogonal registration and merging of two consecutive B-scans. The size of the exported OCT images was 304 × 304 pixels. OCTA combines orthogonal fast-scan directions to correct motion artifacts based on the DualTrac Motion Correction technology [15]. A good set of scans with a signal strength index (SSI) over 40 was selected for further analysis.
Retinal Layer Thickness Analysis on Spectral-Domain Optical Coherence Tomography
Retinal thickness was imaged by the RTVue XR Avanti SD-OCT system (Optovue, Inc., Fremont, California, USA). Besides, the average, superior (S), temporal (T), inferior (I), and nasal (N) quadrants of peripapillary retinal nerve fiber layer (pRNFL) thickness were obtained. The ganglion cell complex (GCC) provides inner retinal thickness values from the internal limiting membrane (ILM) to the inner plexiform layer (IPL), shown as average, superior, and inferior regions (Figure 1(f)). Vessel density (VD) is defined as the percentage of area occupied by OCTA-detected vasculature. The software sets the superficial capillary plexuses (SCP) from 3 μm below the ILM to 15 μm below the IPL. The deep capillary plexuses (DCP) were set from 15 to 70 μm below the IPL (Figures 1(b) and 1(c)). In addition, the parafovea vessel density, defined as the area of the annular circle with a diameter of 3 mm excluding the fovea zone (diameter = 1 mm), was divided automatically into whole and superior (S), temporal (T), inferior (I), and nasal (N) quadrants. Similarly, the 5 sectors of radial peripapillary capillary (RPC) vessel density were analyzed. The boundary of RPC ranges from the ILM to the nerve fiber layer.
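Under the definition above, vessel density is simply the foreground fraction of the binarized angiogram. A hypothetical Python/NumPy sketch (the 304 × 304 px shape follows the exported image size described earlier; the toy mask is ours):

```python
import numpy as np

def vessel_density(binary_mask):
    """Vessel density (VD): percentage of pixels occupied by detected vasculature."""
    return 100.0 * np.count_nonzero(binary_mask) / binary_mask.size

# Toy example on a 304 x 304 en-face image: every 4th row marked as "vessel"
mask = np.zeros((304, 304), dtype=bool)
mask[::4, :] = True
vd = vessel_density(mask)  # 25.0 (%)
```

In practice the mask would come from the instrument's binarization of the en-face OCTA slab, not from a synthetic pattern as here.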
Capillary Vessel Density and Fractal Dimension Analysis on Optical Coherence Tomography Angiography.
To quantify the complexity of the branching pattern and density of the retinal capillary network in OCTA images, the automated fractal analysis system was employed, correcting the image magnification based on the axial length [16,17]. Briefly, the OCTA images in PNG format were imported into the custom automated algorithm software published previously [18]. Then, the grayscale of the two-dimensional OCTA images was first extended by bicubic interpolation to 1024 × 1024 pixels so as to improve the image details. The binary images of vessels were created by the algorithm. Subsequently, one binary image containing only large arteries and the other binary image containing both large and small vessels were subtracted to obtain the final binary image. Based on the final image of white-pixelated vasculature, a skeletonized image was created by detecting the central axis of each capillary. After the image processing, both the superficial and deep retinal capillary complexities were calculated based on the skeletonized images [19,20]. The quantitative measure of complexity, the D_box value, was obtained with the fractal analysis software (Benoit, Trusoft Benoit Fractal Analysis Toolbox; Trusoft International, Inc., St. Petersburg, FL). Both the general and local fractal dimensions were used to describe the complexity of the capillary network. First, after excluding the fovea avascular zone (FAZ) within a diameter of 0.6 mm, the fractal dimension (FD) was automatically calculated for the total annular zone (TAZ) within a 2.5 mm diameter and for the 4 parafoveal quadrant sectors (superior (S), temporal (T), inferior (I), and nasal (N)) and 6 concentric isometric annular rings (Figures 1(d) and 1(e)). The methods above were implemented using MATLAB v 7.10 (MathWorks, Inc., Natick, Massachusetts, USA).
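The D_box estimate itself comes from box counting: cover the skeletonized binary image with boxes of decreasing size, count how many boxes contain at least one foreground pixel, and take the slope of log N against log(1/s). The following is a simplified, self-contained sketch of that idea (our own illustration, not the Benoit implementation used in the study):

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension D_box of a 2-D binary image."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        # trim so the image tiles exactly into s x s boxes
        tile = binary[:h - h % s, :w - w % s]
        boxes = tile.reshape(tile.shape[0] // s, s, tile.shape[1] // s, s)
        # a box is "occupied" if any pixel inside it is foreground
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # slope of log(count) vs log(1/size) is the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square is ~2-dimensional, a straight line ~1-dimensional
filled = np.ones((256, 256), dtype=bool)
line = np.zeros((256, 256), dtype=bool)
line[128, :] = True
```

A skeletonized capillary network yields values between 1 and 2; lower D_box values correspond to a less complex branching pattern.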
Statistical Analysis.
All statistical analyses were conducted using SPSS software (version 24.0; SPSS, Inc., Chicago, IL, USA). The data were expressed as the mean ± standard deviation (SD). One-way analysis of variance (ANOVA) was used to test the differences among patients with large-artery stroke, patients with lacunar stroke, and control subjects, and Bonferroni correction was used for pairwise comparisons. The differences in gender and medical history were determined by the χ² test.
Results
A total of 85 patients were included in the present study. Among them, 16 patients were excluded due to poor image quality of OCTA scans. The remaining 28 patients with LAA, 41 patients with SAA, and 65 age- and gender-matched control subjects were further analyzed. The demographic and clinical characteristics are summarized in Table 1. Normally distributed data are represented by mean ± standard deviation, while nonnormally distributed data are represented by median and interquartile range. Differences in age, sex, BMI, SE, IOP, and DBP, together with the prevalence of hyperlipidemia, current smoking, and previous ischemic heart disease, were not significant among the three groups. Patients with SAA were more likely to have hypertension than the other two groups (P < 0.001). Besides, their values of SBP and MAP were significantly higher than those of the other two groups (P < 0.001). LAA patients were more likely to have diabetes than the controls (P = 0.007). However, there was no statistical difference in demographic data between the two stroke subgroups. Not all subjects completed all tests. The number of eyes that completed each examination is shown in Table 2.
Retinal Microstructural Thicknesses.
In terms of the quadrants, the superior pRNFL thickness was significantly thinner in the eyes of patients than in the eyes of the control group (P = 0.01, Table 3, Figure 2(a)). In the eyes of LAA, the pRNFL thickness was significantly thinner in the superior quadrant compared to the eyes of SAA (P = 0.034) and control (P = 0.003). No significant superior pRNFL thinning was observed in SAA compared to control (P = 0.438). Additionally, the thicknesses of GCC were not significantly different among the three groups (all P > 0.05, Table 3, Figure 2(b)).
Vessel Density around ONH and Macula.
The vessel density around the macula in the deep retinal capillary layer was significantly reduced in patients with LAA or SAA within all regions, except for the inferior region (P < 0.05, Table 4, Figure 3(b)). The significant differences mostly existed between the stroke groups and the control group (P < 0.05), and no significant difference was found between the two stroke subgroups, although the LAA group tended to have a lower vessel density. For ONH capillary density and the superficial layer, no significant difference was found in any region (P > 0.05, Table 4, Figure 3(a)).
Fractal Dimension around Macula.
Differences in the fractal dimension were only statistically significant in the C2 and C4 regions of the deep retina (P < 0.05, Table 5, Figure 4(b)) between patients with LAA and the controls. Compared with the control group, the fractal dimension of most regions tended to be lower in the stroke groups (Figures 4(a) and 4(b)).
Discussion
Assuming that the retinal vasculature mirrors the cerebral vasculature, OCTA enables noninvasive imaging of retinal capillaries in multiple layers invisible on fundus images. Based on fundus photography, retinal abnormalities, including arteriovenous nicking, generalized and localized arteriolar thinning, a lower arteriolar/venular diameter ratio, and geometric parameters, have been demonstrated to be significantly related to the incidence of stroke [5,9,21,22]. The previous studies using fundus photos could not qualitatively detect the subtle changes at the capillary level, while OCTA provides the opportunity to investigate the retinal capillary microcirculation at micrometer resolution [23].
In terms of vessel density around the macula, our findings demonstrate that the changes in vessel density between stroke patients and control subjects are more obvious in the deep layer than in the superficial layer. Our results point to a preferential involvement of the deep layer in patients with stroke, which may be attributed to the fact that the deep network consists of a dense and complex system of smaller vessels [24]. It can be speculated that deep retinal capillaries are more susceptible to ischemia and hypoxia. As per our expectation, patients with SAA displayed much smaller changes than patients with LAA. However, there existed no significant difference between the LAA and SAA groups. This finding suggests that the retinal vasculopathy may result from downstream effects of large-artery pathology in the cerebral circulation. In addition, we found that the fractal dimension was not helpful in identifying stroke subtypes and was significantly lower in the LAA group than in controls only in individual regions of the deep retina. Our results were consistent with those of others [9], who found that decreased FD was correlated with stroke, suggesting a loss of complexity. However, previous results of fractal dimension based on fundus photographs remain controversial. Some considered that the lacunar stroke subtype was associated with decreased retinal FD [8], while others demonstrated that lacunar stroke was positively associated with higher FD [7]. We found no significant difference between stroke subgroups, which may indicate that the FD is not applicable for differentiating subtypes among patients who have already had a stroke.
Regarding peripapillary vessel density, there was no significance in RPC among the groups. The finding could be related to the anatomical differences between the different areas. The parafoveal superficial capillary plexus originates largely from the retinal circulation, whereas the RPC receives additional blood supply from the choroid [25]. Larger vascular channels around the optic disc may have masked subtle changes in the capillary network. Additionally, the RPC contains multiple layers of capillaries that overlap on en-face OCTA images, lacking the ability to detect tiny vascular losses.
Figure 3: Comparisons of the microvascular density on OCTA images in whole and four quadrant sectors of the superficial capillary plexuses (SCP) (a) and deep capillary plexuses (DCP) (b). *P < 0.05, the density in the SAA group was lower than that in the control group; #P < 0.05, the density in the LAA group was lower than that in the control group.
In addition to retinal capillary changes, our study observed that the pRNFL thickness was statistically reduced in the superior quadrant of stroke patients, and there was a statistical difference between the stroke subgroups, which may indicate different patterns of nerve damage in the two stroke subtypes. Additionally, some recent research also observed that both acute and previous stroke were significantly associated with retinal nerve fiber layer defects (RNFLDs) [13]. These findings were also in accordance with those of others [26], who reported associations between the severity and laterality of RNFLD and the laterality of hemispheric damage as well as the arterial territory of the infarct. They found RNFLDs were significant in the temporal sector of the ipsilateral side and in the nasal sector of the contralateral side of the stroke. Furthermore, they also confirmed that the degree of the transneuronal retrograde degeneration (TRD) was time-dependent. However, we found that the significant RNFLDs only existed in the superior sector in our study, although the ipsilateral sides were included.
It is well known that over 30 morphological types of RGCs compose the structure of the retina. The midget RGCs (80%), with wide retinal dendritic fields located in the peripheral retina, mainly project to the magnocellular layers of the lateral geniculate body. The parasol RGCs (5-15%) predominantly present in the papillomacular bundle (macula) and project to the suprachiasmatic nucleus of the hypothalamus. Axons from midget RGCs enter the superior, inferior, and nasal sectors of the optic nerve, and parasol RGCs gather their axons and enter the temporal sector of the optic nerve [27,28]. Different degenerative patterns of the ganglion cells and nerve fibers have been demonstrated in several neurodegenerative diseases [29,30]. Recent studies have described that the parasol RGCs are more involved in the pathogenesis of Alzheimer's disease [31][32][33]. However, in Parkinson syndrome and mitochondrial optic neuropathy, the midget RGCs are mainly involved [30,34,35]. Both the studies above and our study suggest that mean thickness measurement may not reflect the disease well and may reduce the diagnostic efficacy of the ocular biomarker. We speculate that detailed analysis of focal nerve structure alterations may be developed as an ocular imaging biomarker for monitoring disease progression and evaluating the prognosis of these diseases.
Nevertheless, as this is the first study on ischemic stroke subtypes based on OCTA, further large samples are required to confirm the generalizability of these findings. The discrepancies existing between different studies could be ascribed to differences in the course and severity of disease as well as various OCT devices and study designs.
Besides, a previous animal study also revealed that the TRD of retinal ganglion cells occurred after shrinkage of the optic tract, and degeneration of the RGCs progressed slowly over the following years [36]. Thus, our negative results in GCC thickness might be related to the fact that patients were tested within two weeks after stroke.
To conclude, our study provides a side view that the neurovascular unit is affected in ischemic stroke, and more severely in LAA patients. The components of neurovascular units are interrelated in the microenvironment. Studies have shown that signal transmission between neurons, astrocytes, and microvascular endothelial cells regulates the brain microenvironment [29,37]. Briefly, with the NVU being a structural and functional whole, the relationships between its members change in the state of illness. The mechanism of cerebral ischemia is complex and involves multiple cascading reactions.
Therefore, monitoring the NVU as a whole and improving its function help maintain brain cell function and make stroke treatment more effective. The retina provides simultaneous visualization of the neurovascular units, reflecting the changes of the brain, which has important implications for disease surveillance.
In addition, we also acknowledge the limitations of this study. The cross-sectional design with a small sample limits our ability to identify the different pathogenesis of the capillaries and microstructures in stroke. The outer structure of the retina, which is mainly supplied by the choroid, has not been analyzed. Due to the practical difficulty of recruiting and examining the patients, milder patients may have been recruited, whose vascular lesions may be too mild to differentiate the two subtypes. Despite these weaknesses, there are several strengths in our study. We recruited different ischemic stroke subtypes with strict inclusion and exclusion criteria. Moreover, to minimize confounding, patients with a history of previous stroke were excluded. We completed the detailed ophthalmic and clinical examinations within two weeks after the onset of the stroke and maintained the blinding of retinal and brain images to each other. Finally, this is the first attempt to simultaneously observe retinal microvascular and neurological changes in different stroke subtypes in vivo.
Moreover, further longitudinal studies with larger sample sizes are required to characterize differences in retinal microstructure and capillaries. It remains to be seen whether retinal signs are indicative of cerebrovascular risk beyond conventional risk indicators, and whether retinal imaging will serve as a surrogate or accessory examination in clinical settings and ultimately become part of routine stroke risk or treatment evaluation.
Data Availability
All data generated or analyzed during this study are included within the published article.
Ethical Approval
This study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee Board of Wenzhou Medical University (2019-027-K-26).
Consent
All subjects, recruited voluntarily, were informed about the purposes, methods, and potential risks of the study. A signed consent form was obtained from each patient.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
Ying Zhang, Fan Lu, and Meixiao Shen designed the study. Ying Zhang, Fan Lu, and Ce Shi conducted the study. Ying Zhang, Xianda Lin, Weicheng Wang, and Shenghai Huang collected the data. Shenghai Huang wrote the image processing algorithm for data analysis. Ying Zhang and Ce Shi analyzed and interpreted the data. Fan Lu and Meixiao Shen were the main contributors to manuscript discussion. Meixiao Shen, Weicheng Wang, and Zhao Han revised the manuscript. All authors read and approved the final manuscript.
Governance of change for sustainability: experience from Central and Eastern Europe
Two distinct forms of governance, state institutions and markets, have incentivized and organized the socioeconomic changes, as well as managed the social and environmental problems, that have accompanied the transition process. Over the last 25 years, most central and eastern European states have shifted from more authoritarian or planned governance systems to more market-driven ones. But they have to recognize that markets work best where regulations are clear and well enforced, and that achieving environmentally optimal, or at least desirable, outcomes requires a smart mix of regulations and market incentives. Strong and efficient institutions are essential to delivering the public interest and the environmental interest. Good regulation is needed not only to correct market failures, but to make markets work and to induce innovation. As a Slovenian, I witnessed the dissolution of the former Yugoslav Republic, and soon after I headed the team negotiating my country's accession to the European Union in 2004. This process of adopting a whole set of new rules and regulations covering everything from agriculture to construction, from consumer rights to the environment, gave me a good understanding of how a common rules-based system, backed up by institutional capacity and a sound knowledge base, can provide effective environmental protection and management. My subsequent roles as European Commissioner for Science and Research, then for Environment, led me to appreciate that Europe remains extremely diverse across its 28 EU Member States, each with its own economic, social, and political governance systems. But I also saw that the EU's supranational nature has enabled agreement on a body of legislation that protects citizens of all Member States in dealing with the many shared environmental pressures that do not respect national borders. About 80% of all legislation that Member States have to implement in relation to the environment originates at the European level.
We need to have the legislation to stop extravagant and bad behavior, and European legislation, developed over the last 50 years, has ensured that environmental damage and pollution can be punished. But punishment always comes too late after the damage is done. It is far better to encourage good behavior. It is a bit like with our health; it is better not to fall ill than to cure the disease. Being healthy is what we really want, but, for those who like to measure everything only in economic terms, the very solid argument is that it is also much cheaper than treating illness. To avoid damage to the environment, we need to change the way we produce and consume, by creating the right incentives and market mechanisms. And we need to manage natural resources and ecosystems in such ways that ensure that they will be there for future generations to enjoy. Legislation can help ensure better management of resources, with the Habitats and Birds Directives being an excellent example, now ensuring proper management of nature protection areas that cover more than 18% of the European Union. But other tools are needed, particularly to bring about changes in culture and attitudes toward working with nature, not against it.
Legislation can protect against bad behavior, but there is still much to be done to ensure its proper implementation, whether regarding waste management, air quality, or water. Many central and eastern European countries, for example, still have serious problems with efficient waste management, resulting in high levels of landfilling and low recycling rates. Waste is still considered a burden, not a source of valuable resources. Too many do not see proper waste management as an opportunity for job creation, the creation of jobs that are difficult to delocalize.
Another characteristic of central and eastern European countries is the wealth of beautiful and well-preserved nature. This treasure is sometimes seen as an obstacle to economic development. This is not specific to those countries, but what is particular in their case is that they still have a possibility to protect this wealth, while in many more industrialized countries the damage has already been done, often irreparably. Proper implementation of nature protection laws requires institutional capacity, but it is a brave and far-sighted politician that argues for more public resources to enforce laws that are often perceived as a brake on economic activity, particularly in a context of economic hardship for many. Civil society organizations provide a vital role here.
Global Challenges are Increasing our Responsibility
For a European economic community with no trade barriers, a shared basis of environmental legislation has been indispensable for many years. But today at the global level, population growth and the three billion people who will move out of poverty into middle-class lifestyles in the next generation are vastly increasing the scale of humanity's burden on our planet. As our production systems and supply chains become more globalized, we are becoming both more interdependent and more interconnected. Market forces are helping millions out of poverty, but they alone will not lead to sustainable and socially beneficial outcomes. We need to think more as a global community about the institutional governance systems that can ensure global sustainability. To apply environmentally and socially optimal approaches as a global community will require a robust and reputable knowledge base to identify the megatrends, drivers, and challenges on which we can base the right policies.
Providing this independent, science-based, and robust knowledge base in the area of resources has been the role of the International Resource Panel (IRP) of the United Nations Environment Programme since 2007. I have had the honor of cochairing the IRP since late 2014. In its mission to develop an understanding of how to decouple economic growth from environmental degradation, the IRP has already produced valuable and respected reports. In the near future, the panel will produce further work, including reports on the resource dimensions of international trade; the benefits, risks, and trade-offs of low-carbon technologies; landscape productivity and food systems; and ecosystem approaches for sustainable management of natural resources.
Our challenge is to make sure that this knowledge is policy-relevant and policy-applied. Our challenge is to prepare and organize our society for change that takes into account the new reality we are facing. We need proper governance, we need better implementation, and we need to show that we take our increased responsibility seriously, because we share the same planet and because there is no more time to lose: we humans are for the first time seriously influencing the health and sustainability of planet Earth, the only home we have. Coming from a region which experienced some fundamental transitions in the past helps one understand the necessity and importance of change toward good governance.
Modulation of Oxidative Stress and Hemostasis by Flavonoids from Lentil Aerial Parts
While specific metabolites of lentil (Lens culinaris L.) seeds and their biological activity have been well described, other organs of this plant have attracted little scientific attention. In recent years, green parts of lentils have been shown to contain diverse acylated flavonoids. This work presents the results of the research on the effect of the crude extract, the phenolic fraction, and seven flavonoids obtained from aerial parts of lentils on oxidative damage induced by H2O2/Fe to lipid and protein constituents of human plasma. Another goal was to determine their effect on hemostasis parameters of human plasma in vitro. Most of the purified lentil flavonoids had antioxidant and anticoagulant properties. The crude extract and the phenolic fraction of lentil aerial parts showed antioxidant activity, only at the highest tested concentration (50 μg/mL). Our results indicate that aerial parts of lentils may be recommended as a source of bioactive substances.
Introduction
Nowadays, it is widely known that a diet rich in vegetables, fruit and whole-meal cereal products helps in the prevention of age-related diseases, especially cancer, cardiovascular and neurodegenerative diseases. Recently, Tang et al. [1] have described that various vegetables (for example, celery, rape, carrot, lettuce and broccoli) have cardioprotective actions, including modifying lipid metabolism, lowering blood pressure and antioxidant properties. Other vegetables, such as onion and beetroot, are also important dietary components for the prevention and treatment of cardiovascular diseases owing to their antiaggregatory potential [2]. A broad body of evidence indicates that these health-promoting properties may be attributed to the presence of different plant-specific metabolites, including phenolic compounds [3,4]. For this reason, specific metabolites of crop plants and their biologic activities have been extensively investigated. Lentil (Lens culinaris L.) seeds are known as a nutrient-rich and healthy food and also find use in traditional medicine in different areas of the world. Lentils have been shown to have anticarcinogenic, hypoglycemic, hypocholesterolemic and blood pressure-lowering properties. While phenolic compounds and other specific metabolites of lentil seeds have been very well characterized, little interest has been devoted to the green parts of the plant. However, leaves and stems of lentil were shown to contain diverse acylated glycosides of quercetin and kaempferol [5]. In the current experiment, we focused on the effect of the crude extract, the phenolic fraction and various flavonoids obtained from lentil aerial parts on human plasma, an important element of hemostasis. Plasma oxidative stress may modulate hemostasis and lead to the development of pathological processes of the cardiovascular system [6]. Therefore, the objective was to investigate the antioxidant activity of the crude extract, the phenolic fraction and various flavonoids (compounds 1-7) obtained from lentil aerial parts.
Plant Material
Seeds of lentil (Lens culinaris Medik.) cultivar Tina were obtained from the Department of Agrotechnology and Crop Management, University of Warmia and Mazury, Olsztyn, Poland. Lentil plants were grown in the experimental field of the Institute of Soil Science and Plant Cultivation in Puławy, Poland, and harvested during the flowering period. The collected aerial parts of lentil were lyophilized (Gamma 2-16 LSC, Christ, Osterode am Harz, Germany), milled in a laboratory mill, and defatted with chloroform in a Soxhlet extractor (Quickfit, Stone, UK).
Preparation of Extract and Phenolic Fraction from Lentil Aerial Parts
The extract used in this work was prepared according to the earlier described procedure [7]. A 200 g portion of the defatted plant material was extracted with boiling 80% methanol (v/v; 3 × 2.0 L, for 1 h) under reflux. The obtained extracts were filtered through a sintered glass funnel, rotary evaporated (Heidolph, Schwabach, Germany), and lyophilized. The extraction yield was 35.94 g. The phenolic fraction of the lentil extract was prepared by solid phase extraction (SPE). A 7.03 g portion of the extract was shaken with 1% water-methanol containing 0.1% formic acid and centrifuged (6654× g, 18 °C, 10 min). The supernatant was loaded onto a C18 column (34 × 110 mm; Cosmosil 140C18-Prep, 140 µm). The column was washed with the same solution, and the bound phenolic compounds were subsequently eluted with a 60% methanol solution to yield 976 mg of the phenolic fraction.
Isolation of Flavonoids from Lentil Aerial Parts
Lentil flavonoids were purified from the above-described crude extract by reverse-phase chromatography. The applied isolation procedure included vacuum liquid chromatography, low-pressure liquid chromatography, and semi-preparative HPLC. Structures of the purified compounds were determined by 1D and 2D NMR spectroscopy. A detailed description of the isolation and structure elucidation of flavonoids from the aerial parts of lentils can be found in earlier publications [7,8].
Stock Solutions
Stock solutions of the crude extract, the phenolic fraction, flavonoids from green parts of lentil (compounds 1-7), quercetin and kaempferol, used in tests of biological activity, were made in 50% DMSO. The final concentration of DMSO in samples was lower than 0.05%, and its effects were determined in all experiments.
Quantification of Flavonoids in the Tested Crude Extract and Phenolic Fraction of the Lentil Aerial Parts
The content of compounds 1-7 in the lentil extract and the phenolic fraction was determined with an ultra-high-performance liquid chromatography-photodiode array (UHPLC-PDA) system, using an ACQUITY UPLC chromatographic system equipped with photodiode array (PDA) and triple quadrupole (TQD) mass spectrometer (MS) detectors (Waters Corp., Milford, MA, USA). Samples were separated on an ACQUITY BEH C18 column (2.1 × 100 mm, 1.7 µm; Waters) at 40 °C; the flow rate was 0.400 mL min−1, and the injection volume was 2.
Human Plasma Isolation
Human blood and plasma were obtained from six regular donors (non-smokers of both sexes) of a blood bank (Lodz, Poland) and a Medical Center (Lodz, Poland). Blood was collected into CPDA solution (citrate/phosphate/dextrose/adenine; 8.5:1; v/v) or CPD solution (citrate/phosphate/dextrose; 9:1; v/v). Donors had not taken any medication, alcohol or antioxidant supplementation for a week before donating blood. Analysis of the blood samples was performed according to the guidelines of the Helsinki Declaration for Human Research, with approval of the Committee on the Ethics of Research in Human Experimentation of the University of Lodz (resolution No. 7/KBBN-UŁ/III/2018). For determination of hemostatic parameters, plasma was incubated (30 min, at 37 °C) with:

1. the extract from the lentil aerial parts at final concentrations of 1-50 µg/mL;
2. the phenolic fraction from lentil aerial parts at final concentrations of 1-50 µg/mL.

The protein concentration, determined by measuring absorbance at 280 nm in the tested samples, was calculated according to the procedure of Whitaker et al. [9].
Lipid Peroxidation Measurement
To the test samples after completed incubation, 500 µL of TCA was added, followed by 500 µL of TBA, and the mixture was vortexed for 1 min. Two or three holes were made in the Eppendorf caps, which were then heated at 100 °C for 10 min. After incubation, samples were cooled for 15 min at 4 °C and centrifuged at 33,540× g at 18 °C for 15 min. The absorbance was measured at 535 nm using the SPECTROstar nano microplate reader (BMG LABTECH, Ortenberg, Germany). The TBARS (thiobarbituric acid reactive substances) concentration was calculated using the molar extinction coefficient (ε = 156,000 M−1 cm−1). More details on the method are described in other papers [10,11].
Carbonyl Group Measurement
Carbonyl content was measured by absorbance at 375 nm (SPECTROstar nano microplate reader, BMG LABTECH, Ortenberg, Germany). The carbonyl group concentration was calculated using the molar extinction coefficient (ε = 22,000 M−1 cm−1), and the level of carbonyl groups was expressed as nmol carbonyl groups/mg of plasma protein. More details on the method are described in other papers [12-14].
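Both the TBARS and the carbonyl readouts above are Beer-Lambert conversions of a measured absorbance into a concentration using the stated molar extinction coefficients. A minimal sketch of that arithmetic, where the 1 cm optical path length is an assumption (plate-reader path lengths depend on fill volume) and the example absorbances are illustrative, not measured values:

```python
def absorbance_to_molar(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l) in mol/L."""
    return absorbance / (epsilon * path_cm)

# TBARS: epsilon = 156,000 M^-1 cm^-1 (value given in the text)
tbars_molar = absorbance_to_molar(0.156, 156_000)   # 1e-6 mol/L, i.e. 1 uM
# Carbonyl groups: epsilon = 22,000 M^-1 cm^-1 (value given in the text)
carbonyl_molar = absorbance_to_molar(0.22, 22_000)  # 1e-5 mol/L
```

Expressing carbonyls as nmol/mg of plasma protein then simply divides the molar amount in the sample by the separately measured protein content.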
Thiol Group Determination
After incubation, 20 µL of each test sample was transferred to a 96-well plate, followed by the addition of 20 µL of SDS, and mixed thoroughly. Then, 160 µL of 10 mM phosphate buffer (pH 8.0) was added to all samples and mixed thoroughly. The absorbance was measured at λ = 412 nm (A0), and 16.6 µL of DTNB was added. The plate was incubated for 60 min at 37 °C. After incubation, the absorbance was measured again at λ = 412 nm (A1), and the difference A1 − A0 was calculated. The thiol group concentration was calculated using the molar extinction coefficient (ε = 13,600 M−1 cm−1), and the level of thiol groups was expressed as µmol thiol groups/mL of plasma. More details on the method are described in other papers [15-17].
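In the thiol assay, the DTNB-dependent signal is isolated as A1 − A0 before applying the Beer-Lambert conversion. A sketch of the calculation; the dilution factor here (total well volume over plasma volume, 216.6 µL / 20 µL, computed from the pipetted volumes above) is my reading of the protocol rather than a value stated in the text:

```python
def thiol_umol_per_ml(a0, a1, epsilon=13_600, dilution=216.6 / 20):
    """Ellman's assay: (A1 - A0) / epsilon gives the TNB concentration in the
    well in mol/L; multiplying by the dilution factor refers it back to the
    undiluted plasma, and mol/L * 1000 converts to umol per mL."""
    delta_a = a1 - a0
    molar_in_well = delta_a / epsilon           # mol/L in the well
    molar_in_plasma = molar_in_well * dilution  # mol/L in the original plasma
    return molar_in_plasma * 1000               # umol thiol groups / mL plasma
```

Under these assumptions, a DTNB-dependent absorbance change of 0.136, for example, corresponds to roughly 0.11 µmol of thiol groups per mL of plasma.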
Parameters of Hemostasis
The prothrombin time (PT), thrombin time (TT) and the activated partial thromboplastin time (APTT) were determined coagulometrically using an optic coagulation analyzer (model K-3002, Kselmed, Grudziadz, Poland), according to the method described by Malinowska et al. [17].
Data Analysis
Several tests were applied in the statistical analysis. Six replicates were used for each measurement in this study. To eliminate uncertain data, the Dixon Q-test was performed. All values in this study are expressed as mean ± SD. The results were first tested for normality with the Shapiro-Wilk test and for equality of variance with Levene's test. Statistical significance of differences among means was assessed by ANOVA (significance level p < 0.05), followed by Tukey's multiple comparisons test, or by the Kruskal-Wallis test.
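The Dixon Q-test used for outlier screening compares Q = gap/range for the most extreme replicate against a tabulated critical value. A minimal sketch for the six replicates used here; the 95% critical value (0.625 for n = 6) is taken from standard Q tables, not from the text, and the replicate values are invented for illustration:

```python
def dixon_q(values):
    """Q statistic for the most suspect extreme value: the gap between it and
    its nearest neighbour, divided by the full range of the data."""
    s = sorted(values)
    data_range = s[-1] - s[0]
    if data_range == 0:
        return 0.0
    q_low = (s[1] - s[0]) / data_range     # if the minimum is the suspect value
    q_high = (s[-1] - s[-2]) / data_range  # if the maximum is the suspect value
    return max(q_low, q_high)

Q_CRIT_N6_95 = 0.625  # tabulated critical value for n = 6, alpha = 0.05

replicates = [10.1, 10.2, 10.3, 10.2, 10.1, 12.0]
reject = dixon_q(replicates) > Q_CRIT_N6_95  # True: 12.0 would be discarded
```

Only after this screen would the remaining replicates feed into the normality, variance, and ANOVA tests described above.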
Quantitative Analysis of the Tested Extract and Phenolic Fraction of the Lentil Aerial Parts
Different glycosides of quercetin and kaempferol, most of them acylated with hydroxycinnamic acids, were the main phenolic constituents of both analyzed lentil preparations. Apart from compounds 1-7, the content of four other flavonoids (compounds 8-11) was additionally evaluated. Compound 2 was the dominant phenolic constituent of both lentil preparations, while compounds 3-5 and 8-11 were the other major phenolics (Table 1, Figure 2). The total content of all determined compounds was 80.8 ± 5.5 mg g−1 for the crude extract and 412.5 ± 6.5 mg g−1 for the phenolic fraction. Table 1. Content of the compounds 1-7 and other major flavonol glycosides (mg g−1; mean value ± SD) in the crude extract (CE) and the phenolic fraction (PF) of lentil aerial parts.
Effects on Oxidative Stress Biomarkers in Human Plasma In Vitro
Exposure of plasma to a strong chemical oxidant, H2O2/Fe, resulted in enhanced plasma lipid peroxidation and plasma protein carbonylation. Compound 1 (at all concentrations) had the strongest antioxidant activity; it reduced the H2O2/Fe-induced plasma lipid peroxidation by about 60%. This compound was more active than quercetin and its other derivatives (compounds 2-4 at 50 µg/mL). Three derivatives of kaempferol (compounds 5-7 at 50 µg/mL) were also more active than their aglycone (Figure 3A, Table 3). On the other hand, in this in vitro model, the phenolic fraction (at 5 and 50 µg/mL) inhibited the H2O2/Fe-induced protein carbonylation more strongly than the tested flavonoids (Figure 3B). The phenolic fraction from the lentil aerial parts (at the highest concentration, 50 µg/mL) reduced this process by about 50% (Figure 3B, Table 3). The tested derivatives of quercetin (compounds 1-4), like quercetin itself, had a similar effect on protein carbonylation induced by H2O2/Fe. The derivatives of kaempferol also reduced protein carbonylation (about 20% inhibition at 50 µg/mL) (Figure 3B, Table 3). Analysis of the effect of the tested flavonoids, the crude extract and the phenolic fraction from the lentil aerial parts on the level of protein thiols showed that all lentil flavonoids inhibited the H2O2/Fe-induced thiol oxidation more strongly than quercetin and kaempferol (Figure 3C). Quercetin and compound 1 (at 50 µg/mL) increased the H2O2/Fe-induced oxidation of protein thiols (Figure 3C). The phenolic fraction was not effective, while the antioxidant activity of the crude extract was observed only at the concentration of 50 µg/mL.
Effects on Hemostatic Parameters of Plasma
Analysis of the influence of the tested preparations on the coagulation properties of plasma showed that four derivatives of quercetin (compounds 1-4) and three derivatives of kaempferol (compounds 5-7) significantly prolonged the TT over the whole tested range of 1-50 µg/mL (Figure 4C). However, neither the investigated flavonoids nor the tested crude extract and phenolic fraction from lentil aerial parts changed the APTT or PT (Figure 4A,B, respectively). In addition, Table 3 compares the effects of the crude extract, the phenolic fraction and the flavonoids from lentil aerial parts with those of two commercial flavonoids (at the highest used concentration, 50 µg/mL) on the TT.
Discussion
The presence of phenolic compounds in fruits and vegetables is correlated with the beneficial effects of these food products on human health. For example, various biological actions of fruits (including berries) against diseases associated with oxidative stress have been attributed to their high phenolic antioxidant content, especially phenolic acids [2]. It is known that phenolic compounds are effective agents preventing damage related to oxidative stress, which plays a crucial role in the etiology and progression of various diseases, for example, cardiovascular diseases [1]. Besides, it has been demonstrated that flavonoids, including kaempferol and quercetin, have cardioprotective action [18].
These compounds consist of two phenyl rings, A and B, connected by a heterocyclic ring C. The chemical activity of these compounds depends on the number of hydroxyl groups [19,20]. Choi et al. [21] demonstrated the cardioprotective effect of kaempferol in in vivo and ex vivo experiments. Two animal models were used: male mice (Imprinting Control Region, outbred strain) and male rats (Sprague-Dawley). The in vivo results show that kaempferol protected against thrombosis development in thrombin- and collagen/epinephrine-induced acute thromboembolism models and in a FeCl3-induced carotid arterial thrombus model. The anticoagulant effect was further confirmed in an ex vivo experiment in the mouse model [21].
The antioxidant properties of lentil seed extracts are well documented. Their antioxidant activity was usually determined using DPPH (2,2-diphenyl-1-picrylhydrazyl radical), ABTS•, TEAC, FRAP (ferric reducing antioxidant power), and/or ORAC assays, but their influence on lipid peroxidation processes was also occasionally reported [22-27]. The antiradical activity of different flavonoids purified from the aerial parts of the lentil was determined using the DPPH• method [7]. Moreover, our earlier results indicate that quercetin and kaempferol derivatives isolated from aerial parts of the lentil modulate blood platelet function [8]. In the present work, for the first time, we characterized the influence of the crude extract and phenolic fraction from the aerial parts of lentils, as well as their constituent flavonoids, on H2O2/Fe-induced lipid peroxidation, protein carbonylation, and oxidation of protein thiols in human plasma in vitro.
Quercetin and kaempferol are among the most ubiquitous flavonoid aglycones. Our current and earlier reports showed that their glycosides were the main phenolics of lentil aerial parts [7]. Quercetin glycosides constituted about 73% of the total determined flavonoids. Among them, compounds 2-4 were dominant, constituting about 46% of the total flavonoids (Table 1). For this reason, these three flavonoids may be expected to have had the strongest influence on the biological activity of the crude extract and phenolic fraction of lentil. Conversely, the influence of compounds 1, 6, and 7 on the bioactivity of these preparations was most probably negligible due to their low content.
The majority of flavonol glycosides from lentil leaves and stems seem to be unique, not reported from any other plant. They share a common glycosylation pattern, and most of them are acylated with hydroxycinnamic acids (e.g., compounds 2-5 and 7-10). Moreover, they are 7-O-glucuronides, and this kind of flavonol glycoside apparently occurs rarely in plants. The phenolic compound profiles of lentil seeds and lentil aerial parts were completely different. The extract from the seeds of lentil cv. Tina contained significant amounts of a non-acylated flavonol glycoside (a kaempferol dihexoside-dideoxyhexoside) but seemed to be devoid of flavonol glucuronides, which were dominant in the extract from lentil leaves and stems [7]. The literature data indicate the presence of proanthocyanidins, catechin and epicatechin, phenolic acids and flavonol glycosides (but not flavonol glucuronides) in lentil seeds [23,24]. We did not determine the content of phenolic compounds in the extract from seeds of lentil cv. Tina; however, the total content of all determined phenolic compounds in the 80% acetone extract from seeds of green lentil (~1 mg g−1) was about 80 times lower than that in the currently characterized extract from the aerial parts of lentil [24].
The lentil extract and phenolic fraction inhibited plasma lipid peroxidation only at the highest tested concentration. Pure flavonol glycosides (compounds 1-7) showed distinctly stronger antioxidant activity (though usually not at the lowest applied concentration), while the flavonoid aglycones turned out to be surprisingly weak inhibitors of lipid peroxidation, especially quercetin, whose activity was comparable to those of the crude extract and phenolic fraction. It can be observed, especially at the highest tested concentration, that the inhibitory activity of the acylated flavonoids tended to increase with their hydrophobicity (compounds 2-4 and 5-7), which can be explained by better interactions of less polar compounds with plasma lipids. This pattern is disturbed by compound 1, a non-acylated quercetin glycoside, which showed the strongest inhibitory properties of all investigated compounds. This observation, as well as the low antioxidant activities of quercetin and kaempferol, is not easy to explain and may be partly attributed to the applied experimental conditions. Flavonoids are known as efficient free radical scavengers and chelators of heavy metal ions. Flavonoid aglycones are commonly regarded as more efficient radical scavengers than their glycosides. Possibly, the chelation of Fe2+ ions strongly influenced the observed antioxidant activity in our experiments, and compounds 1-7 were generally more efficient Fe2+ chelators than their aglycones. Similarly, rutin (quercetin 3-O-rutinoside) was found to be a stronger inhibitor of Fe2+-induced linoleate peroxidation than quercetin, which was attributed to the ability of rutin to form inert complexes with iron [28]. There are no available data concerning the biological activity of lentil leaf and stem flavonoids. Comparable lipid peroxidation experiments using blood plasma and H2O2/Fe-induced oxidative stress are also not common.
However, our earlier work on sea buckthorn fruit flavonoids also demonstrated that two isorhamnetin glycosides exerted a similar inhibitory effect on plasma lipid peroxidation to each other, stronger than that of their aglycone [14].
As regards other antioxidant experiments, the acylated flavonoids and kaempferol provided the highest protection of thiol groups at all concentrations among all tested compounds and preparations; the phenolic fraction was not active, while the protective effect of the crude extract and quercetin was observed only at the highest dose. The antioxidant activity of kaempferol and its derivatives results from decreasing the production of reactive oxygen species through inhibition of pro-oxidant enzymes and activation of antioxidant enzymes. Kaempferol and its derivatives are also potent scavengers of superoxide anion and hydroxyl radical [29]. Except for the crude extract and compound 4, the investigated substances also significantly reduced the carbonylation of plasma proteins, at least at the highest dose; quercetin and the phenolic fraction had the highest inhibitory activity. The antioxidant mechanism of quercetin is based on direct scavenging of reactive oxygen species, chelation of metals involved in the generation of reactive oxygen species, and inhibition of enzymes generating reactive oxygen species [30]. Zhou et al. [31] demonstrated the antioxidant activity of kaempferol in in vivo studies on a rat model. The animals were divided into four groups: a control group, an ischemia-reperfusion injury group, a kaempferol group and a 4-benzyl-2-methyl-1,2,4-thiadiazolidine-3,5-dione group. Oxidative stress was analyzed by measuring the levels of superoxide dismutase (SOD) and malondialdehyde (MDA) and the glutathione disulfide ratio. The obtained results demonstrated the antioxidant activity of kaempferol, as shown by the increased level of SOD and the decreased level of MDA in the kaempferol group compared to the control group [31].
Oxidative stress is very often linked with modulation of hemostasis and cardiovascular disorders. The coagulation process (also known as clotting) is an important element of hemostasis, and it includes blood changes from a liquid to a gel, forming a blood clot. In our experiment, we measured different coagulation times (TT, PT, and APTT) using a coagulometer. One of the key findings of our experiments is a demonstration of anticoagulant properties of derivatives of quercetin and kaempferol isolated from lentil aerial parts. These compounds prolonged clotting time-the TT of human plasma. We suppose that the anticoagulant activity of tested flavonoids may be associated with a modulation of thrombin activity. Similar effects have been observed in other experiments. The results of Choi et al. [32] indicate that various flavonoids, including quercetin 3-O-β-dglucoside, may inhibit the enzymatic activity of thrombin [32]. Liu et al. [33] also proved that flavonoids could inhibit the enzymatic activity of thrombin. The effect of various natural flavonoids, including quercetin and kaempferol, on thrombotic time, was tested. Both kaempferol and quercetin extend the thrombotic time [33].
The wide range of applied concentrations of the crude extract, the phenolic fraction and the phenolic compounds isolated from lentil aerial parts (1-50 µg/mL) was in accordance with general practice in in vitro models [11,14,17]. Besides, the concentration range used was the same as in our earlier studies to maintain the continuity of the research [8]. Moreover, the lower concentrations (1 and 5 µg/mL) may be considered physiologically achievable after consumption of phenolic-rich plant materials [34]. For example, the maximal achievable concentration of plant phenolic compounds in plasma can reach up to 5 µg/mL [34,35]. However, foods with a high concentration of kaempferol and quercetin are not necessarily the most bioavailable source. After absorption, these flavonoids are metabolized in the liver and circulate as glucuronide, methyl and sulfate metabolites [18]. On the other hand, Stainer et al. [36] have recently observed that two quercetin metabolites, isorhamnetin and tamarixetin, possess antithrombotic properties. For example, these compounds inhibited blood platelet activation, including platelet aggregation, granule secretion, calcium mobilization and integrin α IIb β 3 function. In addition, isorhamnetin had antioxidant and anticoagulant activity [14].
In conclusion, the present paper is the first detailed study on the biological activity of the extract, the phenolic fraction and pure phenolic compounds from the aerial parts of lentils. We also observed that not only the tested extract and fraction but also the quercetin and kaempferol derivatives have antioxidant and, in some cases, anticoagulant potential. The results reveal that lentil aerial parts may be recommended as a material for use in functional food products. However, although the antioxidant and anticoagulant properties of the investigated preparations were demonstrated in vitro in human plasma, their real effect should be verified in in vivo models.
Successful endovascular coil embolisation of a ruptured V1-segment vertebral artery dissecting aneurysm making a fistula with the adjacent vein
Sudden supraclavicular pain is often associated with myocardial infarction but seldom due to rupture of a V1-segment vertebral artery aneurysm. A ruptured V1-segment vertebral artery dissecting aneurysm making a fistula with the adjacent vein has rarely been described in the literature. Here we present a case of a 29-year-old healthy woman with sudden supraclavicular pain and a palpable mass that developed after the pain. Initial ultrasound raised suspicion of a large haematoma. CT angiogram showed a left-sided ruptured dissecting V1-segment vertebral artery aneurysm. Angiography showed an additional fistula between the aneurysm and the adjacent vein. The patient was treated successfully with coil embolisation. The vertebral artery occlusion was well tolerated without any complications. Endovascular coiling is a fast and effective treatment modality. However, a parent vessel occlusion can sometimes be dangerous if the contralateral vertebral artery supply is not sufficient. Surgical possibilities to reconstruct the parent vessel should also be considered in complex cases.
Background
Sudden onset of severe supraclavicular pain is often associated with myocardial infarction. Rarely, it can be due to rupture of an extracranial V1-segment vertebral artery aneurysm. Extracranial vertebral artery aneurysms in the V1 segment are extremely rare 1 and fistula formation from a ruptured aneurysm is even rarer. 2 These aneurysms are difficult to manage because of the high risk of ischaemic complications in the posterior circulation. Both surgical and endovascular treatments carry potential risks and technical difficulties. The best treatment option for such aneurysms is still controversial. [3][4][5][6] Here we present a case of a successfully treated ruptured dissecting V1 vertebral artery aneurysm making a fistula with the adjacent vein, using endovascular coils and placement of a distal plug to close the fistula and proximally occlude the parent artery. This method was well tolerated by the patient.
Case presentation
A 29-year-old woman presented with sudden supraclavicular pain and a palpable mass above the supraclavicular region. She was otherwise healthy without any remarkable medical history. On examination the patient was alert, cranial nerves were intact, and no sensorimotor neurological deficits were present. She was admitted to our hospital for coil embolisation of the dissecting aneurysm as described above.
Treatment
After interdisciplinary discussion we decided that endovascular treatment was the treatment of choice. The contralateral vertebral artery was normal and the probability of tolerating an ipsilateral vertebral artery occlusion was high in this patient. An endovascular procedure was performed with deposition of coils and placement of a distal plug to close the arteriovenous fistula (figure 3). We used two microvascular plugs (Reverse Medical) and Target XL coils (Stryker), and finally the proximal occlusion of the vertebral artery was performed with the aim of interrupting the flow to the aneurysm and the fistula point.
Outcome and follow-up
The patient tolerated the proximal vertebral artery occlusion very well. Postintervention images showed complete occlusion of the fistula and the parent vessel (figure 2A-F). No postinterventional neurological deficit was noted. The patient was discharged from hospital after 3 days and was completely symptom-free at a follow-up of 4 weeks.
Discussion
Extracranial vertebral artery aneurysms are extremely rare and account for only 0.5% of all aneurysms. Most extracranial vertebral artery aneurysms are located in the V3 segment, followed by the V1 segment. 7 These aneurysms are diagnosed secondary to an embolic infarct or incidentally as a palpable mass. Patients with connective tissue disorders, including Ehlers-Danlos syndrome, Marfan syndrome and neurofibromatosis type I, are at higher risk of developing extracranial vertebral artery aneurysms. A ruptured vertebral artery aneurysm with local pain and haematoma is often found in this particular group of patients. 3 4 8 In contrast, our case report presents a young patient without any trauma who presented with sudden onset of severe supraclavicular pain. CT angiography and DSA are the standard tools to diagnose and reveal the anatomy of the vasculature and to plan treatment. Treatment options include ligation, isolation, balloon embolisation, Onyx embolisation and coil embolisation. 6 9-11 There is no single standardised treatment option for V1-segment vertebral artery aneurysms with a fistula. The anatomical location at the C7-Th1 level might make an end-to-side anastomosis with the carotid artery difficult. The contralateral vertebral artery was slightly dominant, which supported taking the risk of endovascular proximal occlusion if needed. Hence, we decided on coil embolisation of the fistula. We could not reconstruct the parent vessel and thus we performed a proximal occlusion of the ipsilateral vertebral artery. Although endovascular modalities carry a risk of embolic stroke, our patient tolerated the procedure well. The fistula (including the dissecting aneurysm) was completely occluded and the patient had no adverse events.
Acknowledgements The authors are thankful to the Ehrnrooth Foundation for the funding to the first author (SM) for a clinical vascular and skull base fellowship at the Department of Neurosurgery in Helsinki.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Obtained.
Learning points
► Endovascular coiling is a fast and effective treatment modality. However, a parent vessel occlusion can sometimes be dangerous if the contralateral vertebral artery supply is not sufficient.
► Surgical possibilities to reconstruct the parent vessel should also be considered in complex cases.
List distinguishing index of graphs
We say that an edge colouring breaks an automorphism if some edge is mapped to an edge of a different colour. We say that the colouring is distinguishing if it breaks every non-identity automorphism. We show that such a colouring can be chosen from any set of lists associated with the edges of a graph G, whenever the size of each list is at least $\Delta-1$, where $\Delta$ is the maximum degree of G, apart from a few exceptions. This holds both for finite and infinite graphs. The bound is optimal for every $\Delta\ge 3$, and it is the same as in the non-list version.
Introduction
In 1977, Babai [1] introduced a concept of distinguishing vertex colourings, which are those preserved only by the identity automorphism. The minimum number of colours in a distinguishing vertex colouring of a graph G is called the distinguishing number of G, and it is denoted by D(G). The analogous parameter for edge colourings, introduced in 2015 by Pilśniak and Kalinowski [12], is called the distinguishing index of G and denoted by D ′ (G). These concepts lie on the borderland between graph theory and abstract algebra, as they naturally generalize to an arbitrary group action [5]. Automorphism breaking also plays an important role in the quasipolynomial time algorithm of Babai [2] for the graph isomorphism problem.
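The definitions above lend themselves to a direct computational check for small graphs. The sketch below (plain Python, not from the paper; all helper names are illustrative) enumerates automorphisms by brute force and tests whether a given edge colouring breaks every non-identity automorphism; for instance, it confirms that the path P3 needs two edge colours.

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """All vertex permutations mapping the edge set onto itself."""
    eset = {frozenset(e) for e in edges}
    autos = []
    for perm in permutations(vertices):
        f = dict(zip(vertices, perm))
        if {frozenset((f[u], f[v])) for u, v in edges} == eset:
            autos.append(f)
    return autos

def is_distinguishing(vertices, edges, colouring):
    """True if every non-identity automorphism maps some edge
    to an edge of a different colour (i.e. is 'broken')."""
    for f in automorphisms(vertices, edges):
        if all(f[v] == v for v in vertices):
            continue  # the identity is never required to be broken
        preserved = all(
            colouring[frozenset((u, v))] == colouring[frozenset((f[u], f[v]))]
            for u, v in edges
        )
        if preserved:
            return False
    return True

# Path P3: 0 - 1 - 2; the reflection swapping 0 and 2 is non-trivial.
V = [0, 1, 2]
E = [(0, 1), (1, 2)]
same = {frozenset((0, 1)): "a", frozenset((1, 2)): "a"}
diff = {frozenset((0, 1)): "a", frozenset((1, 2)): "b"}
print(is_distinguishing(V, E, same))  # False: reflection is preserved
print(is_distinguishing(V, E, diff))  # True: hence D'(P3) = 2
```

This brute force is only feasible for very small graphs, but it makes the breaking condition concrete.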
In this paper, we study the list version of distinguishing edge colourings. For each edge e ∈ E(G), let L(e) be a set of colours available for that edge. We ask for the minimum cardinal number k such that for any assignment of lists of cardinality k we can choose a distinguishing edge colouring with colours taken from the respective lists; this parameter is called the list distinguishing index of G and denoted by D ′ l (G).

Conjecture 1. Let G be a connected, infinite or finite graph. Then D ′ l (G) = D ′ (G).

In the paper, we aim to provide a general upper bound for connected graphs, both finite and infinite. These types of bounds are known for the distinguishing index. For finite graphs, Pilśniak [13] in 2017 proved the following.
Theorem 2 ( [13]). Let G be a connected, finite graph that is neither a symmetric nor a bisymmetric tree. If the maximum degree of G is at least 3, then D ′ (G) ≤ ∆(G) − 1 unless G is K 4 or K 3,3 .
Later, Pilśniak and Stawiski [14] proved the same claim for infinite graphs.
We show that these two bounds also hold for the list version of the problem. Since the above two results are optimal, so is ours. In particular, it follows that D ′ l (G) = D ′ (G) for every subcubic connected graph.
The proof is divided into two parts. The first, major part contains the proof for graphs with cycles, and then we separately check trees. In formulating the theorems, we exclude the same exceptional graphs as Pilśniak [13], so we describe them briefly in the last section.
Graphs with a cycle
From now on, we only consider edge colourings. In the proofs below, we skip the case where all the lists are identical, as this case follows from Theorems 2 and 3. However, we note that our approach would allow this case to be included, at the expense of complexity of the proofs.
Theorem 4. Let G be a connected graph with maximum degree ∆ ≥ 3 which is not a tree and not isomorphic to K 3,3 , nor K 4 . Then D ′ l (G) ≤ ∆ − 1.
Proof. Let G = (V, E) be a connected graph and ∆ = ∆(G) be its maximum degree. Assume that G is not a tree and G ∉ {K 3,3 , K 4 }. Let L = {L(e)} e∈E be a set of lists, each of size ∆ − 1. Denote L(u) = ⋃ uv∈E L(uv) for any u ∈ V . First, consider the case when ∆ is infinite. Since G is connected, G must have exactly ∆ edges. Hence, we can pick a different colour for each edge to obtain a distinguishing colouring with ∆ − 1 = ∆ colours. For the rest of the proof, we shall assume that ∆ is finite.
For each colour i ∈ ⋃ e∈E L(e), we consider the subgraph H i induced by all the edges e such that i ∈ L(e). If H i = G, then we call such a subgraph trivial (we shall also sometimes say that the colour i is trivial). If every H i is trivial, then we have a standard non-list colouring, which exists by Theorems 2 and 3 (we use the assumptions that G is not a tree, so it is not a symmetric nor a bisymmetric tree, and that G ∉ {K 3,3 , K 4 }). Therefore, we can assume that not every H i is trivial.
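As an illustration of this case analysis, the subgraphs H_i can be computed and tested mechanically. The sketch below (plain Python, illustrative names, not part of the proof) builds each H_i from the lists, marks the trivial colours, and uses a union-find cycle test to decide whether a non-trivial H_i contains a cycle (Case 1 of the proof) or is a forest (Case 2).

```python
def colour_subgraphs(edges, lists):
    """For each colour i, H_i is the set of edges e with i in L(e);
    colour i is trivial when H_i = G, i.e. i appears on every list."""
    colours = set().union(*lists.values())
    H = {i: [e for e in edges if i in lists[e]] for i in colours}
    trivial = {i for i in colours if len(H[i]) == len(edges)}
    return H, trivial

def has_cycle(edge_list):
    """Union-find test: does this undirected edge list contain a cycle?"""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edge_list:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True  # u, v already connected: edge uv closes a cycle
        parent[ru] = rv
    return False

# A triangle with a pendant edge; colour "p" lies only on the triangle,
# colour "q" on every list, so H_p is non-trivial and contains a cycle
# (Case 1), while q is a trivial colour.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
lists = {(0, 1): {"p", "q"}, (1, 2): {"p", "q"},
         (0, 2): {"p", "q"}, (2, 3): {"q", "r"}}
H, trivial = colour_subgraphs(edges, lists)
print(trivial)            # {'q'}
print(has_cycle(H["p"]))  # True
```

The dichotomy "some non-trivial H_i has a cycle / all non-trivial H_i are forests" is exactly the split into Case 1 and Case 2 below.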
We shall describe a greedy algorithm which iteratively chooses the colours of the edges of G from the respective lists. The algorithm starts by colouring some starting subgraph G 0 . All the edges of G 0 are coloured at this step, and this colouring is distinguishing for G 0 . We shall guarantee in the further course that G 0 is coloured uniquely, which will cause G 0 to be fixed. Then, the algorithm processes the remaining vertices, one by one, and fixes each new vertex it has reached, i.e. any vertex that is incident to a coloured edge.
I. The starting subgraph
We consider the following cases to select a suitable starting subgraph. This choice also affects the later colouring strategy, when we must avoid the colour pattern used on the starting subgraph.
Case 1. There exists a colour p such that H p is non-trivial and it contains a cycle. We shall call this colour pink.
Let C be an induced cycle in H p . Since H p is non-trivial, it must contain a vertex v, in the same connected component of H p as C, which has an incident edge vw outside H p (note that w may be in H p ). By the choice of v, there exists a shortest path R from v to C ending in a vertex u of C (and u must be the only common vertex of R and C). In particular, it may be the case that v lies on C, then R is trivial and u = v. Denote by u + a neighbour of u on C. We define our starting subgraph G 0 as the subgraph induced by all the edges incident to the vertices of C and R.
We now specify a distinguishing colouring of the starting subgraph. We colour all the edges of C and R except uu + pink (this is possible since C and R are contained in H p , so these edges have the colour pink on their lists) and assign uu + a colour other than pink; we shall call this colour blue. Let us consider all possible extensions of the current colouring to G and all possible automorphisms of these coloured graphs that stabilise C ∪ R. If none of these automorphisms acts non-trivially on it, then we only need to choose the colours for the edges not in C nor R. For each vertex in C ∪ R, we assign different colours other than pink to these edges. This can be done since each such vertex except v has at most ∆ − 2 neighbours outside C ∪ R, and the lists have size ∆ − 1. The vertex v may have one more neighbour outside C ∪ R but it has also one incident edge with ∆ − 1 colours different from pink.
If, on the other hand, there exists such an automorphism, it interchanges v and u + and we must break it at this moment. Since uu + is an edge, then either u = v or v has a neighbour on C different from its successor on R. This means that v must have two neighbours in G 0 . In this case, we would like to choose the colours on the edges incident to v and u + such that these two vertices receive different palettes. But in this case, v has at most ∆ − 2 neighbours outside C ∪ R and L(vw) does not contain the colour pink, so we have two possibilities for the last edge vw we colour, which result in two different palettes of v. For the other vertices on C ∪ R, including u + , we do not have such freedom, but we can just succeed. Therefore, we first choose the colours for the edges incident to vertices other than v (following the rule that for each vertex we choose different colours other than pink on the incident edges), and then to v such that the palettes of u + and v are different. This way, we break all the automorphisms of G 0 .
Case 2. For every colour p, the graph H p is either trivial or a forest. Consider any induced cycle C in G. If any edge of C contained only trivial colours on its list, then all the lists in G would be identical, and we have already assumed that this is not the case. Therefore, each edge of C has a colour p on its list such that H p is a forest. For any non-trivial colour p on the lists of C, we can consider the longest path P contained both in C and in H p . Each such path P is contained in a maximal path, a maximal ray, or a double ray in H p , which we denote by R. If it is possible that R is not entirely contained in C, then we choose p, C and P accordingly (in other words, first we consider only the colours p that have the longest P 's, and among them we choose, if there is one, a colour with R ≠ P ). We define our starting subgraph G 0 as the subgraph induced by all the edges incident to the vertices of R and C.
Denote by u and v the end-vertices of P . Let R ′ be a maximal subpath or a subray of R ending with u or v (without loss of generality, let it be u). If R ′ ≠ P , then we call the edge of R ′ − P incident with the cycle C the gadget of P .
We start with colouring all the edges of R ′ pink. The colouring of the rest of the edges of G 0 depends on the number of edges in C − P .
If C − P contains at least two edges, then we choose different colours for the edges uu − and vv + , where u − and v + are the neighbours of u and v, respectively, in C − P . These colours are different from pink by the maximality of P . We shall refer to these colours as blue and green, respectively. Next, for each vertex of R ′ , we choose different colours other than pink for the edges outside R ′ (this is possible for the same reason as in Case 1). Then, we perform the following scheme, which we write down separately as it will be used again later.
Cycle colouring scheme. We take two passes on the cycle, each time considering the vertices u 1 , . . . , u k consecutively. First, we choose the colours for the edges of the cycle. If the current edge has the colour pink on its list, then we choose pink, unless this is the last edge we are colouring and this would result in exactly two pink paths of length |P | and only two non-pink edges on the cycle. In this case, we choose a colour other than pink. If the current edge does not have the colour pink, then we choose any colour, unless the previous |P | edges are pink, in which case we disallow blue or green, whichever would create the pink path of length |P | surrounded by blue and green. Subsequently, we do a second pass and colour all the other edges adjacent to the vertices of the cycle. Take a vertex u i . We consider a few cases: • If u i u i+1 is pink, then we choose different colours other than pink for all the uncoloured edges incident to u i . It is possible, since there are at most ∆ − 2 such edges, and each of them has ∆ − 2 colours other than pink on its list.
• If u i u i+1 has a colour other than pink, and the colour pink does not appear on all the lists of the uncoloured incident edges, then we forbid both pink and the colour of u i u i+1 on the incident edges and again choose different colours. Moreover, if u i is an end-vertex of a pink path of length |P | on the cycle, then we forbid also blue or green (whichever does not appear on the other side of this path) on all the currently coloured edges. To argue that we can succeed, we observe that if blue or green is present on the list of u i u i+1 , then this colour cannot appear on any of the lists of the incident edges outside C. This is because we would choose this colour to be called pink at the beginning, as it would yield a gadget. Furthermore, either there is no pink at all on the lists of the edges incident to u i (so we have only two forbidden colours) or there is one list with pink and one without it (which gives us an additional colour to choose from).
• If u i u i+1 has a colour other than pink, and all the incident edges have pink on their lists, then again we choose different colours other than pink and the colour of u i u i+1 on the incident edges. Moreover, if u i is an end-vertex of a pink path of length |P | on the cycle, and we are forced to use blue or green (whichever does not appear on the other side of this path) somewhere on the edge incident to u i , then we put this colour on u i u i+1 . This may create a copy of a pink path of length |P | surrounded by blue and green, but it will cause no problem due to an absence of a gadget (and P must have a gadget, since the path just created could have one). Note that we have used the rule, that if an edge on the cycle has a colour other than pink, then there is no pink on its list. There might be one exception to this rule, but it does not concern us because this exception does not occur at the end of the pink path of length |P |, but rather of |P | − 1.
If C − P contains only one edge uv (it must contain at least one, since H p is a forest) and u and v have degree at least three in G, then we choose different colours for the edges incident to u and v so that the palettes of u and v are different. We shall refer to the colour of uv as blue. If R ′ ≠ P , then u has two adjacent pink edges and v only one, so they are already distinguished. Otherwise, by the maximality of R, none of the edges incident to u and v outside C has pink on its list, so each of these vertices has at least one and at most ∆ − 2 neighbours outside C, and we can choose two different palettes. Then we choose the colours of the remaining edges, again as in the second pass of the Cycle colouring scheme.
If C − P contains only one edge uv and d(u) = d(v) = 2, then we recolour the edge uu + , where u + is a neighbour of u on C other than v, with a new colour different from pink. We shall refer to this colour as blue. We choose a colour other than pink and blue for the edge uv and call it green. Then we choose the colours of the remaining edges, like in the second pass of the Cycle colouring scheme.
Depending on what the starting subgraph G 0 looks like and on the chosen colouring, we shall avoid the specific patterns during the remaining part of the algorithm. This will guarantee that G 0 is stabilised and, given the colouring of G 0 , also fixed.
Note that, in fact, there are only two types of starting subgraph: either an induced cycle with all incident edges, or an induced cycle with an attached path or ray, with all incident edges. In both cases, all the edges in G 0 not contained in the cycle, path nor ray are assigned a colour other than pink. Let k be the length of the cycle. We shall use the name gadget not only for the edge defined in Case 2, but also for the analogous edge in Case 1 (i.e. the one on the non-trivial path R, incident to a vertex of C). Moreover, we shall refer to the pink path on the cycle in G 0 as P , regardless of whether it was formed in Case 1 or Case 2.
We will also reuse the Cycle colouring scheme during the next part. In Case 2, the scheme started from some specific pre-coloured cycle, but we have never used the fact, what this initial colouring looked like. The main property of this scheme is that it will never produce another pink path of length |P | surrounded by green and blue, with or without a gadget (depending on the existence of a gadget in G 0 ). Therefore, we shall use it, starting with some other initial colourings.
II. The iterative procedure
We shall now iteratively extend the set of reached vertices, i.e. the ones with a coloured incident edge, starting from G 0 . We shall execute the procedure until there are no uncoloured edges left. Let A be the set of the automorphisms which stabilise G 0 and preserve the partial colouring we defined so-far. After each execution of the procedure, we shall guarantee that the following conditions are satisfied: (A1) Each reached vertex is fixed pointwise with respect to A.
(A2) If a vertex v / ∈ V (G 0 ) has a pink incident edge, and it is the only coloured edge incident to v, then this edge is not contained in any cycle of length k.
Note that these conditions are satisfied for the initial colouring of G 0 . The procedure starts by taking a reached vertex v with the smallest distance from G 0 which has an uncoloured edge. We shall call the already coloured edges of v back edges, the uncoloured edges to reached vertices horizontal edges, and the remaining ones forward edges. If none of the forward edges of v appears in any induced cycle of length k, then we simply colour each forward edge of v with a different colour, avoiding pink if possible, and then each horizontal edge with an arbitrary colour other than pink. This is possible since there are at most ∆ − 1 such edges, and it fixes pointwise each newly reached vertex, so the conditions (A1) and (A2) are fulfilled.
If there is an induced cycle of length k containing a forward edge of v, then we first check the following conditions: (C1) Each forward edge of v appears on a cycle of length k.
(C2) All the lists of the forward edges of v are the same, and each of them contains pink.
If any of these conditions is not satisfied, then we can colour the forward edges with different colours either without using pink (C2 or C3) or we can use pink on the edge which does not appear in such a cycle (C1). If, however, all these conditions hold, our further actions shall depend on the structure of G 0 . Let C ′ be a cycle of length k that contains a forward edge of v. If C ′ also contains the unique back edge of v, then this edge is not pink by (A2) and C ′ has two fixed vertices by (A1). Therefore, we just need to carry out the Cycle colouring scheme from Case 2, and then C ′ will be fixed pointwise as long as G 0 is stabilised. There is only one exception: if G 0 is a cycle with all edges except one coloured pink, and the Cycle colouring scheme produced an identical copy of G 0 , then we change the colour of the blue edge to any other (including pink).
Assume now that C ′ is a cycle of length k that contains two forward edges of v. We must ensure that the colouring of C ′ will be different from the one in G 0 , otherwise G 0 will not be stabilized. Therefore, we will again colour the whole C ′ with all incident edges at once, along with the edges incident to v. We colour the forward edges of v which are not in C ′ with different colours other than pink and blue. Then we colour one of the edges on C ′ incident to v pink, and the other one with any colour other than pink and blue.
This last choice may be impossible if ∆ = 3 and the lists of both forward edges consist of exactly pink and blue. In this case, we colour both edges pink and continue to choose pink in both directions on C ′ , as long as possible. Then on each side, we colour one next edge (it may be the same edge) so that at least one of them is not blue. Afterwards, we continue as for the other values of ∆, depending on the structure of G 0 .
• If G 0 has no gadget, or G 0 has a gadget but the back edge of v is not pink, then, we just execute the cycle colouring scheme on C ′ . Note, that the cycle colouring scheme does not produce gadgets, so the back edge of v would be the only candidate for one.
• If G 0 has a gadget and the back edge e of v is pink, then by the assumption of the procedure, the edge e is not contained in any cycle of length k. Hence, if we follow the cycle colouring scheme, then the only gadget created in this step can be e. But the gadget in G 0 was always incident to a blue edge, and there is no blue edge incident to v, therefore we are safe to execute the cycle colouring scheme on C ′ .
By the colouring of the two edges on C ′ incident to v, we broke all the automorphisms of C ′ , given that v was fixed. This and the cycle colouring scheme guarantee that all the reached vertices are fixed pointwise, so (A1) is satisfied. Moreover, we used the colour pink only on the cycle C ′ or on some forward edge of v which does not belong to any cycle of length k. This gives us (A2).
We are left to show that we did not create a second copy of G 0 throughout the iterative procedure. Assume otherwise, and denote by C ′′ the cycle isomorphic to the cycle in G 0 . There must be a pink edge xy contained in a pink path P ′′ of length |P | on C ′′ , surrounded by blue and green edges or one blue edge, and the edge xy does not belong to G 0 . Let us assume that xy is the edge incident to a blue edge on C ′′ . Consider the step of the procedure when this edge was coloured. In the procedure, we used the colour pink for an edge in a cycle of length k only when we coloured a cycle C ′ . We used the cycle colouring scheme, where the only possibility to create a pink path of length |P | surrounded by blue and green edges was if P had a gadget. But we ensured that the only pink edges incident to C ′ lie on C ′ itself, except for the currently processed vertex v which has no incident blue edges, and therefore we could not have created a gadget of P ′′ . We could not have created a cycle of length k with all pink edges except one blue, either, as any pink path of length k − 1 would be contained in C ′ , and this cycle is induced. This contradiction allows us to conclude that G 0 is fixed after the procedure, hence also the whole graph G.
Trees
Theorem 5. Let G be a tree with maximum degree ∆ ≥ 3. Then either G is a symmetric tree, G is a bisymmetric tree, or D ′ l (G) ≤ ∆ − 1.
Proof. As in the proof of Theorem 4, we can assume that ∆ is finite. We shall choose one vertex r and refer to it as the root. We shall use the standard notation: for any vertex u, we call the incident edge on the unique path from u to r the back edge, and all other edges incident to u forward edges.
We call a colouring of (G, r) a standard colouring if every vertex except r has all its forward edges coloured with distinct colours. We claim that any standard colouring which fixes N [r] (the closed neighbourhood of r) is a distinguishing colouring of G. To see this, consider any vertex u outside N [r] (as the elements of N [r] are already fixed). Then there is a unique path from r to u through a neighbour v of r. Consider the last vertex w on that path, starting from r, which is fixed. If w ≠ u, then some automorphism maps one forward edge of w to another. But this is impossible, since these two edges, by the assumption, have different colours. This means that w = u, and u must be fixed.
The remainder of the proof consists of a few cases where we shall find a suitable root vertex r and a standard colouring of (G, r) with the above property. Note that once the edges incident to r are coloured, it is straightforward to find a standard colouring of the graph, e.g. by considering the vertices of G one by one, starting from those closest to r. We shall usually carry out a variation of this procedure, as we shall need some additional properties.
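The root-outward construction of a standard colouring can be illustrated in code. This is only a hypothetical sketch (the function name and data representation are ours, not the paper's), and it assumes every edge list offers enough colours for a fresh choice to exist at each step:

```python
from collections import deque

def standard_colouring(adj, lists, root):
    """Greedily build a standard colouring of a tree: every vertex's
    forward edges (edges leading away from the root) get pairwise-
    distinct colours chosen from their lists.

    adj   : dict mapping each vertex to an iterable of its neighbours
    lists : dict mapping each edge frozenset({u, v}) to its colour list
    Assumes every list is long enough for a fresh colour to exist.
    """
    colour = {}
    seen = {root}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        used = set()  # colours already placed on forward edges of u
        for v in adj[u]:
            if v in seen:
                continue  # this is the back edge of u, skip it
            edge = frozenset((u, v))
            # first colour of the edge's list not yet used at u
            colour[edge] = next(c for c in lists[edge] if c not in used)
            used.add(colour[edge])
            seen.add(v)
            queue.append(v)
    return colour
```

Here the root's forward edges are also made pairwise distinct, which is stricter than the definition requires; the proofs instead colour the root's edges separately and then run a variant of this procedure.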
Case 1. There is no vertex of degree at least two, with all incident edges sharing the same list.
We choose an arbitrary vertex r and colour all its incident edges with different colours. Let pink be one of these colours. We then colour the edges incident to the second end-vertex v of this pink edge so that r and v have distinct palettes. This is possible, since the edges incident to v, by our assumption, have different lists. Finally, we colour the remaining edges of G with a standard colouring without using pink. This is again possible by our assumption.
In the following cases, we shall assume that there is a vertex which is not a leaf, such that all its incident edges share the same list.
Case 2. G contains a vertex v such that 1 < d(v) < ∆.
We take such a vertex as the root r and colour all its incident edges with different colours. Then, we colour the remaining edges to get a standard colouring, with the condition that each vertex of degree d(r), apart from r, has a palette distinct from that of r.
Case 3. G is the regular tree of degree ∆. Let r be an arbitrary vertex of degree at least two, with all incident edges sharing the same list. We start by colouring all the edges incident to r with the same colour, say pink. We shall ensure that r is the only vertex with all incident pink edges. Then, we iteratively fix the remaining vertices. During each iteration, we fix possibly only one vertex, but we choose a colour for multiple edges.
Let v be a vertex which is not yet fixed, and is the closest to r among all such vertices. Let i be the smallest natural number such that all the currently coloured edges are contained in B(r, i) (i.e. the ball of radius i centred at r). We choose a vertex w which is a descendant of v in B(r, i) \ B(r, i − 1). If there is such a vertex w that has a forward edge with pink on its list, then we pick this vertex and colour that edge pink. Otherwise, we choose any such w and pick an arbitrary colour (say red) for any of its forward edges. Then, we colour all the remaining uncoloured edges in B(r, i) with arbitrary colours, such that:
• if w has a red forward edge, then the colour red is not used on the forward edges of the vertices in B(r, i) \ B(r, i − 1), and
• if w has a pink forward edge, then we do not use the colour pink, and
• if w has a red forward edge, then each vertex in B(r, i) except r has at most one pink forward edge.
After these steps, w is the only vertex in a distance d(r, w) from r with a pink (or red) forward edge. Therefore, w is fixed, and so are all the vertices between r and w (including v). Since ∆ ≥ 3, we did not create vertices with all incident pink edges, apart from r. Repeating these steps, we fix all the vertices of G.
Case 4. G is not regular, and the degree of every vertex of G is in {1, ∆}. We consider three subcases:
Case 4a. G is finite. Then G contains either a central vertex or a central edge. If G has a central vertex r, then, as G is not a symmetric tree, G − r must contain two rooted subtrees which are not isomorphic. We colour the edges incident to r with distinct colours where possible; if a colour must be repeated, we repeat it on two edges leading to two non-isomorphic subtrees. For the remaining edges, we use a standard colouring.
If G has a central edge xy, we choose an arbitrary colour for that edge. Since G is not a bisymmetric tree, among all the rooted subtrees of G − xy, there must be two which are non-isomorphic. The roots of these subtrees are either neighbours of the same end-vertex of the central edge, say x, or of two different end-vertices of the central edge.
In both cases, we can colour the remaining edges incident to y with different colours, and the same with x (possibly using the same colour on the edges to the non-isomorphic subtrees), so that the palettes of x and y are different. Then we can continue with a standard colouring.
Case 4b. G contains a ray but not a double ray. Let r be any non-leaf vertex on the unique ray of G. All but one subtree of r must be finite, since otherwise G would have a double ray. We colour all the edges from r to its finite subtrees with different colours, and we choose any colour, say pink, for the last edge incident to r. Then, we complete this colouring to a standard colouring, with the additional condition that any forward edge to an infinite subtree has a colour other than pink. For any considered vertex, there will be at most one such forward edge, so this is possible, and it guarantees that r is fixed.
Case 4c. G contains a double ray. Since G has a leaf, there exists a vertex r that lies on a double ray and has a finite subtree (and also two infinite ones). We try to choose different colours on the edges incident to r, and if it is impossible, then we repeat the colour on the edges to two non-isomorphic subtrees. Note that there is still an edge from r to an infinite subtree with a different colour than the one to the finite subtree.
Then, we continue with a standard colouring, with the additional condition that if for some vertex r ′ we are forced to use the same palette as r, and there is an automorphism mapping r ′ to r, then on the forward edge of r ′ leading to its finite subtree we use a colour different from the one used by r. Note that the back edge of r ′ leads to the subtree containing r, hence, to an infinite one. Therefore, the finite subtree, the existence of which is guaranteed by the automorphism, must be attached to one of the forward edges.
Exceptional graphs
For completeness, we append this short section about the locally finite graphs not covered by Theorems 4 and 5. We state the following theorems without proofs, as they are straightforward analogues of the proofs for the non-list distinguishing index, see [12].
Theorem 7. Let G be the double ray, a symmetric tree, a bisymmetric tree, K 4 , or K 3,3 . Then D ′ l (G) = D ′ (G) = ∆(G). Moreover, the only lists of length ∆ − 1 which do not yield a distinguishing colouring are the identical ones, except for bisymmetric trees, where the central edge may have an arbitrary list (and the remaining ones must be identical).
The Relationship Between Depressive Symptoms and Oral Health Among Elderly People
Background: Depression and oral health problems are common in the elders. They are often not well-diagnosed and treated properly. Objectives: This study was conducted to determine the prediction role of depressive symptoms on the oral health of elderly people. Methods: In this descriptive-correlation study, 206 elderly people were selected using a stratified random sampling method from Health Centers of Qaemshahr, Iran. Data were collected by questionnaires including the General Oral Health Assessment Index (GOHAI), the Geriatric Depression Scale (GDS), the Cognitive State Test (COST), and a socio-demographic questionnaire. Dental history and cardiovascular risk factors were also documented. Multiple linear regression was used for data analysis. Results: Two hundred six elders were evaluated. Forty-three percent were aged between 65-74 (67.71 ± 7.28), and 53% were female. The results of the study showed that over 76% of participants had mild depression. Depression (Beta = 0.17, P = 0.01) and cognitive status (Beta = 0.29, P < 0.001) were predictors of oral health. The predictive power of this model was 24%. Conclusions: Depression and cognitive status were those factors that could predict elders' oral health condition. Any oral health care program for elders would be better to be provided as a package that evaluates elders' cognitive status, depression, and oral health condition.
Background
Untreated depression significantly reduces the quality of life of elders and their families (1). The prevalence of significant symptoms of depression among the elderly in the Iranian community is between 8% and 15%, and among elderly people living in nursing homes about 30% (2). Depressed elders often present with manifestations resembling dementia and perform poorly in mental tests (3). As age increases, the growing burden of progressive disease, especially cardiovascular disease, also increases the risk of depression.
According to recent studies, the salivary reduction, due to depression and lack of motivation for self-care, may provide the basis for the growth of pathogenic bacteria and tooth decay (4,5). Antidepressants can also cause hyposalivation. Depression may also contribute to impaired immune function and is associated with an increased risk of infection (6). The results of one recent systematic review and meta-analysis showed a positive association between depression and oral diseases, specifically dental caries, tooth loss, and edentulism in adults and elders. The authors believed that more longitudinal studies are required to test the causal and temporal relationship between depression and oral health status (7). However, the findings of some studies did not show a relationship between the symptoms of depression and periodontal diseases (8,9).
Polypharmacy is common in Iranian elders because of their chronic illnesses (10). The side effects of these medications, such as dry mouth, can compromise the care of oral health. Besides, movement disorders, stroke, and impaired vision in elders prevent efficient and adequate brushing or flossing (11). Moreover, in Iran, statistics are not satisfactory, as 60.6% of elders have oral and dental problems (12,13). The Global Burden of Disease study, in 2010, indicated that the most significant burdens due to dental caries and periodontal disease were found in Iranians aged 15-49 and 50-69 years, respectively. In addition, mouth cancer led to the highest burden in Iranians older than 70 years of age (14).
Failure to observe oral hygiene not only causes disease of the oral cavity but can also threaten the vital organs of the body and even cause the death of the patient (15).
Objectives
Regarding the lack of conducting such a study on the elders in our country, this study was designed to determine the predicting role of depressive symptoms on the oral health of elderly people.
Methods
In this descriptive correlation cross-sectional study, 206 elders were selected among the older people of the health centers of Qaemshahr. This study was approved by the Ethics Committee of Mazandaran University of Medical Sciences, Sari, Iran (IR.Mazums.REC.95-2236). Written informed consent was obtained from the elderly people.
Participants
The population of this study included all the elders with health records in the Health Centers of Qaemshahr, Iran, in 2016. The sampling method was stratified random. Five out of 12 centers in the North, South, East, West, and Center of the city were randomly selected, and the number of elders of each center was calculated relative to the total amount of elderly people. From the above centers, a contact list of the elders was made. According to Hair et al., the sample size can be estimated by considering the number of variables: for each variable, five to 20 participants should be included (16). The present study included 16 variables; therefore, 192 subjects were calculated. A total of 206 individuals gave consent to be part of this study. The inclusion criteria were age at or above 60 years and no history of hypothyroidism, stroke, or dementia (17). The exclusion criterion was unwillingness of the patient to cooperate; all subjects signed the consent form.
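The sample-size arithmetic above can be checked in a few lines. This is a hedged sketch: `hair_sample_range` is a name we invented for the five-to-20-participants-per-variable rule of thumb cited from Hair et al.:

```python
def hair_sample_range(n_variables, per_var_min=5, per_var_max=20):
    """Sample-size band implied by the rule of thumb attributed to
    Hair et al.: 5 to 20 participants per study variable."""
    return n_variables * per_var_min, n_variables * per_var_max

low, high = hair_sample_range(16)  # the study's 16 variables
# The calculated target of 192 subjects (12 per variable) falls inside
# this band, and the 206 recruited elders exceed that minimum.
```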
Procedures
This study was approved by the Ethics in Research Committee of Mazandaran University (reference number: IR.Mazums.REC.96-2236). The research places were health centers and family physicians' centers. The researcher interviewed each participant using a questionnaire, which included demographic data, cardiovascular risk factors (high blood pressure, diabetes, high blood fat, smoking, overweight, regular physical activity), number of medications taken, artificial teeth, most recent dental check, the Geriatric Oral Health Index (GOHAI), the Geriatric Depression Scale (GDS), and the Cognition Status Test (COST) validated for Persian culture (18)(19)(20)(21).
The GOHAI, designed by Atchison (1990) to assess oral health, has been validated and shown to be reliable for Persian culture; Cronbach's alpha was 0.74 (20). The questionnaire has 12 questions covering physical, psychological, and social domains. It uses a 5-point Likert scale for each item: always (1), most often (2), sometimes (3), rarely (4), and never (5). Questions 3, 5, and 7 are scored reversely. The range of scores is from 12 to 60, and a higher score indicates better oral health condition (20).
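The GOHAI scoring rule just described can be sketched as a small function (the function name and input format are our own assumptions; only the scoring rule follows the text):

```python
def gohai_score(responses):
    """Total GOHAI score from 12 Likert responses (1 = always ... 5 = never).
    Questions 3, 5 and 7 are reverse-scored; totals range from 12 to 60,
    with higher scores indicating better oral health."""
    if len(responses) != 12 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 12 responses on a 1-5 scale")
    reverse = {3, 5, 7}  # 1-based question numbers scored reversely
    return sum(6 - r if q in reverse else r
               for q, r in enumerate(responses, start=1))
```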
The GDS evaluates depression symptoms in elders with 15 questions. The scoring system is dichotomous (yes/no). Scores 0 -4 in this scale indicate no depression, 5 -8 mild depression, 9 -11 moderate, and 12 -15 severe depression. Validity and reliability of this questionnaire were assessed in Iran, with a cutoff point of eight, sensitivity of 0.9, and specificity of 0.84 (21).
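The GDS severity bands can likewise be expressed as a small lookup (a sketch; the function name and category labels are ours):

```python
def gds_category(score):
    """Map a 15-item GDS total to the severity bands given in the text:
    0-4 no depression, 5-8 mild, 9-11 moderate, 12-15 severe."""
    if not 0 <= score <= 15:
        raise ValueError("GDS scores range from 0 to 15")
    if score <= 4:
        return "no depression"
    if score <= 8:
        return "mild"
    if score <= 11:
        return "moderate"
    return "severe"
```

Note that the Iranian validation study used a single screening cutoff of eight, which is a separate dichotomisation from these four bands.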
COST contains 19 items such as orientation (4 points). Data were analyzed by SPSS software version 21.0, using descriptive statistics (absolute and relative frequency distribution, mean, and standard deviation) and inferential statistical tests (Spearman correlation coefficient, multiple linear regression).
Depressive Symptoms
The results of the study showed that over 67% of participants had mild depression (using GDS) (Table 2).
Regression Model
All of the variables that had a significant relationship, along with the variables which had P < 0.3 (gender, marital status) were included in the regression model (Table 4). In multiple regression analysis, only the depressive symptoms and the cognitive status were those factors that could predict oral health. The predictive power of this model was 24% (df = 11, P < 0.001, F = 4.8).
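The multiple linear regression used here can be sketched with synthetic data. Everything below — the coefficients, score ranges, and noise level — is invented for illustration; only the modelling procedure (ordinary least squares with depression and cognition as predictors, reporting R² as "predictive power") mirrors the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 206  # same sample size as the study; the data below are synthetic

# Hypothetical predictor scores (ranges are assumptions for illustration)
gds = rng.integers(0, 16, size=n).astype(float)    # depression, 0-15
cost = rng.integers(0, 31, size=n).astype(float)   # cognition (assumed range)
noise = rng.normal(0.0, 6.0, size=n)

# Synthetic oral-health outcome: worse with depression, better with cognition
gohai = 40.0 - 0.6 * gds + 0.4 * cost + noise

# Multiple linear regression via ordinary least squares
X = np.column_stack([np.ones(n), gds, cost])       # intercept + predictors
beta, *_ = np.linalg.lstsq(X, gohai, rcond=None)

# "Predictive power" = coefficient of determination R^2
pred = X @ beta
ss_res = float(np.sum((gohai - pred) ** 2))
ss_tot = float(np.sum((gohai - gohai.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot
```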
Discussion
The present study confirmed that elders who had depression did not look after their oral health properly. This finding was in line with previous studies, which showed the highest frequency (76.7%) related to mild depression, and with reports that economic and cultural factors, such as previous use of medication, ease of access to non-prescribed medication, and other people's recommendations, result in widespread antidepressant self-medication (22). The regression analysis also provided evidence that the oral health condition might be compromised because of elders' cognition. This finding was in concert with the results of a systematic review and meta-analysis showing that psychiatric conditions are among the most common self-medicated diseases in Iran (23).
The results of multiple regression analysis indicated that depression symptoms predict oral health in the elderly. This finding corresponds with the study of Hybels et al., on 944 elderly people in the United States (24). However, it does not match with the study of Kim and Won, in Korea (9) and Solis et al. (8). In the study of Hybels et al., after controlling demographic variables, health status and dentition, moderate depression, and low oral health status were significantly correlated. The studies of Kim et al., and Solis et al., did not show a meaningful relationship between depression and periodontal disease (8,9). The two mentioned studies were not performed on the elderly age group. In some of these studies there was a significant relationship between the symptoms of depression and low oral health (25) or the quality of life associated with oral health (26). In the study of Okoro et al., people with recent depression also had a greater risk of tooth loss (27). Symptoms of depression may affect oral health through some biological pathways. Depression may play a role in stimulating the production of inflammatory cytokines and impairment in immune function in the development of periodontal disease and oral infections (6). In addition to the lack of motivation for self-care, it may provide a potential for pathogenic bacteria and tooth decay by reducing oral salivation. Antidepressants can also reduce salivation.
The findings of bivariate analysis showed there was no significant relationship between gender and oral health in the elders. This finding did not correspond with the study of Hernandez-Palacios et al., in Mexico (28). On the other side, some studies have shown that caries rates are higher in women than in men (29,30). A greater caries risk among women may originate from a different salivary composition and flow rate, hormonal fluctuations, genetic variations, and dietary habits (31). People with a vegetarian diet were found to have the highest numbers of caries. A lack of putrefaction, due to protein consumption, contributes to the formation of a less acidotic oral environment (32).
There was a significant relationship between oral health in elders and their socioeconomic status (SES). The SES usually includes annual income and education. This finding supported previous studies suggesting that oral health may be a low priority among older adults with low income and low educational level, because they first have to meet their primary needs (28). Another study in London showed a significant relationship between education and oral-health-related quality of life in elders, which was not explained by differences in income (33). Therefore, cultural factors related to oral health need to be assessed.
Research Limitations
The present study did not include nutritional status or the self-care ability of elders as risk factors. Future studies that include them may find different direct or indirect effects on the results reported here.
Conclusions
Depression and cognitive status were those factors that could predict elders' oral health condition. Any oral health care program for elders would be better to be provided as a package that evaluates elders' cognitive status, depression, and oral health condition.
Termination of Protease-activated Receptor-1 Signaling by β-Arrestins Is Independent of Receptor Phosphorylation*
Protease-activated receptor 1 (PAR1), a G protein-coupled receptor (GPCR) for thrombin, is the prototypic member of a family of protease-activated receptors. PAR1 is irreversibly proteolytically activated; thus, the magnitude and duration of thrombin cellular responses are determined primarily by mechanisms responsible for termination of receptor signaling. Both phosphorylation and β-arrestins contribute to rapid desensitization of PAR1 signaling. However, the relative contribution of each of these pathways to the termination of PAR1 signaling is not known. Co-expression of PAR1 with β-arrestin 1 (βarr1) in COS-7 cells resulted in a marked inhibition of PAR1 signaling, whereas β-arrestin 2 (βarr2) was essentially inactive. Strikingly, signaling by a PAR1 cytoplasmic tail mutant defective in agonist-induced phosphorylation was also attenuated more effectively by βarr1 compared with βarr2. In contrast, both β-arrestin isoforms were equally effective at desensitizing the substance P receptor, a classic reversibly activated GPCR. PAR1 coimmunoprecipitated βarr1 in an agonist-dependent manner, whereas βarr2 association was virtually undetectable. Remarkably, βarr1 also interacted with the phosphorylation-defective PAR1 mutant, whereas βarr2 did not. Moreover, constitutively active β-arrestin mutants, βarr1 R169E and βarr2 R170E, that bind to activated receptor independent of phosphorylation failed to enhance either wild type or mutant PAR1 desensitization compared with normal versions of these proteins. In contrast, β-arrestin mutants displayed enhanced activity at desensitizing the serotonin 5-hydroxytryptamine 2A receptor. Taken together, these results suggest that, in addition to PAR1 cytoplasmic tail phosphorylation itself, β-arrestin binding independent of phosphorylation promotes desensitization of PAR1 signaling. These findings reveal a new level of complexity in the regulation of protease-activated GPCR signaling.
Thrombin, a coagulant protease, is generated at sites of vascular injury and produces a variety of cellular effects critical for hemostasis, thrombosis, and inflammatory and proliferative responses triggered by vascular damage (1,2). Thrombin activates cells through at least three proteolytically activated G protein-coupled receptors: PAR1, PAR3, and PAR4 (3). The prototype of this family, PAR1, is activated by an unusual irreversible proteolytic mechanism in which thrombin binds to and cleaves the amino-terminal exodomain of the receptor. This cleavage generates a new amino terminus that functions as a tethered ligand by binding intramolecularly to the body of the receptor to cause transmembrane signaling (4-6). The synthetic peptide SFLLRN, which represents the newly formed amino terminus of the receptor, can activate PAR1 independent of thrombin and receptor cleavage. PAR1 is irreversibly activated; thus, the mechanisms that contribute to the termination of signaling are critical determinants of the magnitude and kinetics of the thrombin response in cells. Given the irreversible nature of PAR1 activation, we hypothesize that signal termination events are probably unique, since all other GPCRs are reversibly activated.
The molecular events responsible for GPCR desensitization and resensitization have been extensively studied using the β2-adrenergic receptor (7,8). In the classic paradigm, GPCRs are initially desensitized by rapid phosphorylation of activated receptors by G protein-coupled receptor kinases (GRKs) and other kinases. Receptor phosphorylation enhances the affinity of interaction with arrestins, and arrestin binding prevents receptor-G protein interaction, thereby uncoupling the receptor from signaling. Arrestins also interact with components of the endocytic machinery to facilitate recruitment of GPCRs to clathrin-coated pits and internalization from the plasma membrane (9,10). Once internalized into endosomes, GPCRs dissociate from their ligands, become dephosphorylated, and then return to the cell surface in a state capable of responding to ligand. Thus, for most classic, reversibly activated GPCRs, signaling is terminated at the plasma membrane, and receptor trafficking is linked to resensitization of signaling.
Phosphorylation of activated PAR1 also appears to be important for rapid uncoupling from G protein signaling. Overexpression of either GRK3 or GRK5 enhances PAR1 phosphorylation and markedly inhibits inositol phosphate (IP) accumulation (11,12). A PAR1 mutant in which all of the serines and threonines in the cytoplasmic tail (C-tail) are converted to alanines (S/T→A) is neither extensively phosphorylated nor inhibited by GRK3 overexpression in multiple cell types (11,13,14). In addition, we recently found that arrestins are also critical for the termination of PAR1 signaling. Desensitization of PAR1-promoted phosphoinositide (PI) hydrolysis is significantly impaired in mouse embryonic fibroblasts lacking both arrestin isoforms, arrestin 2 and arrestin 3 (also termed β-arrestin 1 and β-arrestin 2), whereas PAR1 internalization remained intact (15). However, in both wild-type and β-arrestin-deficient cells, phosphorylation of activated PAR1 is still necessary for internalization through clathrin-coated pits. Moreover, unlike classic GPCRs, proteolytically activated PAR1 is internalized and sorted rapidly to lysosomes, an event critical for termination of receptor signaling (16,17). Thus, PAR1 defines a new class of GPCRs that utilize a phosphorylation-, clathrin-, and dynamin-dependent pathway for endocytosis that operates independent of β-arrestins, and receptor trafficking is linked to termination of signaling.
The precise function of arrestins in signal regulation of a GPCR such as PAR1 that does not use these molecules for internalization through clathrin-coated pits has not been examined. Moreover, the relative contribution of phosphorylation versus β-arrestins to the termination of PAR1 signaling remains to be determined. In the present study, we used COS-7 cells to investigate the roles of phosphorylation and β-arrestins in uncoupling PAR1 from G protein signaling. Our findings strongly suggest that β-arrestins are able to bind and desensitize activated PAR1 independent of phosphorylation. Thus, these studies reveal a complex regulation of PAR1 signaling that involves both PAR1 C-tail phosphorylation and phosphorylation-independent binding of β-arrestins.
EXPERIMENTAL PROCEDURES
Reagents and Antibodies-Human α-thrombin was purchased from Enzyme Research Laboratories. Agonist peptide SFLLRN was synthesized as the carboxyl amide and purified by reverse phase high pressure liquid chromatography (UNC Peptide Facility, Chapel Hill, NC). Substance P peptide was purchased from Phoenix Pharmaceuticals. 2,5-Dimethoxy-4-iodophenylisopropylamine was from Sigma.
Monoclonal M1 and M2 anti-FLAG antibodies were from Sigma. Rabbit polyclonal anti--arrestin antibody A1CT was previously described (18) and generously provided by Robert J. Lefkowitz (Duke University). Anti-PAR1 rabbit polyclonal antibody was generated as previously described (19). Horseradish peroxidase-conjugated goat antimouse and anti-rabbit secondary antibodies were from Bio-Rad.
cDNAs and Cell Lines-The cDNAs encoding FLAG-tagged PAR1 wild-type and C-tail phosphorylation site mutant (S/T→A) were previously described (11). The PAR1 third intracellular loop (IC3) mutants in which serine residues Ser297, Ser298, and Ser299 were converted to alanine (IC3 S297SS299 mutant) were generated using the QuikChange™ site-directed mutagenesis kit (Stratagene); specific mutations were confirmed by dideoxy sequencing. A plasmid encoding wild type substance P receptor containing an amino-terminal FLAG epitope was generated as described (17). cDNAs encoding untagged and FLAG-tagged β-arrestins were gifts from Robert J. Lefkowitz (Duke University). Green fluorescent protein (GFP)-tagged β-arrestins were obtained from Marc Caron (Duke University). Mutant βarr1 R169E and βarr2 R170E were kindly provided by Vsevolod V. Gurevich (Vanderbilt University) and have been previously described (20). The FLAG-tagged human 5-hydroxytryptamine 2A (5-HT2A) serotonin receptor was generously provided by Bryan L. Roth (Case Western Reserve University). The plasmids encoding Gαq wild type and the GTPase-deficient, constitutively active Q205L mutant were generously provided by T. Kendall Harden (University of North Carolina, Chapel Hill, NC). COS-7 cells were obtained from the American Type Culture Collection (Manassas, VA) and grown in DMEM supplemented with 10% fetal bovine serum, 4.5 mg/ml glucose, 100 units/ml penicillin, and 100 μg/ml streptomycin.
Phosphoinositide Hydrolysis-COS-7 cells plated at 4 × 10⁴ cells/well of 24-well dishes were grown overnight, transiently transfected, and then labeled with 2 μCi/ml myo-[3H]inositol (American Radiolabeled Chemicals, Inc.) in serum-free DMEM containing 1 mg/ml bovine serum albumin for 18-24 h. Cells were washed with DMEM containing 1 mg/ml bovine serum albumin, 10 mM HEPES buffer, and 20 mM lithium chloride. Cells were then incubated in the absence or presence of either 10 nM α-thrombin, 100 nM substance P, or 10 μM 2,5-dimethoxy-4-iodophenylisopropylamine diluted in DMEM containing lithium chloride for various times at 37°C. Cell incubation medium was removed, and [3H]inositol phosphates ([3H]IPs) were extracted with 50 mM formic acid. Cell extracts were neutralized with 150 mM NH4OH, and IPs were isolated by column chromatography as described (15). Scintillation counting was then used to quantitate IPs eluted in this assay.
Data Analysis-Data were analyzed using Prism 3.0 software, and statistical significance was determined using InStat 3.0 (GraphPAD, San Diego, CA). The initial rate of PAR1 desensitization was determined by quantifying the decrease in thrombin response over time. The data were normalized to the amount of [ 3 H]IPs formed in untreated control cells for each time point.
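The normalization step described — expressing each measurement as a fraction of the untreated control at the matched time point — amounts to a one-line calculation. A sketch (the function name and input format are ours):

```python
def normalise_to_control(counts, control_counts):
    """Normalize [3H]IP counts to the untreated control measured at the
    same time point, as used when quantifying desensitization rates."""
    if len(counts) != len(control_counts):
        raise ValueError("need one control value per time point")
    return [c / c0 for c, c0 in zip(counts, control_counts)]
```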
Cell Surface ELISA-Transiently transfected COS-7 cells plated at 4 × 10⁴ cells/well in 24-well dishes were either left untreated or treated with 50 μM SFLLRN or 100 nM substance P for 30 min at 37°C. Cells were fixed with 4% paraformaldehyde for 5 min at 4°C and then incubated with M1 anti-FLAG antibody for 1 h at 25°C in DMEM containing 1 mg/ml bovine serum albumin and 10 mM HEPES, pH 7.4. Cells were then washed and incubated with horseradish peroxidase-conjugated goat anti-mouse secondary antibody for 1 h at 25°C. Cells were washed again and incubated with one-step 2,2′-azino-bis-3-ethylbenz-thiazoline-6-sulfonic acid (Pierce) for 10-20 min at room temperature. An aliquot was removed, and the optical density was determined at 405 nm using a Molecular Devices SpectraMax Plus microplate reader.
Immunofluorescence Confocal Microscopy-Transiently transfected COS-7 cells were grown on fibronectin-coated glass coverslips (22 × 22 mm) and incubated with M1 anti-FLAG antibody for 1 h at 4°C, washed, and exposed to agonist at 37°C. Cells were fixed and then processed for immunofluorescence microscopy as described (15). Images were collected using a Fluoview 300 laser-scanning confocal imaging system (Olympus) configured with an IX70 fluorescent microscope fitted with a PlanApo ×60 oil objective (Olympus). The final composite image was created using Adobe Photoshop 6.0 (Adobe Systems).
β-Arrestin-mediated Desensitization of PAR1 Signaling Is Independent of Receptor Phosphorylation-PAR1 couples to Gαq and stimulates PI hydrolysis through the activation of phospholipase C-β (21). Thus, we sought to determine the roles of phosphorylation and β-arrestins in PAR1 desensitization by measuring Gαq activation of PI hydrolysis in COS-7 cells. COS-7 cells are known to express low levels of endogenous
β-arrestins (22). We initially compared the signaling properties of PAR1 wild type and a phosphorylation-defective mutant that lacks all potential C-tail phosphorylation sites (S/T→A) and is insensitive to GRK-mediated desensitization in multiple cell types including COS-7 (11,14). The concentration–effect curves for thrombin at wild type and mutant PAR1 were determined by incubating cells labeled with myo-[3H]inositol with varying concentrations of thrombin for 5 min at 37°C. The accumulation of [3H]IPs was then measured. The effective concentration of thrombin needed to stimulate a half-maximal response after 5 min was similar for PAR1 wild type and the S/T→A mutant in these studies (Fig. 1A). However, the activated PAR1 S/T→A mutant caused an enhanced maximal signaling response compared with wild type receptor (Fig. 1A). These findings suggest that each activated PAR1 S/T→A mutant remained coupled to PI hydrolysis longer before signaling was shut off.
Both phosphorylation and β-arrestins contribute to PAR1 desensitization (11,15). However, the relative contribution of each of these pathways to termination of PAR1 signaling remains to be determined. We initially compared the rates of agonist-induced PI hydrolysis in COS-7 cells transiently transfected with PAR1 and either βarr1 or βarr2 to establish that β-arrestins are capable of regulating PAR1 signaling in these cells. Cells were incubated in the absence or presence of a saturating concentration of thrombin for various times at 37°C, and [3H]IPs were then measured. The initial rate of thrombin-induced PI hydrolysis was similar in all transfection conditions (Fig. 1B). After 30 min of agonist exposure, a marked ~2.5-fold increase in PI hydrolysis was detected in cells expressing PAR1 only (Fig. 1B). Interestingly, agonist caused a similar ~2.5-fold increase in IP accumulation in cells expressing PAR1 and βarr2 (Fig. 1B), suggesting that βarr2 does not play a significant role in PAR1 uncoupling from G protein signaling. In contrast, agonist-stimulated signaling was markedly impaired in cells expressing PAR1 and βarr1; only an ~1.5-fold increase in PI hydrolysis was detected after 30 min of agonist treatment (Fig. 1B), indicating that βarr1 is more effective than βarr2 at terminating PAR1 signaling.
To examine the contribution of phosphorylation versus β-arrestin binding to PAR1 desensitization, we assessed signaling by the PAR1 S/T→A phosphorylation-defective mutant in cells co-expressing either βarr1 or βarr2. In COS-7 cells expressing the PAR1 S/T→A mutant, thrombin stimulated an ~5-fold increase in PI hydrolysis (Fig. 1C), a response substantially greater than that observed with comparable amounts of wild type receptor in these same cells (Fig. 1B). Expression of βarr2 failed to significantly decrease signaling by the PAR1 S/T→A mutant (Fig. 1C), similar to that observed with wild type receptor. In contrast, however, βarr1 caused a marked ~50% inhibition of PAR1 S/T→A signaling (Fig. 1C), suggesting that βarr1-mediated PAR1 uncoupling from G protein signaling is independent of phosphorylation.
We next examined whether the initial coupling of activated PAR1 to Gαq-promoted PI hydrolysis was affected by either βarr1 or βarr2. PAR1 wild type or the S/T→A mutant was transiently co-expressed with either βarr1 or βarr2, and the capacity of the receptor to promote IP accumulation was compared. The concentration–effect curves for thrombin at wild type and mutant PAR1 co-expressed with either βarr1, βarr2, or vector are shown in Fig. 2. The EC50 values for stimulation (5-min assay) of IP accumulation by thrombin were comparable in each transfection condition (Fig. 2, Table I). The maximal effect of 30 nM thrombin on stimulation of IP accumulation by PAR1 wild type and the S/T→A mutant co-expressed with either βarr1, βarr2, or vector was also similar (Fig. 2, Table I). Together, these findings imply that the initial coupling of activated PAR1 wild type and S/T→A mutant to the G protein-induced signaling response is not affected by β-arrestins.
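EC50 values such as those in Table I come from fitting a sigmoidal concentration–effect model to the data. The sketch below uses synthetic responses generated from an assumed EC50 of 3 nM and a crude log-spaced grid search in place of Prism's nonlinear regression; it is illustrative only.

```python
import math

# Minimal stand-in for sigmoidal concentration-effect fitting (the paper used
# Prism). Responses are synthetic, generated from an assumed EC50 of 3 nM.

def hill(conc, ec50, top, bottom=0.0):
    """Concentration-effect model with Hill slope 1."""
    return bottom + (top - bottom) * conc / (conc + ec50)

true_ec50 = 3e-9  # 3 nM thrombin (illustrative value only)
concs = [1e-10, 3e-10, 1e-9, 3e-9, 1e-8, 3e-8]              # M
responses = [hill(c, true_ec50, top=100.0) for c in concs]  # % of maximum

def sse(ec50):
    """Sum of squared errors for a candidate EC50."""
    return sum((r - hill(c, ec50, 100.0)) ** 2 for c, r in zip(concs, responses))

# log-spaced grid of candidate EC50 values from 1e-10 to ~1e-7 M
candidates = [10 ** (e / 50.0) for e in range(-500, -350)]
fit_ec50 = min(candidates, key=sse)
```

Comparing fitted EC50 values across transfection conditions is the quantitative basis for concluding that initial receptor–G protein coupling is unchanged.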
β-Arrestin Regulation of PAR1 Signaling
To assess desensitization rates, COS-7 cells transiently expressing PAR1 wild type or the S/T→A mutant together with either βarr1, βarr2, or vector were exposed to a saturating concentration of thrombin for 10 min at 37°C. The extent of PAR1 signaling activity remaining after various times of thrombin incubation was then determined by the addition of lithium chloride and quantification of the amounts of IPs formed. In the absence of lithium chloride, thrombin-induced IP formation was not detectable in these cells (data not shown). In cells expressing PAR1 wild type and either βarr1 or βarr2, the apparent rates of desensitization were not significantly different (Fig. 3A). These findings suggest that the major initiating event of PAR1 wild type desensitization is independent of β-arrestin binding. Interestingly, the PAR1 S/T→A phosphorylation-defective mutant also showed a similar rate of desensitization in cells co-transfected with either βarr2 or vector only (Fig. 3B). In contrast, in cells co-expressing βarr1, PAR1 S/T→A mutant desensitization appeared to occur more rapidly (Fig. 3B). At face value, these findings suggest that βarr1 enhances the rate of PAR1 desensitization independent of receptor phosphorylation.
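Desensitization rates of this kind are often summarized as a first-order decay constant. The toy calculation below shows the arithmetic for extracting such a rate constant from the percent of signaling activity remaining at two times; the activity values are hypothetical, not data from Fig. 3.

```python
import math

# Toy first-order model of desensitization: activity(t) = A0 * exp(-k * t).
# Values are hypothetical; this only illustrates how a rate constant is
# obtained from the percent of signaling activity remaining.

def decay_rate(t1, a1, t2, a2):
    """Rate constant (per min) from activity (% of initial) at two times."""
    return (math.log(a1) - math.log(a2)) / (t2 - t1)

# activity falling from 100% at t = 0 to ~36.8% at t = 5 min gives k = 0.2/min
k = decay_rate(0.0, 100.0, 5.0, 100.0 * math.exp(-1.0))
```

Comparing such rate constants between transfection conditions is one way to decide whether co-expressed β-arrestins accelerate desensitization.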
To determine whether other Gαq-linked GPCRs are similarly regulated by β-arrestins in COS-7 cells, we examined the effects of β-arrestins on signaling by the substance P receptor (SPR), also known as the neurokinin-1 receptor. In COS-7 cells expressing SPR only, an ~3.3-fold increase in IP formation was measured after 30 min of agonist exposure (Fig. 4A). In contrast to the responses observed with PAR1, agonist-stimulated SPR signaling was substantially diminished in cells expressing either βarr1 or βarr2 (Fig. 4A), suggesting that both βarr1 and βarr2 are equally effective at uncoupling activated SPR from G protein signaling in these cells. These findings are consistent with previous studies demonstrating that activated SPR is rapidly desensitized via a GRK-mediated redistribution, and presumably binding, of β-arrestins to the receptor in other cell types (23,24). In addition, these results establish that heterologous expression of βarr2 is able to desensitize GPCR signaling in COS-7 cells.
We also examined the ability of β-arrestins to directly modulate signaling by Gαq to ensure that ectopic expression of β-arrestins does not globally disrupt signaling by this G protein in COS-7 cells. In cells overexpressing wild type Gαq, the basal IP accumulation measured after 30 min of incubation in medium containing lithium chloride was comparable with that measured in vector control cells (Fig. 4B). Compared with Gαq wild type or vector control cells, the GTPase-deficient, constitutively active Gαq Q205L mutant caused an ~5.5-fold increase in PI hydrolysis (Fig. 4B). Interestingly, however, expression of either βarr1 or βarr2 failed to diminish the Gαq Q205L signaling response (Fig. 4B), suggesting that neither βarr1 nor βarr2 globally disrupts signaling by Gαq in COS-7 cells.
We next assessed thrombin-stimulated PI hydrolysis in cells expressing wild type and mutant PAR1 and varying amounts of the individual β-arrestin isoforms to exclude the possibility that the differential effects of β-arrestins on PAR1 signaling are due to differences in the levels of β-arrestin expression. COS-7 cells transiently transfected with either PAR1 wild type or the S/T→A mutant and varying amounts of FLAG-tagged βarr1 or FLAG-tagged βarr2 were incubated in the absence or presence of agonist for 30 min at 37°C. The generation of IPs was then measured, or cell lysates were prepared and β-arrestin expression was detected by immunoblotting. In the absence of β-arrestin expression, ~2-fold and ~4-fold increases in IP accumulation were detected in PAR1 wild type- and S/T→A mutant-expressing cells, respectively, following 30 min of agonist exposure (Fig. 5, A and B, lane 1). In cells expressing wild type PAR1 and maximal amounts of βarr2, activated PAR1 signaling was modestly diminished by ~20% (Fig. 5A), whereas βarr1 caused a significantly greater 50% inhibition of agonist-stimulated signaling (Fig. 5A). In PAR1 S/T→A mutant-expressing cells, agonist-stimulated PI hydrolysis was decreased more effectively by βarr1 than by βarr2 (Fig. 5B), similar to the results observed with wild type receptor. However, both β-arrestin isoforms were quite efficacious at attenuating thrombin-induced PI hydrolysis, suggesting that the PAR1 S/T→A mutant is more sensitive than wild type receptor to β-arrestins. Regardless, in cells expressing comparable amounts of βarr1 and βarr2, the βarr1 isoform appears more effective than βarr2 at terminating activated PAR1 signaling, even in the absence of receptor phosphorylation.
β-Arrestins Fail to Enhance PAR1 Internalization in COS-7 Cells-To determine whether the differential effects of β-arrestins on PAR1 signaling result from differences in receptor trafficking, we examined agonist-induced receptor internalization. COS-7 cells transiently expressing FLAG-tagged PAR1 and either βarr1 or βarr2 were incubated in the absence or presence of a saturating concentration of SFLLRN for 30 min at 37°C. Because thrombin removes the amino terminus of PAR1 containing the FLAG epitope, the peptide agonist SFLLRN was used instead. After agonist treatment, the amount of PAR1 remaining on the cell surface was measured by cell surface ELISA. In PAR1-expressing cells, agonist induced an ~30% loss of receptor from the cell surface (Fig. 6A), consistent with PAR1 internalization observed in other cell types (15,25). A similar extent of PAR1 internalization was induced by agonist in cells expressing either βarr1 or βarr2 (Fig. 6A). The failure of β-arrestins to enhance PAR1 internalization suggests that receptor trafficking occurs independent of β-arrestins in COS-7 cells. These data are consistent with the β-arrestin-independent internalization of activated PAR1 observed in mouse embryonic fibroblasts deficient in β-arrestin expression (15).
We next examined the effects of β-arrestins on agonist-induced internalization of the PAR1 S/T→A phosphorylation-defective mutant. Consistent with the phosphorylation-dependent internalization of activated PAR1 reported previously (15,25), agonist failed to promote PAR1 S/T→A internalization (Fig. 6B), whereas wild type PAR1 was robustly internalized (Fig. 6A). Moreover, neither βarr1 nor βarr2 significantly enhanced agonist-induced PAR1 S/T→A mutant internalization (Fig. 6B), suggesting that the differential regulation of PAR1 S/T→A signaling by the individual β-arrestin isoforms is not due to effects on receptor trafficking. We also determined whether SPR internalization is similarly regulated by β-arrestins in COS-7 cells. In contrast to wild type and mutant PAR1, both βarr1 and βarr2 significantly enhanced agonist-induced internalization of SPR (Fig. 6C), consistent with the β-arrestin-dependent internalization of SPR reported previously (26). Together, these results further suggest that the differential regulation of PAR1 signaling by the individual β-arrestin isoforms is not due to differences in their ability to affect receptor trafficking.
Immunofluorescence confocal microscopy studies are consistent with a failure of β-arrestins to enhance internalization of PAR1. COS-7 cells were transiently co-transfected with PAR1 wild type or the S/T→A mutant together with either GFP-tagged βarr1 or GFP-βarr2, and internalization of PAR1 was assessed by confocal microscopy. In the absence of agonist, both wild type and mutant PAR1 were localized predominantly to the cell surface (Fig. 7, A and B, top panels). However, a small fraction of unactivated receptor was found in an intracellular pool in both wild type- and mutant PAR1-expressing cells, consistent with the tonic cycling of these receptors reported previously (25). In cells expressing wild type PAR1, exposure to SFLLRN for 10 min at 37°C caused substantial internalization of receptor into endocytic vesicles (Fig. 7A). A similar extent of agonist-induced PAR1 internalization was observed in both βarr1- and βarr2-expressing cells (Fig. 7A). In contrast, agonist failed to promote PAR1 S/T→A mutant internalization, even in cells overexpressing βarr1 and βarr2 (Fig. 7B). These findings provide further support for a β-arrestin-independent internalization of PAR1 in COS-7 cells.
β-Arrestins Interact with Activated PAR1 Independent of Receptor Phosphorylation-We next determined whether activated PAR1 and β-arrestins directly associate by coimmunoprecipitation. COS-7 cells transiently co-transfected with FLAG-PAR1 and either βarr1 or βarr2 were incubated with or without SFLLRN for 2.5 min at 37°C. Cells were lysed, PAR1 was immunoprecipitated with M2 anti-FLAG antibody, and the presence of β-arrestins was detected by immunoblotting. In untreated control cells expressing βarr1, PAR1 was immunoprecipitated and a small amount of βarr1 coimmunoprecipitated with the receptor, suggesting that unactivated receptor weakly associates with βarr1 (Fig. 8A). In contrast, immunoprecipitates from agonist-treated cells revealed a significant, more than ~2-fold, increase in βarr1 associated with activated PAR1, whereas βarr2 was at most weakly associated with PAR1 (Fig. 8A). Strikingly, however, a substantial amount of βarr1 associated with the PAR1 S/T→A phosphorylation-defective mutant in both agonist-treated and untreated control cells (Fig. 8B); this may result from the partial constitutive activity observed with this mutant (Fig. 9C). Consistent with the lack of a robust interaction between wild type PAR1 and βarr2, only a weak association between βarr2 and the PAR1 S/T→A mutant was observed, even in cells where a substantial amount of receptor was immunoprecipitated (Fig. 8B, middle panel). The apparent difference in the amount of βarr1 versus βarr2 expression detected in COS-7 cell lysates is due to the greater affinity of the A1CT anti-arrestin antibody for the βarr1 protein (Fig. 8, bottom panels) (18). This differential affinity is not responsible for the lack of association observed between PAR1 and βarr2, since similar results were found in cells expressing PAR1 and FLAG-β-arrestins, where the presence of β-arrestins in immunoprecipitates was detected using anti-FLAG antibody (data not shown).
Together, these findings suggest that agonist enhances the binding of βarr1 to wild type PAR1, and that the phosphorylation-defective PAR1 S/T→A mutant binds βarr1 even in the absence of receptor phosphorylation.
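The coimmunoprecipitation result rests on quantifying band intensities as a fold increase over untreated control and testing the difference with an unpaired t test, as noted in the Fig. 8 legend. The sketch below shows that comparison on invented densitometry values (n = 3 per group); it is not the authors' data or code.

```python
import math

# Hypothetical densitometry values: fold beta-arrestin1 coimmunoprecipitated
# with PAR1, relative to untreated control (n = 3 per group; invented data).

agonist = [2.4, 2.6, 2.5]   # SFLLRN-treated
control = [1.0, 1.1, 0.9]   # untreated

def mean(xs):
    return sum(xs) / len(xs)

def unpaired_t(a, b):
    """Unpaired Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    va = sum((x - mean(a)) ** 2 for x in a) / (na - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = unpaired_t(agonist, control)
# t far exceeds the p < 0.01 critical value (~4.60 for 4 degrees of freedom)
```

With these example numbers the agonist group shows a 2.5-fold mean increase and a t statistic well past the two-tailed p < 0.01 threshold, mirroring the significance call made in the figure legend.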
Constitutively Active β-Arrestin Mutants Fail to Enhance PAR1 Desensitization-To further investigate whether β-arrestins are capable of binding to activated PAR1 independent of phosphorylation, we utilized the "constitutively active" β-arrestin mutants, βarr1 R169E and βarr2 R170E, which bind with high affinity to agonist-activated receptors independent of phosphorylation (20,27). We first evaluated the ability of wild type and mutant β-arrestins to regulate signaling by wild type PAR1 in transiently transfected COS-7 cells. Compared with control cells lacking β-arrestins, agonist-stimulated PI hydrolysis was decreased by ~35 and ~40% in cells expressing βarr1 wild type or the βarr1 R169E mutant, respectively (Fig. 9A). Thus, wild type and R169E mutant βarr1 are equally effective at decreasing signaling by wild type PAR1. Consistent with the lack of βarr2 effectiveness at desensitizing PAR1, neither βarr2 wild type nor the βarr2 R170E mutant significantly decreased signaling by activated PAR1 (Fig. 9B). Together, these results indicate that desensitization of activated PAR1 is equally sensitive to wild type and mutant β-arrestins. Since the mutant β-arrestins are capable of binding to activated receptors independent of phosphorylation, these findings suggest that phosphorylation of activated PAR1 is not essential for β-arrestin binding.
Next, we examined the ability of wild type and mutant β-arrestins to desensitize PAR1 S/T→A mutant signaling. In cells expressing the PAR1 S/T→A mutant alone, a significant increase in basal signaling was consistently observed compared with cells expressing comparable amounts of wild type receptor (Fig. 9). These findings suggest that the PAR1 S/T→A phosphorylation-defective mutant is at least partially constitutively active. Interestingly, expression of either βarr1 or βarr1 R169E caused a significant ~50% decrease in both basal and agonist-induced signaling by the PAR1 S/T→A mutant (Fig. 9C). These findings suggest that βarr1 is able to uncouple activated PAR1 from signaling independent of phosphorylation. βarr2 and the βarr2 R170E mutant also modestly decreased both basal and agonist-induced signaling by the PAR1 S/T→A mutant but were clearly less effective than βarr1 (Fig. 9, C and D). Notably, however, βarr1 R169E and βarr2 R170E did not attenuate signaling by either PAR1 wild type or the S/T→A phosphorylation-defective mutant significantly more effectively than the corresponding wild type β-arrestins.
A cluster of three serine residues residing in the third intracellular loop (IC3) of PAR1 could potentially contribute to β-arrestin binding and desensitization of PAR1 signaling. To assess whether these residues are important for termination of PAR1 signaling, the IC3 serine residues (S297SS299) of both PAR1 wild type and the S/T→A mutant were mutated to alanines. COS-7 cells expressing PAR1 wild type or the IC3 S297SS299 mutant and either βarr1 or the βarr1 R169E mutant were exposed to agonist for 30 min, and IP accumulation was assessed. Mutation of the IC3 serine cluster failed to affect the ability of either βarr1 or the βarr1 R169E mutant to terminate PAR1 signaling (Fig. 10A). Interestingly, mutation of the three serine residues in the IC3 loop of the PAR1 S/T→A mutant also failed to affect desensitization of signaling by either βarr1 or βarr1 R169E (Fig. 10B). Both βarr2 wild type and the βarr2 R170E mutant also failed to alter signaling by PAR1 wild type or the S/T→A mutant in which the IC3 serine cluster was mutated (data not shown). Together, these findings support the distinct possibility that phosphorylation-independent β-arrestin binding contributes to PAR1 desensitization.
To determine whether the βarr1 R169E and βarr2 R170E mutants display enhanced activity at desensitizing GPCRs in COS-7 cells, as reported in other cell types (27,28), we examined their effects on desensitization of the serotonin 5-HT2A receptor. In COS-7 cells expressing FLAG-tagged 5-HT2A receptor in the absence of β-arrestins, the addition of the selective agonist 2,5-dimethoxy-4-iodophenylisopropylamine stimulated a robust ~4-fold increase in IP accumulation measured after 30 min of agonist exposure (Fig. 11A). Agonist-stimulated PI hydrolysis was markedly inhibited in cells expressing the 5-HT2A receptor and either wild type βarr1 or βarr2 (Fig. 11A); an ~48% decrease was caused by both βarr1 and βarr2. Interestingly, however, both the βarr1 R169E and βarr2 R170E mutants were significantly more effective than wild type β-arrestins and caused virtually complete inhibition of activated 5-HT2A receptor signaling compared with control cells lacking β-arrestin expression (Fig. 11A). The differential ability of β-arrestins to desensitize 5-HT2A receptor signaling is not due to differences in expression of the 5-HT2A receptor at the cell surface (Fig. 11B). These findings are consistent with published studies demonstrating that the mutant βarr1 R169E and βarr2 R170E are able to bind to activated GPCRs with high affinity and decrease signaling responses more effectively than wild type β-arrestins.

FIG. 8. Agonist-induced association of β-arrestins with PAR1. A and B, COS-7 cells transiently expressing PAR1 wild type or the S/T→A mutant and either βarr1, βarr2, or pcDNA vector were incubated in the absence or presence of 50 μM SFLLRN for 2.5 min at 37°C. Cells were lysed, and PAR1 was immunoprecipitated with M2 anti-FLAG antibody. Immunoprecipitates (IP) were resolved by SDS-PAGE and then immunoblotted (IB) for either β-arrestins or PAR1 using rabbit polyclonal anti-arrestin A1CT antibody or anti-PAR1 antibody, respectively. The expression of β-arrestins in total cell lysates was detected with anti-arrestin A1CT antibody. Similar findings were observed in three separate experiments. Results in the bar graphs represent the mean ± S.E. from three independent experiments and are shown as the fold increase in βarr associated with PAR1 compared with untreated control. The extent of βarr1 associated with activated wild type PAR1 was significant (**, p < 0.01). Statistical analysis was performed using an unpaired t test.

DISCUSSION

PAR1 is irreversibly activated by proteolysis, and thus the mechanisms that control PAR1 signaling determine the magnitude and duration of thrombin cellular responses. In this study, we demonstrate that β-arrestins bind to activated PAR1 independent of phosphorylation and promote termination of receptor signaling. Moreover, βarr1 is more effective than βarr2 at uncoupling activated PAR1 from signaling, suggesting that β-arrestins can differentially regulate PAR1 signaling independent of receptor phosphorylation. Consistent with these results, activated PAR1 associated with βarr1, whereas PAR1 interaction with βarr2 was virtually undetectable. By contrast, both βarr1 and βarr2 were equally effective at desensitizing the classic reversibly activated SPR.
Together, these findings suggest that PAR1 signaling is regulated by multiple independent mechanisms, including receptor phosphorylation itself and the binding of β-arrestins independent of phosphorylation.
The two β-arrestin isoforms appear to have redundant functions in regulating desensitization of most classic GPCRs (18). However, their capacity to differentially regulate GPCR internalization suggests that these molecules are not entirely functionally redundant. Indeed, our finding that βarr1 is more effective than βarr2 at decreasing thrombin signaling responses (Figs. 1 and 5) implies that β-arrestins differentially regulate PAR1 signaling even in the absence of receptor phosphorylation. These results are consistent with our previous studies, in which desensitization of PAR1 signaling was markedly impaired in mouse embryonic fibroblasts that lack βarr1 but retain βarr2 expression (15). Moreover, we demonstrate that neither βarr1 nor βarr2 enhances PAR1 internalization in COS-7 cells (Figs. 6 and 7), suggesting that receptor trafficking is not responsible for the differential effects of β-arrestins on PAR1 signaling. The molecular basis for the differential ability of the individual β-arrestin isoforms to regulate GPCR signaling is not known. It is possible that the individual β-arrestin isoforms have distinct determinants for binding to PAR1. It is also possible that post-translational modifications of either βarr1 or βarr2 differentially regulate their ability to desensitize or internalize PAR1. Phosphorylation and ubiquitination regulate the endocytic functions of β-arrestins (29,30); however, whether these changes modulate the ability of β-arrestins to desensitize PAR1 signaling is not known.
Previous studies have shown that arrestins interact preferentially with the third cytoplasmic loop of certain GPCRs (31,32). More recent in vivo studies suggest that the C-tails of many classic GPCRs are also involved in determining β-arrestin interaction (33). In the latter case, β-arrestin binding promotes GPCR internalization. It is possible that the binding of β-arrestins to different domains of a GPCR could confer distinct functions (i.e., desensitization versus internalization). The C-tail of PAR1 is the major site of phosphorylation and is involved in desensitization (11,13). However, it is unlikely that PAR1 C-tail phosphorylation is solely responsible for β-arrestin interaction, since β-arrestins bind to the PAR1 S/T→A phosphorylation-defective mutant and promote desensitization (Figs. 1 and 8). Moreover, we also found that βarr1 binds to an activated PAR1 truncation mutant lacking the entire C-tail domain (data not shown), suggesting that the C-tail is not essential for β-arrestin binding. Although there is currently no evidence to suggest that residues other than those in the C-tail of PAR1 are major sites of phosphorylation, a cluster of three serine residues in the third cytoplasmic loop of PAR1 could potentially contribute to β-arrestin binding. However, for both PAR1 wild type and the S/T→A mutant in which these serines (S297SS299) were converted to alanines, we observed no difference in the ability of β-arrestins to regulate thrombin-induced signaling responses (Fig. 10). Together, these findings raise the distinct possibility that C-tail phosphorylation and phosphorylation-independent β-arrestin binding both contribute to PAR1 desensitization.
Most activated GPCRs require phosphorylation for β-arrestin binding and consequent receptor desensitization. In contrast, β-arrestins bind to activated PAR1 independent of phosphorylation to promote uncoupling from G protein signaling. The mutant β-arrestins, βarr1 R169E and βarr2 R170E, which bind with high affinity to activated GPCRs independent of phosphorylation (20,27), were equally effective at promoting desensitization of both PAR1 wild type and the S/T→A mutant. These findings suggest that PAR1 phosphorylation per se is not critical for β-arrestin binding. Moreover, the agonist-induced enhancement of β-arrestin association with activated PAR1 (Fig. 8A) supports the idea that β-arrestins recognize the active conformation of the receptor. Thus, activation of PAR1 may expose negatively charged residues or another critical domain on the cytoplasmic face of the receptor that mimics phosphorylation and thereby promotes binding of β-arrestins. Consistent with these findings, wild type and mutant β-arrestins are equally effective at desensitizing the luteinizing hormone/choriogonadotropin receptor (34). This receptor is desensitized in a phosphorylation-independent manner and requires a conserved, negatively charged residue, Asp-564, localized to the third intracellular loop for β-arrestin binding and desensitization.
In conclusion, we examined the contribution of phosphorylation versus β-arrestin binding to the termination of PAR1 signaling in COS-7 cells. In these studies, we demonstrate that β-arrestins can bind to activated PAR1 independent of phosphorylation and promote termination of receptor signaling. We also demonstrate that βarr1 is more effective than βarr2 at desensitizing both PAR1 wild type and the S/T→A phosphorylation-defective mutant. These findings suggest that the individual β-arrestin isoforms can differentially regulate GPCR desensitization independent of receptor phosphorylation. PAR1 couples to Gαq as well as Gαi and Gα12/13, and whether β-arrestins differentially regulate PAR1 coupling to distinct G protein subtypes is not known. Thus, desensitization of PAR1 signaling is regulated by multiple independent mechanisms, including C-tail phosphorylation itself and the binding of β-arrestins independent of phosphorylation. The precise mechanisms by which β-arrestins bind to and desensitize activated PAR1 remain to be determined. These findings bring new insight into how signaling by irreversibly, proteolytically activated GPCRs is regulated.
"year": 2004,
"sha1": "576167f1d23739a39be85cf387f647a3f51e1e15",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/279/11/10020.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "a25affd91850d0bad92eb4013b15dbf32782198f",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The power of deeper networks for expressing natural functions
It is well-known that neural networks are universal approximators, but that deeper networks tend to be much more efficient than shallow ones. We shed light on this by proving that the total number of neurons $m$ required to approximate natural classes of multivariate polynomials of $n$ variables grows only linearly with $n$ for deep neural networks, but grows exponentially when merely a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from $1$ to $k$, the neuron requirement grows exponentially not with $n$ but with $n^{1/k}$, suggesting that the minimum number of layers required for computational tractability grows only logarithmically with $n$.
I. INTRODUCTION
Deep learning has lately been shown to be a very powerful tool for a wide range of problems, from image segmentation to machine translation. Despite its success, many of the techniques developed by practitioners of artificial neural networks (ANNs) are heuristics without theoretical guarantees. Perhaps most notably, the power of feedforward networks with many layers (deep networks) has not been fully explained. The goal of this paper is to shed more light on this question and to suggest heuristics for how deep is deep enough.
It is well-known [1][2][3] that nonlinear neural networks with a single hidden layer can approximate any function under reasonable assumptions, but it is possible that the networks required will be extremely large. Recent authors have shown that some functions can be approximated by deeper networks much more efficiently (i.e. with fewer neurons) than by shallower ones. However, many of the functions in question are complicated or arise from "existence proofs" without explicit constructions, and the results often apply only to types of network rarely used in practice.
Deeper networks have been shown to have greater representational power with respect to various notions of complexity, including piecewise linear decision boundaries [4] and topological invariants [5]. Recently, Poole et al. [6] and Raghu et al. [7] showed that the trajectories of input variables attain exponentially greater length and curvature with greater network depth. Work including [8] and [9] shows that there exist functions that require exponential width to be approximated by a shallow network. Mhaskar, Liao, and Poggio [10], in considering compositional functions with this property, inquire whether explicit examples must be pathologically complicated.
Various authors have also considered the power of deeper networks of types other than the standard feedforward model. The problem has also been posed for sum-product networks [11] and restricted Boltzmann machines [12]. Cohen, Sharir, and Shashua [13] showed, using tools from tensor decomposition, that shallow arithmetic circuits can express only a measure-zero set of the functions expressible by deep circuits. A weak generalization of this result to convolutional neural networks was shown in [14].
In summary, recent years have seen a wide variety of theoretical demonstrations of the power of deep neural networks. It is important and timely to extend this work to make it more concrete and actionable, by deriving resource requirements for approximating natural classes of functions using today's most common neural network architectures. Lin, Tegmark, and Rolnick [15] recently proved that it is exponentially more efficient to use a deep network than a shallow network when approximating the product of input variables. In the present paper, we will greatly extend these results to include broad natural classes of multivariate polynomials, and to tackle the question of how resource use depends on the precise number of layers. Our results apply to standard feedforward ANNs with general nonlinearities and are borne out by empirical tests.
The rest of this paper is organized as follows. In §II B, we consider the complexity of approximating a multivariate polynomial p(x) using a feedforward neural network with input x = (x 1 , . . . , x n ). We show (Theorem II.4) that for general sparse p, exponentially many neurons are required if the network is shallow, but at most linearly many for a deep network. For monomials p, we calculate (Theorem II.1) exactly the minimum number of neurons required for a shallow network. These theorems apply for all nonlinear activation functions σ with nonzero Taylor coefficients; a slightly weaker result (Theorem II.2) holds for an even broader class of σ.
In §II C, we present similar results (Propositions II.5 and II.6) for approximating univariate polynomials. In this case, shallow networks require linearly many neurons while for deep networks it suffices to use only logarithmically many neurons. In §II D, we tie the difficulty of approximating polynomials by shallow networks to the complexity of tensor decomposition (Proposition II.7).
In §III A, we consider networks with a constant number k of hidden layers. For input of dimension n, we show (Theorem III.1) that products can be approximated with a number of neurons exponential in n 1/k , and justify our theoretical predictions with empirical results. While still exponential, this shows that problems unsolvable by shallow networks can be tractable even for k > 1 of modest size.
Finally, in §III B, we compare our results on feedforward neural networks to prior work on the complexity of Boolean circuits. We conclude that these problems are independent, and therefore that established hard problems in Boolean complexity do not provide any obstacle to analogous results for standard deep neural networks.
II. THE INEFFICIENCY OF SHALLOW NETWORKS
In this section, we compare the efficiency of shallow networks (those with a single hidden layer) and deep networks at approximating multivariate polynomials.
A. Definitions
Let σ(x) be a nonlinear function, k a positive integer, and p(x) a multivariate polynomial of degree d. We define m_k(p, σ) to be the minimum number of neurons (excluding input and output) required to approximate p with a neural net having k hidden layers and nonlinearity σ, where the error of approximation is of degree at least d + 1 in the input variables. Thus, in particular, m_1(p, σ) is the minimal integer m such that

p(x) ≈ Σ_{j=1}^m w_j σ(Σ_{i=1}^n a_{ij} x_i),

up to error terms of degree at least d + 1. Note that approximation up to degree d allows us to approximate any polynomial to high precision as long as the input variables are small enough. In particular, for homogeneous polynomials of degree d, we can adjust the weights so as to scale each variable by a constant λ ≪ 1 before input, and then scale the output by 1/λ^d, in order to achieve arbitrary precision.
We set m(p, σ) = min k≥0 m k (p, σ). We will show that there is an exponential gap between m 1 (p, σ) and m(p, σ) for various classes of polynomials p.
B. Multivariate polynomials
The following theorem generalizes a result of Lin, Tegmark, and Rolnick [15] to arbitrary monomials. By setting r_1 = r_2 = ... = r_n = 1 below, we recover their result that the product of n numbers requires 2^n neurons in a shallow network but can be done with linearly many neurons in a deep network.
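For intuition, the shallow construction behind this result writes a product as a signed combination of the neurons σ(±y_1 ± ··· ± y_N). Below is a minimal numerical sketch for N = 2, instantiated with σ = exp (our illustrative choice, for which the second Taylor coefficient is σ_2 = 1/2):

```python
import math

def approx_product(x, y):
    """Product of two inputs from a single hidden layer of the 4 neurons
    sigma(+-x +- y), with sigma = exp (so sigma_2 = 1/2); the
    normalization 1/(8*sigma_2) is dictated by the Taylor expansion."""
    s = (math.exp(x + y) + math.exp(-x - y)
         - math.exp(x - y) - math.exp(-x + y))
    return s / 4.0   # 1 / (8 * sigma_2) with sigma_2 = 1/2

x, y = 0.01, -0.02
print(abs(approx_product(x, y) - x * y))   # only higher-order error remains
```

The error consists purely of degree ≥ 4 terms, so it can be made arbitrarily small by scaling the inputs down and the output up, as described above.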
Theorem II.1. Let p(x) denote the monomial x_1^{r_1} x_2^{r_2} ··· x_n^{r_n}, with N = Σ_{i=1}^n r_i. Suppose that the nonlinearity σ(x) has nonzero Taylor coefficients up to x^N. Then:

(1) m_1(p, σ) = ∏_{i=1}^n (r_i + 1),
(2) m(p, σ) ≤ Σ_{i=1}^n (7⌈log_2(r_i)⌉ + 4),

where ⌈x⌉ denotes the smallest integer that is at least x.
Proof. Without loss of generality, suppose that r_i > 0 for i = 1, ..., n. Let X be the multiset (that is, a collection in which each element is allowed to occur multiple times) in which x_i occurs with multiplicity r_i.
We first show that ∏_{i=1}^n (r_i + 1) neurons are sufficient to approximate p(x). Appendix A in [15] demonstrates that for variables y_1, ..., y_N, the product y_1 ··· y_N can be approximated as a linear combination of the 2^N functions σ(±y_1 ± ··· ± y_N).
Consider setting y_1, ..., y_N equal to the elements of the multiset X. Then, we conclude that we can approximate p(x) as a linear combination of the functions σ(±y_1 ± ··· ± y_N). However, these functions are not all distinct: there are r_i + 1 distinct ways to assign ± signs to the r_i copies of x_i (ignoring permutations of the signs). Therefore, there are ∏_{i=1}^n (r_i + 1) distinct functions σ(±y_1 ± ··· ± y_N), proving that m_1(p, σ) ≤ ∏_{i=1}^n (r_i + 1).

We now adapt methods introduced in [15] to show that this number of neurons is also necessary for approximating p(x). Let m ≡ m_1(p, σ) and suppose that σ(x) has the Taylor expansion Σ_{k=0}^∞ σ_k x^k. Then, by grouping terms of each order, we conclude that there exist constants a_{ij} and w_j such that

Σ_{j=1}^m σ_N w_j (a_{1j} x_1 + ··· + a_{nj} x_n)^N = p(x),    (3)

Σ_{j=1}^m σ_k w_j (a_{1j} x_1 + ··· + a_{nj} x_n)^k = 0   for 0 ≤ k < N.    (4)
For each S ⊆ X, let us take the derivative of equations (3) and (4) by every variable that occurs in S, where we take multiple derivatives of variables that occur multiple times. This gives equations (5) and (6). Let A be the matrix whose rows are indexed by the sub-multisets S ⊆ X and whose columns are indexed by j = 1, ..., m, with entries given by the coefficients appearing in (5). We claim that A has full row rank. This would show that the number of columns m is at least the number of rows ∏_{i=1}^n (r_i + 1), proving the desired lower bound on m.

Suppose towards contradiction that the rows A_{S_ℓ,•} admit a linear dependence:

Σ_ℓ c_ℓ A_{S_ℓ,•} = 0,

where the coefficients c_ℓ are nonzero and the S_ℓ denote distinct subsets of X. Set s = max_ℓ |S_ℓ|. Then, take the dot product of each side of the above equation by a suitable vector with entries indexed by j. We can use (5) to simplify the first term and (6) (with k = N + |S_ℓ| − s) to simplify the second term, giving a vanishing linear combination of the monomials ∏_{x_i ∈ S_ℓ} x_i with coefficients proportional to the c_ℓ. Since these monomials are linearly independent, this contradicts our assumption that the c_ℓ are nonzero. We conclude that A has full row rank, and therefore that m ≥ ∏_{i=1}^n (r_i + 1). This completes the proof of equation (1).
For equation (2), it follows from Proposition II.6 part (2) below that for each i, we can approximate x_i^{r_i} using 7⌈log_2(r_i)⌉ neurons arranged in a deep network. Therefore, we can approximate all of the x_i^{r_i} using a total of Σ_i 7⌈log_2(r_i)⌉ neurons. From [15], we know that these n terms can be multiplied using 4n additional neurons, giving us a total of Σ_i (7⌈log_2(r_i)⌉ + 4). This completes the proof.
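The count ∏_{i=1}^n (r_i + 1) of distinct sign-assignment functions at the heart of equation (1) can be confirmed by brute-force enumeration. The following sketch (function and variable names are ours) enumerates all 2^N sign patterns over the multiset X and counts the distinct linear forms that result:

```python
from itertools import product
from math import prod

def distinct_sign_functions(r):
    """Count distinct linear forms +-y1 +- ... +- yN when the y's are the
    elements of the multiset X (x_i repeated r_i times): a form is
    determined only by the net signed count of each variable's copies."""
    N = sum(r)
    forms = set()
    for signs in product((1, -1), repeat=N):
        coeffs, pos = [], 0
        for ri in r:
            coeffs.append(sum(signs[pos:pos + ri]))  # net coefficient of x_i
            pos += ri
        forms.add(tuple(coeffs))
    return len(forms)

for r in [(1, 1, 1), (2, 1), (3, 2)]:
    print(r, distinct_sign_functions(r), prod(ri + 1 for ri in r))
```

For each multiplicity vector, the brute-force count matches ∏(r_i + 1) exactly.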
In order to approximate a degree-N polynomial effectively with a single hidden layer, we must naturally assume that σ has nonzero Nth Taylor coefficient. However, if we do not wish to make assumptions about the other Taylor coefficients of σ, we can still prove the following weaker result.

Theorem II.2. Let p(x) denote the monomial x_1^{r_1} x_2^{r_2} ··· x_n^{r_n}, with N = Σ_i r_i, and suppose that σ(x) has nonzero Nth Taylor coefficient. Then m_1(p, σ) ≤ ∏_{i=1}^n (r_i + 1), and m_1(p, σ) ≥ max_s C_s, where C_s denotes the coefficient of y^s in g(y) = ∏_{i=1}^n (1 + y + ··· + y^{r_i}).

Proof outline. The first statement follows as in the proof of Theorem II.1. For the second, consider once again the equation (5). The left-hand side can be written, for any S, as a linear combination of basis functions of the form (7). Letting S vary over multisets of fixed size s, we see that the right-hand side of (5) attains every degree-s monomial that divides p(x). The number of such monomials is the coefficient C_s of the term y^s in the polynomial g(y).
Since such monomials are linearly independent, we conclude that the number of basis functions of the form (7) above must be at least C_s. Picking s to maximize C_s gives us the desired result.
It is natural now to consider the cost of approximating general polynomials. However, without further constraint, this is relatively uninstructive, because polynomials of degree d in n variables live within a space of dimension C(n+d, d), and therefore most require exponentially many neurons for any depth of network. We therefore consider polynomials of sparsity c: that is, those that can be represented as the sum of c monomials. This includes many natural functions.
The following theorem, when combined with Theorem II.1, shows that general polynomials p with subexponential sparsity have exponentially large m_1(p, σ), but subexponential m(p, σ).
Theorem II.4. Suppose that p(x) is a multivariate polynomial of degree N, equal to the sum of c monomials q_1(x), q_2(x), ..., q_c(x). Suppose that the nonlinearity σ(x) has nonzero Taylor coefficients up to x^N. Then:

(1) m_1(p, σ) ≥ (1/c) · max_j m_1(q_j, σ),
(2) m(p, σ) ≤ Σ_{j=1}^c m(q_j, σ).

Proof outline. Our proof of Theorem II.1 relied upon the fact that all nonzero partial derivatives of a monomial are linearly independent. This fact is not true for general polynomials p; however, an exactly similar argument shows that m_1(p, σ) is at least the number of linearly independent partial derivatives of p, taken with respect to multisets of the input variables.
Consider the monomial q of p such that m_1(q, σ) is maximized, and suppose that q(x) = x_1^{r_1} x_2^{r_2} ··· x_n^{r_n}. By Theorem II.1, m_1(q, σ) is equal to the number ∏_{i=1}^n (r_i + 1) of distinct monomials that can be obtained by taking partial derivatives of q. Let Q be the set of such monomials, and let D be the set of (iterated) partial derivatives corresponding to them, so that for d ∈ D, we have d(q) ∈ Q.
Consider the set of polynomials P = {d(p) | d ∈ D}. We claim that there exists a linearly independent subset of P with size at least |D|/c. Suppose to the contrary that P′ is a maximal linearly independent subset of P with |P′| < |D|/c.

Since p has c monomials, every element of P has at most c monomials. Therefore, the total number of distinct monomials appearing in elements of P′ is less than |D|. However, there are at least |D| distinct monomials contained in elements of P, since for d ∈ D, the polynomial d(p) contains the monomial d(q), and by definition all d(q) are distinct as d varies. We conclude that there is some polynomial p′ ∈ P \ P′ containing a monomial that does not appear in any element of P′. But then p′ is linearly independent of P′, a contradiction since we assumed that P′ was maximal.
We conclude that some linearly independent subset of P has size at least |D|/c, and therefore that the space of partial derivatives of p has rank at least |D|/c = m_1(q, σ)/c. This proves part (1) of the theorem. Part (2) follows immediately from the definition of m(p, σ).
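The rank-based lower bound in this proof can be verified numerically for a small example. The sketch below (illustrative; the dictionary representation and names are ours) stores a polynomial as a map from exponent tuples to coefficients, differentiates p = x1·x2 + x3·x4 by every sub-multiset of the variables of its monomial q = x1·x2, and computes the rank of the resulting coefficient matrix:

```python
import numpy as np

def diff(poly, i):
    """Partial derivative with respect to variable i of a polynomial
    stored as {exponent tuple: coefficient}."""
    out = {}
    for exps, c in poly.items():
        if exps[i] > 0:
            e = list(exps)
            e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0.0) + c * exps[i]
    return out

# p = x1*x2 + x3*x4: sparsity c = 2, and its monomial q = x1*x2 has
# m_1(q, sigma) = (1+1)(1+1) = 4 by Theorem II.1.
p = {(1, 1, 0, 0): 1.0, (0, 0, 1, 1): 1.0}

# Differentiate p by every sub-multiset of q's variables {x1, x2}.
derivs = []
for subset in [(), (0,), (1,), (0, 1)]:
    q = p
    for i in subset:
        q = diff(q, i)
    derivs.append(q)

# Rank of the coefficient matrix (rows = derivatives, cols = monomials).
monomials = sorted({m for dp in derivs for m in dp})
M = np.array([[dp.get(m, 0.0) for m in monomials] for dp in derivs])
rank = np.linalg.matrix_rank(M)
print(rank)  # comfortably above the guaranteed bound m_1(q, sigma)/c = 2
```

Here the rank turns out to be the full 4, above the guaranteed |D|/c = 2.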
C. Univariate polynomials
As with multivariate polynomials, depth can offer an exponential savings when approximating univariate polynomials. We show below (Proposition II.5) that a shallow network can approximate any degree-d univariate polynomial with a number of neurons at most linear in d.
The monomial x^d requires d + 1 neurons in a shallow network (Proposition II.6), but can be approximated with only logarithmically many neurons in a deep network. Thus, depth allows us to reduce networks from linear to logarithmic size, while for multivariate polynomials the gap was between exponential and linear. The difference here arises because the dimensionality of the space of univariate degree-d polynomials is linear in d, while the dimensionality of the space of multivariate degree-d polynomials is exponential in d.
Proposition II.5. Let σ be a nonlinear function with nonzero Taylor coefficients. Then, we have m 1 (p, σ) ≤ d + 1 for every univariate polynomial p of degree d.
Proof. Pick a_0, a_1, ..., a_d to be arbitrary, distinct real numbers. Consider the Vandermonde matrix A with entries A_{ij} = a_i^j. It is well-known that det(A) = ∏_{i<i′} (a_{i′} − a_i) ≠ 0. Hence, A is invertible, which means that multiplying its columns by nonzero values gives another invertible matrix. Suppose that we multiply the jth column of A by σ_j to get A′, where σ(x) = Σ_j σ_j x^j is the Taylor expansion of σ(x). Now, observe that the ith row of A′ is exactly the coefficients of σ(a_i x), up to the degree-d term. Since A′ is invertible, the rows must be linearly independent, so the polynomials σ(a_i x), restricted to terms of degree at most d, must themselves be linearly independent. Since the space of degree-d univariate polynomials is (d + 1)-dimensional, these d + 1 linearly independent polynomials must span the space. Hence, m_1(p, σ) ≤ d + 1 for any univariate degree-d polynomial p. In fact, we can fix the weights from the input neuron to the hidden layer (to be a_0, a_1, ..., a_d, respectively) and still represent any polynomial p with d + 1 hidden neurons.
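This argument is constructive and can be carried out numerically. In the sketch below, σ = exp (so σ_j = 1/j!), and the weights a_i and target polynomial are arbitrary illustrative choices of ours:

```python
import numpy as np
from math import factorial

# Approximate p(x) = 1 + 2x - x^3 (degree d = 3) with d + 1 = 4 neurons
# sigma(a_i * x), taking sigma = exp so that sigma_j = 1/j!.
d = 3
a = np.array([0.5, 1.0, -1.0, 2.0])      # distinct weights a_i (our choice)
p = np.array([1.0, 2.0, 0.0, -1.0])      # coefficients of p, lowest degree first

# A'[j, i] = sigma_j * a_i^j: row j matches the degree-j Taylor coefficient.
A = np.array([[ai**j / factorial(j) for ai in a] for j in range(d + 1)])
w = np.linalg.solve(A, p)                # output-layer weights

x = 0.01
net = float(np.dot(w, np.exp(a * x)))    # the 4-neuron shallow network
exact = 1 + 2*x - x**3
print(abs(net - exact))                  # only degree >= 4 error remains
```

Solving the (scaled Vandermonde) linear system matches every Taylor coefficient of p up to degree d, leaving only higher-order error, which is tiny for small x.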
Proposition II.6. Let p(x) = x^d, and suppose that σ(x) is a nonlinear function with nonzero Taylor coefficients. Then:

(1) m_1(p, σ) = d + 1,
(2) m(p, σ) ≤ 7⌈log_2(d)⌉.

Proof. Part (1) follows from part (1) of Theorem II.1 above, by setting n = 1 and r_1 = d.
For part (2), observe that we can approximate the square x^2 of an input x with three neurons in a single layer:

x^2 ≈ (σ(x) + σ(−x) − 2σ(0)) / (2σ_2),

where σ_2 is the second Taylor coefficient of σ; the error consists of higher-order terms. We refer to this construction as a square gate, and the construction of [15] as a product gate. We also use identity gate to refer to a neuron that simply preserves the input of a neuron from the preceding layer (this is equivalent to the skip connections in residual nets).
Consider a network in which each layer contains a square gate (3 neurons) and either a product gate or an identity gate (4 or 1 neurons, respectively), according to the following construction: the square gate squares the output of the preceding square gate, yielding inductively a result of the form x^{2^k}, where k is the depth of the layer. Writing d in binary, we use a product gate if there is a 1 in the 2^{k−1}-place; if so, the product gate multiplies the output of the preceding product gate by the output of the preceding square gate. If there is a 0 in the 2^{k−1}-place, we use an identity gate instead of a product gate. Thus, each layer computes x^{2^k} and multiplies x^{2^{k−1}} into the running product if the 2^{k−1}-place in d is 1. The process stops when the product gate outputs x^d.
This network clearly uses at most 7⌈log_2(d)⌉ neurons, with the worst case occurring when d + 1 is a power of 2.
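Both gates can be mirrored in ordinary code. Below, a square gate instantiated with σ = exp (so σ(0) = 1 and σ_2 = 1/2) is checked numerically, and the layer-by-layer square/product/identity construction is simulated with exact arithmetic standing in for the approximate gates (a sketch; the names are ours):

```python
import math

def square_gate(x):
    """Three neurons sigma(x), sigma(-x), sigma(0) with sigma = exp:
    (sigma(x) + sigma(-x) - 2*sigma(0)) / (2*sigma_2) = x^2 + O(x^4)."""
    return (math.exp(x) + math.exp(-x) - 2.0) / (2 * 0.5)

def power_by_gates(x, d):
    """Compute x^d in O(log d) layers: each layer squares the running
    power of x (square gate) and, if the current binary digit of d is 1,
    multiplies it into the accumulator (product gate); otherwise the
    accumulator passes through an identity gate."""
    square, acc, layers = x, 1.0, 0
    while d > 0:
        if d & 1:
            acc *= square      # product gate
        square *= square       # square gate
        d >>= 1
        layers += 1
    return acc, layers

print(square_gate(0.01))       # ~0.0001, with O(x^4) error
print(power_by_gates(2.0, 11)) # (2048.0, 4), since 11 = 1011 in binary
```

The depth of the simulated network grows with the number of binary digits of d, matching the ⌈log_2(d)⌉ scaling in the proposition.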
D. Tensor decomposition
We conclude this section by noting interesting connections between the value m 1 (p, σ) and the established hard problem of tensor decomposition. Specifically, we show (Proposition II.7) that the minimum number of neurons required to approximate a polynomial p equals the symmetric tensor rank of a tensor constructed from the coefficients of p.
Let T be an order-d tensor of dimensions n × ··· × n. We say that T is symmetric if the entry T_{i_1 i_2 ··· i_d} is identical for any permutation of i_1, i_2, ..., i_d. For T symmetric, the symmetric tensor rank R_S(T) is defined to be the minimum r such that T can be written

T = Σ_{i=1}^r λ_i a_i ⊗ a_i ⊗ ··· ⊗ a_i  (d factors),

with λ_i ∈ R and a_i ∈ R^n for each i [16]. Thus, for d = 1, T is simply a vector and R_S(T) = 1. For d = 2, T is a symmetric matrix, and the symmetric tensor rank is simply the rank of T, where one can take the values λ_i above to be eigenvalues of the matrix T.
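For d = 2 this can be checked directly with linear algebra; the quadratic below is an arbitrary example of ours:

```python
import numpy as np

# d = 2: the monomial tensor of p(u) = u1^2 + u1*u2 is the symmetric
# matrix with the cross-term coefficient split between the two
# permutations of its indices.
T = np.array([[1.0, 0.5],
              [0.5, 0.0]])

# For matrices, symmetric rank equals ordinary rank: T = sum_i lam_i v_i v_i^T.
lam, V = np.linalg.eigh(T)
recon = sum(l * np.outer(v, v) for l, v in zip(lam, V.T))

print(np.linalg.matrix_rank(T))   # 2
print(np.allclose(recon, T))      # True: eigen-decomposition recovers T
```

Since the matrix has rank 2, any shallow network for this quadratic needs at least two neurons, anticipating the bound below.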
For a multiset S, we let π(S) denote the number of distinct permutations of S. Thus, if q_1, ..., q_s are the multiplicities of the distinct elements of S, we have the multinomial coefficient

π(S) = (q_1 + ··· + q_s)! / (q_1! ··· q_s!).
For p(u) = p(u_1, ..., u_n) a homogeneous multivariate polynomial of degree d, we define the monomial tensor T(p) as follows:

T(p)_{j_1 j_2 ··· j_d} = [u_{j_1} u_{j_2} ··· u_{j_d}]_p / π({j_1, ..., j_d}),

where [•]_p denotes the coefficient of a monomial in p.
Proposition II.7. Let p(u) be a homogeneous multivariate polynomial of degree d, and suppose that σ has nonzero dth Taylor coefficient σ_d. Then m_1(p, σ) = R_S(T(p)).

Proof. Suppose that we can approximate p(u) using a neural net with m neurons in a single hidden layer. Then, there exist vectors a_1, a_2, ..., a_m of weights from the input to the hidden layer such that p(u) ≈ w_1 σ(a_1 · u) + ··· + w_m σ(a_m · u).
If we consider only terms of degree d, we obtain

p(u) = σ_d Σ_{k=1}^m w_k (a_k · u)^d.    (8)

The symmetric tensor T = σ_d Σ_{k=1}^m w_k a_k ⊗ ··· ⊗ a_k has symmetric tensor rank at most m. Furthermore, by equation (8), T represents the polynomial p(u), in the sense that Σ_{j_1,...,j_d} T_{j_1 ··· j_d} u_{j_1} ··· u_{j_d} = p(u). By the definition of T, the entries T_{j_1 ··· j_d} are equal up to permutation of j_1, ..., j_d. Therefore, T must equal the monomial tensor T(p), showing that m ≥ R_S(T(p)) as desired.
Corollary II.8. Let p(u) = p(u_1, ..., u_n) be a multivariate polynomial of degree d. For c = 0, 1, ..., d, let p_c(u) be the homogeneous polynomial obtained by taking all terms of p(u) with degree c. If R_S is the maximum symmetric rank of T(p_c(u)) over all c, then m_1(p) ≥ R_S.
The proof of this statement closely follows that of Proposition II.7.
Various results are known for the symmetric rank of tensors over the complex numbers C [16]. Notably, Alexander and Hirschowitz [17] showed in a highly non-trivial proof that the symmetric rank of a generic symmetric tensor of dimension n and order k over C equals ⌈C(n+k−1, k)/n⌉, with the exception of a few small values of k and n. However, this result does not hold over the real numbers R, and in fact there can be several possible generic ranks for symmetric tensors over R [16].
III. HOW EFFICIENCY IMPROVES WITH DEPTH
We now consider how m k (p, σ) scales with k, interpolating between exponential in n (for k = 1) and linear in n (for k = log n). In practice, networks with modest k > 1 are effective at representing natural functions. We explain this theoretically by showing that the cost of approximating the product polynomial drops off rapidly as k increases.
A. Networks of constant depth
By repeated application of the shallow network construction in Lin, Tegmark, and Rolnick [15], we obtain the following upper bound on m k (p, σ), which we conjecture to be essentially tight. Our approach is reminiscent of tree-like network architectures discussed e.g. in [10], in which groups of input variables are recursively processed in successive layers.
Theorem III.1. For p(x) equal to the product x_1 x_2 ··· x_n, and for any σ with all nonzero Taylor coefficients, we have

m_k(p, σ) = O( n^{(k−1)/k} · 2^{n^{1/k}} ).    (9)

Proof. We construct a network in which groups of the n inputs are recursively multiplied. The n inputs are first divided into groups of size b_1, and each group is multiplied in the first hidden layer using 2^{b_1} neurons (as described in [15]). Thus, the first hidden layer includes a total of 2^{b_1} n/b_1 neurons. This gives us n/b_1 values to multiply, which are in turn divided into groups of size b_2. Each group is multiplied in the second hidden layer using 2^{b_2} neurons. Thus, the second hidden layer includes a total of 2^{b_2} n/(b_1 b_2) neurons.
We continue in this fashion for b_1, b_2, ..., b_k such that b_1 b_2 ··· b_k = n, giving us one neuron which is the product of all of our inputs. By considering the total number of neurons used, we conclude

m_k(p, σ) ≤ Σ_{i=1}^k 2^{b_i} · n / (b_1 b_2 ··· b_i).    (10)

Setting b_i = n^{1/k} for each i gives us the desired bound (9).
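The accounting in this proof is easy to reproduce. The sketch below (our own tallying code) charges 2^(group size) neurons to each group at each layer and compares the result against the shallow cost of 2^n:

```python
import math

def shallow_product_neurons(n):
    # Single hidden layer: 2^n neurons (Theorem II.1 with all r_i = 1).
    return 2 ** n

def deep_product_neurons(n, k):
    """Neurons used by the k-hidden-layer grouping construction: at each
    layer, multiply the remaining values in groups of size ~n^(1/k)."""
    b = math.ceil(n ** (1.0 / k))
    remaining, total = n, 0
    for _ in range(k):
        if remaining == 1:
            break
        sizes = [min(b, remaining - i) for i in range(0, remaining, b)]
        total += sum(2 ** s for s in sizes)  # 2^(group size) per group
        remaining = len(sizes)
    return total

print(shallow_product_neurons(20))   # 1048576
print(deep_product_neurons(20, 2))   # 144: four groups of 5, then one of 4
```

Even at k = 2 the cost collapses from roughly a million neurons to about a hundred for n = 20, illustrating how quickly the 2^{n^{1/k}} scaling falls off with depth.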
In fact, we can solve for the choice of b_i such that the upper bound in (10) is minimized, under the condition b_1 b_2 ··· b_k = n. Using the technique of Lagrange multipliers, we know that the optimum occurs at a critical point of the function

L = Σ_{i=1}^k 2^{b_i} · n / (b_1 b_2 ··· b_i) + λ (b_1 b_2 ··· b_k − n).

Differentiating L with respect to b_i, we obtain the constraint (11). Dividing (11) by ∏_{j=i+1}^k b_j and rearranging gives us the recursion (12). Thus, the optimal b_i are not exactly equal, but very slowly increasing with i (see Figure 1).
FIG. 1: The optimal settings for {b_i}_{i=1}^k as n varies are shown for k = 1, 2, 3. Observe that the b_i converge to n^{1/k} for large n, as witnessed by a linear fit in the log-log plot. The exact values are given by equations (12) and (13).
The following conjecture states that the bound given in Theorem III.1 is (approximately) optimal.
Conjecture III.2. For p(x) equal to the product x_1 x_2 ··· x_n, and for any σ with all nonzero Taylor coefficients, we have

m_k(p, σ) = 2^{Θ(n^{1/k})},    (14)

i.e., the exponent grows as n^{1/k} as n → ∞.
We empirically tested Conjecture III.2 by training ANNs to predict the product of input values x 1 , . . . , x n with n = 20 (see Figure 2). The rapid interpolation from exponential to linear width aligns with our predictions.
In our experiments, we used feedforward networks with dense connections between successive layers, with nonlinearities instantiated as the hyperbolic tangent function. Similar results were also obtained for rectified linear units (ReLUs) as the nonlinearity, despite the fact that this function does not satisfy our hypothesis of being everywhere differentiable. The number of layers was varied, as was the number of neurons within a single layer. The networks were trained using the AdaDelta optimizer [18] to minimize the absolute value of the difference between the predicted and actual values. Input variables x i were drawn uniformly at random from the interval [0, 2], so that the expected value of the output would be of manageable size.
FIG. 2: Performance of trained networks in approximating the product of 20 input variables, ranging from red (high mean error) to blue (low mean error). The curve w = n^{(k−1)/k} · 2^{n^{1/k}} for n = 20 is shown in black. In the region above and to the right of the curve, it is possible to effectively approximate the product function (Theorem III.1).
Eq. (14) provides a helpful rule of thumb for how deep is deep enough. Suppose, for instance, that we wish to keep typical layers no wider than about a thousand (~2^{10}) neurons. Eq. (14) then implies n^{1/k} ≲ 10, i.e., that the number of layers should be at least k ≳ log_{10} n.
B. Circuit complexity
It is interesting to consider how our results on the inapproximability of simple polynomials by polynomial-size neural networks compare to results for Boolean circuits.
Recall that TC^0 is defined as the set of problems that can be solved by a Boolean circuit of constant depth and polynomial size, where the circuit is allowed to use AND, OR, NOT, and MAJORITY gates of arbitrary fan-in. It is an open problem whether TC^0 equals the class TC^1 of problems solvable by circuits for which the depth is logarithmic in the size of the input.
In this section, we consider the feasibility of strong general no-flattening results. It would be very interesting if one could show that general polynomials p in n variables require a superpolynomial number of neurons to approximate for any constant number of hidden layers. That is, for each integer k ≥ 1, we would like to prove a lower bound on m_k(p, σ) that grows faster than polynomially in n.
Such a result might seem to address questions such as whether TC^0 and TC^1 are equal. However, Boolean circuits compute using 0/1 values, while the neurons of our artificial neural networks take on arbitrary real values. To preserve 0/1 values at all neurons, we can restrict inputs to such values and take the nonlinear activation to be the Heaviside step function:

σ(x) = 1 for x ≥ 0, and σ(x) = 0 for x < 0.

This gives us essentially a multi-layered perceptron, as inspired by McCulloch and Pitts [19].
We assume also that each neuron has access to a fixed bias constant: that is, a neuron receiving inputs x 1 , x 2 , . . . , x n is of the form σ(a 0 + a 1 x 1 + a 2 x 2 + · · · + a n x n ) where a 0 , a 1 , a 2 , . . . , a n are real constants. Such a neuron corresponds to a weighted threshold gate in Boolean circuits.
It follows from the work of [20] and [21] that such artificial neural nets (ANNs), with constant depth and polynomial size, have exactly the same power as T C 0 circuits. That is, weighted threshold gates can simulate and be simulated by constant-depth, polynomial-size circuits of AND, OR, NOT, and MAJORITY gates. The following are simple constructions for these four types of gates using weighted thresholds.
AND(x_1, x_2, ..., x_n) = σ(x_1 + x_2 + ··· + x_n − n),
OR(x_1, x_2, ..., x_n) = σ(x_1 + x_2 + ··· + x_n − 1),
NOT(x) = σ(−x),
MAJORITY(x_1, x_2, ..., x_n) = σ(x_1 + x_2 + ··· + x_n − n/2).

Thus, we should not hope easily to prove general no-flattening results for Boolean functions, but the case of polynomials in real-valued variables may be more tractable. Simply approximating a real value by a Boolean circuit requires arbitrarily many bits. Therefore, performing direct computations on real values is clearly intractable for TC^0 circuits. Moreover, related work such as [4, 8, 13] has already proven gaps in expressivity for real-valued neural networks of different depths, for which the analogous results remain unknown in Boolean circuits.
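These standard threshold-gate constructions can be checked exhaustively. The sketch below adopts the convention σ(x) = 1 for x ≥ 0 and defines MAJORITY to fire when at least half its inputs are 1:

```python
def heaviside(x):
    # Weighted-threshold nonlinearity: output 1 iff the input is >= 0.
    return 1 if x >= 0 else 0

def AND(*x):
    return heaviside(sum(x) - len(x))        # fires iff all inputs are 1

def OR(*x):
    return heaviside(sum(x) - 1)             # fires iff some input is 1

def NOT(x):
    return heaviside(-x)                     # fires iff the input is 0

def MAJORITY(*x):
    return heaviside(sum(x) - len(x) / 2)    # fires iff at least half are 1

print(AND(1, 1, 1), OR(0, 0, 0), NOT(1), MAJORITY(1, 1, 0))
```

Each gate is a single weighted threshold neuron, so constant-depth networks of such neurons simulate constant-depth circuits of these gates with no blowup in size.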
IV. CONCLUSION
We have shown how the power of deeper ANNs can be quantified even for simple polynomials. We have proven that there is an exponential gap between the width of shallow and deep networks required for approximating a given sparse polynomial. For n variables, a shallow network requires size exponential in n, while a deep network requires at most linearly many neurons. Networks with a constant number k > 1 of hidden layers appear to interpolate between these extremes, following a curve exponential in n 1/k . This suggests a rough heuristic for the number of layers required for approximating simple functions with neural networks. For example, if we want no layers to have more than 10 3 neurons, say, then the minimum number of layers required grows only as log 10 n.
It is worth noting that our constructions enjoy the property of locality mentioned in [13], which is also a feature of convolutional neural nets. That is, each neuron in a layer is assumed to be connected only to a small subset of neurons from the previous layer, rather than the entirety of them (or some large fraction). In fact, we showed (e.g. Prop. II.6) that there exist natural functions that can be computed in a linear number of neurons, where each neuron is connected to at most two neurons in the preceding layer, which nonetheless cannot be computed with fewer than exponentially many neurons in a single layer, no matter how many connections are used. Our construction can also easily be framed with reference to the other properties mentioned in [13]: those of sharing (in which weights are shared between neural connections) and pooling (in which layers are gradually collapsed, as our construction essentially does with recursive combination of inputs).
This paper has focused exclusively on the resources (notably neurons and synapses) required to compute a given function. An important complementary challenge is to quantify the resources (e.g. training steps) required to learn the computation, i.e., to converge to appropriate weights using training data, possibly a fixed amount thereof, as suggested in [22]. There are simple functions that can be computed with polynomial resources but require exponential resources to learn [23]. It is quite possible that architectures we have not considered increase the feasibility of learning. For example, residual networks (ResNets) [24] and unitary nets (see e.g. [25, 26]) are no more powerful in representational ability than conventional networks of the same size, but by being less susceptible to the "vanishing/exploding gradient" problem, they are far easier to optimize in practice. We look forward to future work that will help us understand the power of neural networks to learn.
V. ACKNOWLEDGMENTS
This work was supported by the Foundational Questions Institute http://fqxi.org/, the Rothberg Family Fund for Cognitive Science and NSF grant 1122374. We thank Scott Aaronson, Surya Ganguli, David Budden, and Henry Lin for helpful discussions and suggestions.
"year": 2017,
"sha1": "60a74f80f6e9b924ec92c2a31245560b12469481",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "60a74f80f6e9b924ec92c2a31245560b12469481",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
The disuse of hearing aids in elderly people diagnosed with a presbycusis at an old age home, in Johannesburg, South Africa: a pilot study
Background: Hearing loss is the most common form of human sensory deficit, with its prevalence highest within the geriatric population. Approximately a third of adults aged 61 years and older exhibit the characteristics of presbycusis, the number one contributor to communication disorders among the elderly, which affects the social, functional and psychological wellbeing of the elderly. Subsequently, this leads to loneliness, isolation, dependence and frustration.

Objective: To explore reasons why elderly people diagnosed with presbycusis and fitted with hearing aids stop using hearing aids post fitting.

Method: A qualitative research design was adopted. Through purposive sampling, ten participants consisting of three males and seven females, aged between 74 and 85, participated in face-to-face, semi-structured interviews.

Results: The following themes emerged: discomfort, lack of information about hearing aids, difficulty with function and maintenance, and the lack of patient involvement in the hearing aid selection process.

Conclusion: There are different reasons for disuse of hearing aids in elderly patients. Audiologists should ensure that hearing aid selection is patient specific and inclusive. Expectations of the elderly regarding hearing aid benefits and limitations should also be addressed by audiologists before fitting hearing aids.
Introduction
Being able to communicate is a foundation of healthy ageing, as communication allows people to remain cognitively and socially engaged with families, friends and other individuals in society 1 . The inability to communicate due to hearing impairments can lead to social isolation, a significant contributor to morbidity and mortality in the elderly population 1 . Dewane 2 asserts that if there were no need to communicate every day, older adults with a hearing loss would have no problem. This statement is true, considering that communication is a basic human need 3 . It is even truer when considering that communication is a complex process that can be affected by age 4 . Ageing is associated with a general decline in health; therefore, elderly people tend to seek more medical attention with advancing age. At this stage, communication becomes critical, as life and physiological changes become apparent 4 . These physiological changes may include the presence of a hearing loss, which may prompt the elderly to seek assistance regarding their hearing status. It is at this stage that some elderly people are diagnosed with age-related hearing loss, known as presbycusis.
and is the number one contributor to communication disorders among the elderly 9 . It affects social, functional and psychological wellbeing, subsequently leading to loneliness, isolation, dependence and frustration 5 . Presbycusis is characterized by the loss of hearing sensitivity in the high frequencies, difficulties hearing speech in the presence of background noise, slowed central processing of acoustic information and impaired localisation of sound sources 8-10 , hence the communication difficulties. If the hearing loss is left untreated, it can have a severe impact on the patients, significant others and society as a whole 11 .
The diagnosis of presbycusis is primarily made by audiologists, professionals qualified in identifying and managing hearing loss. The primary management of presbycusis is through the use of hearing aids 8,12 , which are primarily prescribed and fitted by audiologists. While hearing aids do not restore lost sensory cells, they do provide acoustic power to compensate for declining metabolic function 8 . Sadly, in some cases, hearing aids do not yield adequate benefits 12 . Consequently, a number of patients do not use hearing aids. Gates and Mills 8 reported that 25-40% of people with hearing aids either underuse or abandon them. Fewer elderly Americans with hearing loss use hearing aids, even with the advances in hearing health care technology 1 . Some elderly people fitted with hearing aids encounter experiences with amplification that directly influence their attitudes toward hearing amplification devices 1 . Therefore, the purpose of this study is to understand the reasons why elderly people fitted with hearing aids stop using them. The need for the study was informed by an observation made by an audiologist employed at an old age home, where residents fitted with hearing aids stopped using them. In order to understand the underlying reasons behind this phenomenon, and to provide audiologists with evidence-based findings, this pilot study was undertaken.
Methodology
A qualitative research design was employed, with participants recruited through purposive sampling. An audiologist working at an old age home in Johannesburg was approached to act as a gatekeeper, in order to identify, inform and request participation from suitable participants. When participants agreed, the audiologist forwarded their contact details to the researcher for completion of the process.
Participants were recruited from a modest, privately funded old age home situated in Johannesburg, South Africa. This old age home has been in operation for approximately 100 years. The facility caters for approximately 500 residents, the majority of whom are in need of long-term medical and nursing care; hence, there is a full complement of nursing staff, general practitioners and consulting specialists. Residents have access to a variety of services such as physiotherapy, speech pathology, audiology and radiography, as well as recreational activities such as hairdressing and library visits.
Participants had to be 65 years or older, diagnosed with presbycusis confirmed by the in-house audiologists and recorded as such in the patient's file; moreover, patients had to have been previously fitted with hearing aids but currently not be using them. Participants who had never used hearing aids and those who were still using hearing aids were excluded from this study, as were individuals below the age of 65. Subsequently, 10 participants, consisting of three males and seven females with ages ranging from 74 to 85, were recruited for this pilot study. Data were collected through semi-structured interviews formulated by the researcher. Interviews were conducted face-to-face in English and lasted approximately 45 minutes. The interviews were conducted in a private room at the old age home over a period of three months.
Prior to commencing the study, ethical approval was obtained from the Human Research Ethics Committee (Non-Medical), Protocol Number H110922. Thereafter, permission was requested from the old age home, from the resident audiologist who acted as the gatekeeper, and from the participants themselves. On the days of the interviews, the researcher furnished participants with information letters and consent forms. The information letter contained details regarding the aim and nature of the study as well as ethical considerations. Participants were informed that all information obtained would be kept confidential and that any identifying information such as the participant's name would be removed; that they could withdraw from the study without any negative consequences; and, lastly, that anonymity could not be guaranteed as they had been referred by the resident audiologist. Consent forms were given to participants to sign, indicating that they agreed to participate. Permission to digitally record the interviews was also requested and obtained.
Data were analysed using inductive thematic analysis, as this allowed for the coding of data without trying to fit it into a pre-existing coding frame or the researcher's analytic preconceptions, thereby allowing themes to emerge from the data themselves 13 . The analysis of the data was in accordance with recommendations by Creswell 14 .
To address any concerns pertaining to bias or subjectivity in the analysis of the data by the author, an audiologist, the author acknowledges that "all research is subject to researcher bias" 15 . Hence, a peer reviewer was requested to review some of the analysed data to confirm that there was no bias in the analysis and interpretation of results.
Results
Individuals with presbycusis encounter different experiences with hearing amplification, which directly influences their opinions and attitudes toward hearing aids. The following themes emerged: discomfort, lack of information about hearing aids, and difficulty with function and maintenance of the hearing aid.
Theme 1: Discomfort
Seven participants complained of discomfort when wearing hearing aids. Discomfort was determined in terms of pain, background noise and/or a tight fit of the hearing aid.
Subtheme 1: Pain
Three participants shared their experiences: "When I put it in, I'm not comfortable with it. My ear gets sore."P1. P2 also experienced discomfort relating to pain, "They have to shove them in and they hurt!" while P4 stated that; "the thing that goes inside the ear you know? It was like when you have a stone in your shoe, it was painful. When I put it in, it is painful. It is sore and I take it out for that reason".
Subtheme 2: Background noise
The presence of background noise was also cited as one of the contributors to poor compliance. P5 stated "It just made a noise, all I could ever hear was noise." Similarly, P6 shared his experience: "I was getting frustrated with the noise! Firstly I'm the sort of person that doesn't like a noisy background and when people are trying to talk to me and this background noise is coming through I can't hear anything." P8 also expressed frustration with background noise: "I didn't count on the noise, there's more noise than anything else and I just can't get used to it. Like the generator, its right outside my window and it makes so much noise."
Subtheme 3: Tight fit
Lastly, concerns regarding the fit of the hearing aids were discussed. P1 shared, "I used to wear it every day but it's too big and thick and I'm not comfortable with it. My ear doesn't feel right with it. It's just a nuisance". Similarly, P2 stated, "Well they were too big for the opening of my ear. They force them in; I think they use Vaseline to make them slippery. They were forced in and they were too tight."
Theme 2: Lack of information about hearing aids
To elicit responses with regard to information provided to patients during orientation, participants were asked whether they believed they were provided with sufficient information on how to care for and operate their hearing aids. Six participants indicated that they were not provided with adequate information. The remaining participants could not remember; however, the majority of the participants felt that their expectations of hearing aids were not met by audiologists. Therefore, three subthemes emerged from this theme: poor orientation, poor patient involvement and unmet expectations.
Subtheme 1: Poor Hearing aid orientation
Six participants reported that they were not provided with adequate information during the hearing aid orientation process. P2 stated that he received "no information at all! Nothing. It was a woman and she didn't give me any information she only shoved the hearing aids into my ears." Likewise, P7 stated that he was provided with no information: "Nothing. You know you have your hearing tested. And okay. And this is the price and it will be ready on Monday Tuesday Wednesday. Came back, fitted it on and that was it." P9 had a similar experience: "Nothing! They just told me how to put it in and to press there and that's all they told me". Based on the responses above, it was hard to ascertain whether these participants were not provided with information or whether they forgot the information. Four participants indicated that they could not remember if they were provided with adequate information. P1 stated, "I cannot remember." Similarly, P5 responded, "I can't recall." P10 elaborated on her response, "Look I'm 85, I can't remember anything from a month ago."
Subtheme 2: Patient involvement
A majority of participants reported that they were not involved in the decision-making process in terms of choosing their hearing aids or being informed about the different options available. P8 reported, "I have a friend and he's got a tiny little thing that fits right in the ear and that is what I've been trying to get and the agent for South Africa has a firm in Edenvale, I can give you the name. It's a tiny little thing, doesn't fit over the ear, it goes right inside the ear. I didn't know there were different types to choose from. I was not told about the different options". P3 stated, "Well, they didn't even give us an option, it was 'this is the one we have' and that was it."
Subtheme 3: Unmet Expectations
Six participants expressed that they were disappointed when the hearing aids did not meet their expectations. They attributed their disappointment to not being provided with sufficient information on how hearing aids function. P9 stated, "Well I have had no benefits so far. And how you need a battery every three days I don't understand." P5 also reflected on her disappointment: "I was very disappointed, because I still couldn't hear". Similarly, P6 reported, "Well I expected it to help in the way that I wanted it to, but I wasn't happy." Lastly, P10 shared, "Well I thought it was going to be wonderful and I was going to hear everybody but you can't hear everybody at the same time it's impossible and its often very difficult subjects. It's not just about the cat and the dog; it's sometimes about political things".
Theme 3: Difficulty with function and maintenance
Five participants reported difficulty with the function and maintenance of their hearing aids. Difficulties included trouble with hearing aid placement, difficulty cleaning the hearing aid and/or difficulty working the controls.
Sub-theme 1: Hearing Aid Placement
P2 reported difficulty with hearing aid placement: "I can't shove it in myself, I don't know how." P3 stated her similar experiences: "I had difficulty with it because it was clumsy and I had difficulty inserting it."
Subtheme 2: Working the controls
P8 expressed difficulty regarding the controls on the hearing aid: "you know what I didn't like about it was that I couldn't figure out how to make it louder or softer." P1 expressed related difficulties: "I had difficulty working the controls, I didn't know how to work it."
Subtheme 3: Cleaning the hearing aids
P9 reported his struggle with cleaning the hearing aid: "Firstly it's uncomfortable and it causes wax in the ear, and I can't be bothered cleaning it afterward."
Discussion
Findings shed some light on the reasons why some elderly people fitted with hearing aids stop using them post-fitting. Three broad themes were identified, namely discomfort, lack of sufficient information regarding hearing aids, and difficulty with handling and maintaining hearing aids. The findings highlighted a need for audiologists to take into account the mentioned aspects of hearing aid use when fitting hearing aids for the elderly population. With age, comfort plays a huge part in making life enjoyable. To ensure that elderly people who can benefit from using hearing aids receive the best ear care, comfort should be a standard consideration, as discomfort can negatively influence the decision to use hearing aids, thereby diminishing the quality of life of elderly people 16,17 .
Lack of appropriate information regarding hearing aids and difficulty with function and maintenance highlight the importance of counselling and the continuous provision of information to elderly patients when fitted with hearing aids. Normal old age is associated with minor forgetfulness and a decrease in the ability to learn new information 4,18 . When acquiring a hearing aid, individuals of the geriatric population are expected to learn and adapt to something completely unknown to them 19 . Learning new information in old age is a challenge because old age comes with accompanying difficulties such as reduced dexterity and memory loss 19 . Reduced dexterity can result in difficulties with the fine adjustment and control of the hearing aid, such as fitting the hearing aid, changing the volume or cleaning it 19 . Furthermore, memory loss can result in forgetting how to maintain and care for one's hearing aid, leading to disuse of hearing aids 19 .
With advancing age, it is possible that elderly people may be unable to retain and process information provided by audiologists during consultations. Hearing aid orientation is a lengthy process that addresses a variety of information; therefore, elderly people may forget some of the information they were provided. Hearing aid orientation provides information to the client regarding how to use and maintain hearing aids for the best outcomes possible 20 . Therefore, coupling hearing aid orientation with counselling can be useful in addressing expectations regarding hearing aid benefits 21 . Elderly people need to be aware that, although hearing amplification provides a large amount of benefit, there are still certain limitations, which are evident in the discrepancy between the benefit of hearing amplification and normal hearing 21 . This knowledge will assist the elderly in forming realistic expectations. It is therefore important that elderly people fitted with hearing aids undergo continued counselling and follow-up sessions with audiologists to ensure they continue to use hearing aids and to address any challenges that may be experienced with hearing aids. Additionally, elderly people can be provided with strategies to strengthen their ability to remember how to use and maintain their hearing aids.
Limitations of the study
This was a pilot study with a small sample size; therefore, the findings cannot be generalised to a larger population. Furthermore, this study was conducted at one old age home; it would be beneficial to conduct a similar study with a larger sample size at various old age homes to see trends in different contexts. Lastly, this study relied on the reports of the participants; it would have benefited the study to also incorporate observations of the practices of the residents with a hearing loss.
Conclusion
The findings of this study highlighted a wide range of reasons why some elderly people diagnosed with presbycusis and subsequently fitted with hearing aids stop using their hearing aids. The three major themes that emerged highlighted the fundamental and critical aspects that audiologists need to consider when providing services to the elderly population. These findings, as evidenced by the experiences of the participants, clearly illustrate that the audiologist plays a fundamental role in the likelihood of individuals of the geriatric population wearing their hearing amplification and thus improving their quality of life.
Recommendations
The findings of this study highlight the need for audiologists to ensure that hearing aid selection is patient-specific and that the patient is involved in the selection process. It is also important for the audiologist to address patients' expectations of hearing amplification and to ensure that their expectations are realistic by clearly explaining how the hearing aid will benefit them and the challenges they may experience with their hearing aids, such as background noise and feedback.
"year": 2019,
"sha1": "5ddd0509c7e0fad27f117ab7dd5d5a0fbfa73253",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/ahs/article/download/189160/178398",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2b2306dae4478390e59f2b46a658c553a75fb167",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Process Contribution to the Time-Varying Residual Circulation in Tidally Dominated Estuarine Environments
In tide-dominated environments, residual circulation is the comparatively weak net flow in addition to the oscillatory tidal current. Understanding the 3D structure of this circulation is of importance for coastal management as it impacts the net (longer term and event-scale) transport of suspended particles and the advection of tracer quantities. The Dee Estuary, northwest Britain, is used to understand which physical processes have an important contribution to the time-varying residual circulation. Model simulations are used to extract the time-varying contributions of tidal, riverine (baroclinicity and discharge), meteorological, external and wave processes, along with their interactions. Under hypertidal conditions, strong semi-diurnal interaction within the residual makes it difficult to clearly see the effect of a process without filtering. An approach to separate the residual into the isolated process contribution and the contribution due to interaction is described. Applying this method to two hypertidal estuarine channels, one tide dominant and one baroclinic dominant, reveals that process interaction can be as important as the sub-tidal residual process contributions themselves. The time variation of the residual circulation highlights the impact of different physical process components at the event scale of tidal conditions (neap and spring cycles) and offshore storms (wind, wave and surge influence). This gives insight into short-term deviation from the typical estuarine residual. Both channels are found to react differently to the same local conditions, with different short-term change in process dominance during events of high and low energy.
Introduction
This research continues from earlier studies of the 3D circulation within the channels of this hypertidal estuary system (Bolaños et al. 2013) and coastal wave impact across Liverpool Bay. The wave model (Komen et al. 1994), modified for coastal applications (see Monbaliu et al. 2000), and the generation of radiation stress (using Mellor 2003, 2005) enabled 3D wave-induced currents and enhanced bottom friction and surface roughness to be included. The modelling system was coupled such that a 2-way exchange of information occurred between the component models and was configured to include wetting and drying, making it apt for this estuarine application. The wave coupling was initiated on the 21st February at 00:00, when the conditions were no longer considered calm and the waves exceeded 0.6 m (> 168 hrs, Fig. 2c), to reduce computational cost. No wave-induced residual is therefore shown in later figures during the calm period. Prior to this time wave activity is assumed to be minimal within the estuary. Details of the modelling system setup and validation for this period, confirming this approach is acceptable, are given by Bolaños et al. (2013). Previous studies have also shown it to give good multi-year tide-surge hindcasts across the eastern Irish Sea and within Liverpool Bay (Brown et al. 2011).

Operational atmospheric forcing from the UK Met Office was used to drive the local Liverpool Bay model. The full set of ~12 km resolution atmospheric conditions (3-hourly air temperature and specific humidity, with hourly pressure and 10 m wind components) is used to include air-sea heat and momentum fluxes. Freshwater input is considered using daily mean gauged discharge at all available river sources around the Irish Sea. The offshore (Liverpool Bay) model boundary conditions have been compared with observation (Fig. 3a and b).
Taking the depth-average enables the full water column to be considered at each time instance, and does not incur problems relating to the volume conservation of sigma coordinates when time-averaged. The comparison is performed using the ADCP measurements in the Hilbre Channel for the full period of observation at the fixed mooring. Both the model results and observations are filtered (see Section 3.2) to obtain the sub-tidal residual. This technique causes a loss of data at the ends of the time series. It is clear that the model over-predicts the depth-mean currents and has less accuracy during the stormier period (around hour 300). Generally the model shows less fluctuation than the observations. The wave conditions are also compared using a wave buoy deployed in the Hilbre Channel close to the ADCP mooring (Fig. 1). The modulation in the wave properties over the tidal cycle in response to depth change is captured (Fig. 3c and d).

In hypertidal estuaries the tide has a strong modulating influence on the other non-tidal physical processes, not only due to fast currents (~1.2 m s-1 during spring tide in the Dee, Fig. 2b), but also due to the wetting and drying of banks, which modifies the bathymetric cross-sectional estuary profile. The model can be used to simulate circulation due to user-chosen inputs, for example whether the atmospheric forcing is turned on or not in the model. In this model application the physical processes available for user selection are: meteorological forcing (M), baroclinicity (B), river flow (R), external residual (E), tides (T) and waves (W). Filtering methods are also applied to the model data to remove all energy at tidal frequencies to isolate the tidally-interactive residual component. Here the Chebyshev Type II filter is used as a low-pass filter with a stop-band of 26 hours and a pass-band of 30 hours to remove all energy at tidal frequencies.
A standard 3 decibel pass-band amplitude was applied with a stop-band attenuation of 30 decibels, which is an attenuation factor of 1000. This leaves only the low frequency (≥ 30 hours, sub-tidal) residual without any tidal energy or tidal interaction, which is removed as it has a similar frequency to the tide. Tidal harmonics with a period ≥ 30 hours will not be removed by this filter design, but within an estuary environment their contribution is expected to be small. A 2-way filtering process was applied so that no phase shift occurred in the residual; however, the start and end of the residual cannot be accurately obtained, hence a shorter time series is later presented. When applied to the total modelled current velocity more data are lost to filter error, at the ends of the time series, than when applied to the weaker residual current velocities obtained from model simulations. This is because the length of the erroneous period is a percentage of the input signal magnitude. Later figures for the filtered tidal and total current simulations are therefore shorter than those for the filtered (much weaker) residual current. This filter setup has previously been shown to successfully remove the tidal energy within surface elevations compared with harmonic tidal analysis methods within this estuary, so has been used again in this study.

The results presented consider the total residual circulation and its component parts, a sub-tidal (≥ 30 hour period) process-driven component and an interactive component due to intra-tidal (< 30 hour period) process interaction for all (i) processes modelled:

total residual = Σi (sub-tidal process residual + intra-tidal process residual). …(1)
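The filter design described above (Chebyshev Type II, 26 h stop-band, 30 h pass-band, 3 dB pass-band ripple, 30 dB stop-band attenuation, applied in two passes to avoid phase shift) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: hourly sampling and the synthetic current series are assumptions for the example.

```python
# Illustrative sketch (not the authors' code) of the tidal low-pass filter:
# Chebyshev Type II, 30 h pass-band, 26 h stop-band, 3 dB pass-band ripple,
# 30 dB stop-band attenuation, applied forwards and backwards (filtfilt)
# so no phase shift occurs. Hourly sampling is assumed.
import numpy as np
from scipy import signal

fs = 1.0                      # samples per hour (assumed)
nyq = fs / 2.0
wp = (1.0 / 30.0) / nyq       # pass-band edge: periods >= 30 h retained
ws = (1.0 / 26.0) / nyq       # stop-band edge: tidal periods removed
order, wn = signal.cheb2ord(wp, ws, gpass=3.0, gstop=30.0)
b, a = signal.cheby2(order, 30.0, wn, btype="low")

# Synthetic current: an M2-like tide riding on a slow sub-tidal residual.
t = np.arange(0.0, 1000.0, 1.0 / fs)               # hours
residual = 0.1 * np.sin(2 * np.pi * t / 120.0)     # 120 h sub-tidal signal
tide = 1.0 * np.sin(2 * np.pi * t / 12.42)         # semi-diurnal tide
u = tide + residual

u_subtidal = signal.filtfilt(b, a, u)  # 2-way filtering: zero phase shift
```

Because the two-pass filtering squares the magnitude response, the effective stop-band attenuation exceeds the nominal 30 dB, while the samples near the ends of the series are unreliable, consistent with the shortened time series the authors present.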
From the model simulation the full sub-tidal residual for all processes can be obtained by filtering:

full sub-tidal residual = <full simulation>, …(2)

where < > denote that filtering has been applied. The differences between modelling experiments considering different processes are used to obtain the time-varying residual circulation due to isolated processes including interactive effects (Table 2). For a single process the process residual obtained from model simulation is:

process residual = full simulation − reduced simulation. …(3)

The sub-tidal residual and intra-tidal residual for that process are then defined as:

sub-tidal process residual = <full simulation − reduced simulation>, …(4)

intra-tidal process residual = (full simulation − reduced simulation) − <full simulation − reduced simulation>. …(5)

For example, the Meteorological residual (M in Table 2, row 4) is the difference between a full process model simulation (PGW_MBRET) and a reduced process simulation that does not include Meteorology (PGW_BRET). Filtering this model residual removes any component with a coherent phase, thus removing interaction at intra-tidal frequencies between the residual process itself and all other processes considered, mainly the tide. This method extracts the sub-tidal residual induced by the non-tidal process and its nonlinear interactions with other non-tidal forcing. The intra-tidal residual for meteorology is then obtained by subtracting the sub-tidal residual (<PGW_MBRET − PGW_BRET>) from the process residual (PGW_MBRET − PGW_BRET). To obtain the non-tidal sub-tidal residual (Table 1, row 3, Figs. 4 and 5) the difference between a model full-physics simulation containing all processes (PGW_MBRETW) and that of the tide only (PG_T) is filtered to remove all intra-tidal interaction.

In Section 4 the total (sub-tidal and intra-tidal) residual for one or more selected processes is obtained by subtracting a model simulation without the processes in question from one which includes them. The sub-tidal residual is obtained by filtering the total residual, and the intra-tidal residual is calculated as the difference between the total and sub-tidal residual. By filtering the tide-alone (PG_T, Residual 1) and the fully coupled (PGW_MBRETW, Residual 2) model simulations the sub-tidal (≥ 30 hours) tide-only and full process residuals are obtained. This gives an idea of how the tide behaves within the modelled estuary and how it contributes, compared with the non-tidal processes, to the total residual circulation within the estuary channels.
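The decomposition in Eqs. (3)-(5) can be illustrated with a short sketch. The simulation names follow the paper (PGW_MBRET with meteorology, PGW_BRET without), but the time series here are synthetic stand-ins rather than model output, and the filter is a hypothetical re-implementation of the one described in Section 3.2.

```python
# Illustrative sketch of the residual decomposition in Eqs. (3)-(5): the
# process residual is the difference between a full and a reduced model run,
# and low-pass filtering (the < > operator in the text) splits it into
# sub-tidal and intra-tidal parts. The time series are synthetic stand-ins.
import numpy as np
from scipy import signal

def subtidal(x, fs=1.0):
    """Retain periods >= 30 h (Chebyshev Type II low-pass, 2-way)."""
    nyq = fs / 2.0
    order, wn = signal.cheb2ord((1 / 30) / nyq, (1 / 26) / nyq, 3.0, 30.0)
    b, a = signal.cheby2(order, 30.0, wn, btype="low")
    return signal.filtfilt(b, a, x)

t = np.arange(0.0, 1000.0, 1.0)                        # hours
pgw_bret = np.sin(2 * np.pi * t / 12.42)               # reduced run: tide-like
process = 0.05 * np.cos(2 * np.pi * t / 200.0)         # slow wind-driven part
interact = 0.03 * np.sin(2 * np.pi * t / 12.42) * np.cos(2 * np.pi * t / 200.0)
pgw_mbret = pgw_bret + process + interact              # full run

process_residual = pgw_mbret - pgw_bret                     # Eq. (3)
subtidal_residual = subtidal(process_residual)              # Eq. (4)
intratidal_residual = process_residual - subtidal_residual  # Eq. (5)
```

The filtered term recovers the slow process contribution, while the semi-diurnal modulation ends up in the intra-tidal term, mirroring how the meteorological residual is split into <PGW_MBRET − PGW_BRET> and its remainder.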
The modelled tide-only (PG_T) and total circulation (PGW_MBRETW) are filtered to remove all semi-diurnal interaction to give the sub-tidal (≥ 30 hrs) residuals. For these two cases, the much larger input signal to the filter causes the data loss at the ends of the time series to occur over a longer period than for the weaker residual currents presented later (refer to Section 3.2). Comparison of these sub-tidal residuals determines the importance of the tide relative to the non-tidal processes in influencing the total residual circulation. Filtering the tide-alone simulation (PG_T) enables the tidal residual, generated by asymmetries and bathymetric constraint, to be obtained from the model. In both channels the tide causes a long-term (time-averaged) 2-layer horizontal structure (see Bolaños et al. 2013), and also the weakening of the seaward sub-tidal tidal residual in the Welsh Channel during neap tides. In this channel the tidal residual (Fig. 4a) is about half the magnitude of the sub-tidal residual generated by the combined non-tidal processes (Fig. 6a), considered in the next section. The influence of non-tidal processes on the total (tide plus non-tidal) residual is therefore clearly seen (Fig. 4b).

In the Welsh Channel a strong seaward flow occurs during spring tide, weakening to zero residual during neap tides (Fig. 4c). The seaward direction of this flow is related to the mooring being located on the right side of the channel, when facing out to sea. The time-mean residual within the Welsh Channel has net outflow to the right and net inward flow to the left (Bolaños et al. 2013), with flow speeds more than double those modelled in the Hilbre Channel. At spring tide the magnitude of the Welsh tidal residual is much larger than that due to the non-tidal processes considered and thus greatly influences the total (tide plus non-tidal) residual at this time (Fig. 4d).
However, during neap tide stronger stratification, and therefore baroclinicity, determines the residual pattern rather than the tide, especially during calm atmospheric conditions (> 75 hrs, Fig. 4d). Storm impact, coinciding with neap tide, weakens the stratification, modifying the total residual, which becomes storm-process driven (~375-450 hrs, Fig. 4d).
The same effects as those seen in the major channel axis component occur in the minor channel axis component of both channels (Fig. 5). The Hilbre Channel has a complex sub-tidal residual in the minor channel axis component (Fig. 5b). The surface flow varies in direction from westerly, due to baroclinic processes (see Section 4.2), to intense easterly during storm conditions.

The non-tidal sub-tidal residuals (3-8, given in Table 2) are analysed to determine the importance of the different physical non-tidal processes in contributing to the total residual circulation. In the Hilbre Channel comparison of the non-tidal sub-tidal residual (Figs 6a and 7a) with the total sub-tidal residual (Figs 4b and 5b) column, in addition to influence over the full depth at neap tide, which is particularly strong during the storm event.
The tidal straining 409 induced residual therefore has similar characteristics to the classical density-driven flow 410 (Burchard et al. 2011). In the major channel axis the baroclinic residual component (Fig. 6c) is 411 weakened following waves enhancing the seaward flow under windy conditions from the west, 412 both processes reducing stratification (e.g. 300 -320 hrs), or when the wind is southerly (e.g. 460 413 hrs) and therefore opposing estuarine circulation. The depth of the baroclinic residual surface 414 layer is also found to deepen during the extreme storm once the initially south-westerly winds 415 have veered more westerly (e.g. 380 -460 hrs). Fig 6i), even though the river 419 discharge is low and decreasing. Under these conditions stratification is able to form and is 420 strengthened by wind straining. During the extreme storm event the waves (359 -447 hrs, Figs. 6l 421 and 7l), external surge (captured in the external residual ~400 hrs, Figs. 6k and 7k) and local 422 meteorology ( Fig. 6h and 7h) have greatest influence. These processes weaken the stratification in 423 the Welsh Channel and therefore also weaken the persistent density-driven flow pattern (Fig. 6i). 424
In both channels the river discharge has the least influence (note the different color scale in Figs. 6d, j and 7d, j), creating a weak offshore flow in both channels. The strength of this residual component is related to the river discharge entering the upper estuary from the catchment (Fig. 2e) and not the local storm event itself. Non-regular quasi-periodic oscillation is seen in the river flow at the mouth (≥ 30 hrs) due to interaction with the atmospheric forcing and possibly the long-period variability in the channel cross-sectional area due to the surge component influencing the total water elevation over the intertidal shoals. The local meteorological (wind) forcing and the external surge seem to have counteractive effects at the event-scale (compare Figs. 6b, h and 7b, h with Figs. 6e, k and 7e, k). The external surge acts to increase water levels, causing flow into the estuary during southwest storm events, while the local southwest wind promotes seaward flow for the Hilbre Channel alignment. In the Welsh Channel, these two processes cause opposing bidirectional 2-layer vertical residual flow structures. Finally waves (Fig. 6f, l), when present (> 168 hrs, Fig. 2c).

The general pattern (Fig. 7a, g) in the minor channel axis residual component is driven by baroclinicity (Fig. 7c, i).

The interactions within this hypertidal estuary are predominantly controlled by the tide. The intra-tidal residuals produced by tidal interactions are similar in magnitude to the sub-tidal residuals induced by the non-tidal processes; they are therefore equally as important in contributing to the total (sub-tidal plus intra-tidal) time-varying residual circulation.
The non-tidal processes (3-8, given in Table 2), which have greater influence in the Hilbre Channel, also cause a greater intra-tidal (interaction-driven) residual within this channel (Figs. 8 and 9). The interactions generating the intra-tidal residual within Figures 8 and 9 are given in Table 2 (column 4), and are not just due to the tide. Bolaños et al. (2013) show the importance of tide-stratification interaction within this estuary, which enables periodic stratification to develop at low water, followed by its breakdown creating a well-mixed water column at high water. Compared with the non-tidal sub-tidal residuals (Figs. 6 and 7), the intra-tidal residual (Figs. 8 and 9), although intermittent, is a significant contribution (at least double at times) to the total residual circulation generated by that process. The intra-tidal residual is of similar magnitude in both the major and minor channel axis components. During the storm event the local meteorology (wind, Residual 4) interacts to create a seaward surface flow at high water elevations and landward surface flow at low water elevations (Figs. 8b and 9b). This interaction is clearly the result of wind straining. At high water slack the estuarine stratification is weakest, but the wind fetches are greatest, producing larger wind-induced currents. At low water slack stratification is at its strongest; the high wind speeds act to break down the 2-layer structure where the river discharge is strongest (e.g. intermittent red and blue stripes in Figs. 8d and 9d, around 300-500 hrs). They compare equal periods of calm and stormy conditions to identify process dominance over the longer term, due to the cumulative effect of event-scale process contribution (in magnitude and duration) presented here. The study period (Fig. 2) consists of calm and stormy conditions with a mean river discharge (32 m³ s⁻¹), which is equivalent to the long-term mean (31 m³ s⁻¹). This period therefore gives a good representation of the typical conditions within the Dee Estuary.
The dynamically evolving bathymetry within the Dee Estuary (Moore et al. 2009) and the lack of bathymetric data at the time of observation prevent the time-varying modelled circulation from being perfect at a point observation. In a hypertidal estuary the large tidal prism means inter-tidal shoals, in addition to the sub-tidal channels, will have an important role influencing the accuracy of the estuarine processes. In Section 3.1 POLCOMS-GOTM is shown to give acceptable simulations of the residual circulation for the given input data (Fig. 3). Previously, the model has been found to be robust at modelling the 3D current patterns within Liverpool Bay (Brown et al.). Extreme storms have a strong, but short-term influence during the event. In Liverpool Bay extreme storm surges are often associated with southwest winds, under which conditions the local wind counteracts the influence of the external surge at this estuary mouth. The influence of storm events on the residual circulation is different within the two channels due to their orientation relative to wind direction. This is an example of how the complexity of the channel-bank system within an estuary prevents a consistent pattern in circulation occurring across the estuary. For sediment dynamics the volume flux during such short-term events (e.g. storm events) of atypical circulation may have high impact for the long-term net transport. The short-term deviations in residual circulation demonstrate that time-scales longer than seasonal influence (due to changes in storminess and river discharge) must be considered to truly define the long-term process dominance.
In addition to the sub-tidal residual, the interactions between tide and stratification are found to create a strong intra-tidal residual, influencing the time-variation of the total residual. In hypertidal conditions the interactions must also be included when considering residual circulation as they can be as important as the sub-tidal process contribution itself. The periodic (semi-diurnal for the Hilbre Channel and during calm neap conditions for the Welsh Channel) formation of the vertical 2-layered water column structure can have an important role in longer-term transport pathways. It is therefore suggested that the net transport of suspended and dissolved particles within a hypertidal estuary system can be dependent on the baroclinicity despite low river flow. Supplementing the meteorological forcing with air temperature, humidity, and cloud cover enables full atmospheric forcing. River data has also been supplied by the Centre for Ecology and Hydrology. For the residual generated by the river discharge (R, case 6 in Table 2). | 2019-03-20T13:07:23.500Z | 2014-09-01T00:00:00.000 | {
"year": 2013,
"sha1": "ced7f36fdc3995fad26fa4cd18c9b87a41cfdca9",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12237-013-9745-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "71413dbef5d98e4d61f37568bb765f7378bcf6b9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
213551828 | pes2o/s2orc | v3-fos-license | Comparative analysis of heat supply options for small and middle-sized settlements of Eastern Siberia by using uncertain and fuzzy information
The problems of heat supply in small and medium-sized settlements of Eastern Siberia are considered. Variants of using a cogeneration heat plant (CHP) or a boiler plant on coal or wood waste, the use of liquefied petroleum gas (LPG) and liquefied natural gas (LNG) as a fuel, the use of an electric boiler plant or stand-alone electric boilers, and applications of solar collectors and heat pump installations are considered based on a systems approach and fuzzy modelling, taking into account the uncertainty of some factors.
Introduction
One of the most important tasks facing the national economy of Russia is to increase the efficiency of heat supply for small and medium-sized settlements in Eastern Siberia. The low efficiency of heat supply systems is due to a set of quite complex problems: considerable distance from fuel sources and from centralized electricity and heat supply; long distances between settlements; the duration of the heating period (two thirds of the year); and low outdoor temperatures. Traditionally, it is considered that centralized heating systems are the most efficient [1]. Moreover, Eastern Siberia is characterized by the presence of relatively cheap coal reserves, so coal heat sources are the most prevalent in this region.
In large cities, due to the combined generation of heat and electricity and the use of highly efficient technologies, the use of coal-fired CHP plants ensures a relatively low cost of thermal energy and an acceptable degree of purification of combustion products. In small and medium-sized settlements, coal-fired boiler houses are mainly used, which are less efficient than CHP plants and, as a rule, less environmentally friendly. All these factors lead to a high cost of thermal energy and an increase in tariffs for the population and business. Attempts by the tariff service to restrain the rise in prices for thermal energy lead to the fact that the resource supplying organizations lack the motivation to modernize and increase the energy efficiency of equipment [1][2][3]. The low level of remuneration and the shortage of funds for the necessary materials do not make it possible to ensure quality equipment operation. This, in turn, leads to significant losses due to low equipment efficiency and high wear of heat networks. This vicious circle needs to be broken. At present, it becomes obvious that the existing methods of state regulation in heating systems require serious changes. One of the most important points of the heat market liberalization planned by the Ministry of Energy is the so-called "alternative boiler house" principle [1]. On the one hand, it is assumed that the development of the market will lead to the introduction of the most efficient technologies. On the other hand, the "alternative boiler house" tariff should set a limit price for heat at the level of the amount that a consumer could spend on heating using his own gas boiler room, taking into account the cost of its purchase, installation and operation. It will be prohibited to sell heat at a price that exceeds this level. Thus, the key criterion for choosing the ways and means of modernization of heat supply systems is the economically reasonable tariff [3,4].
The modern level of development of technology creates the possibility of applying a wide variety of methods and means to the organization of heat supply to populated areas. In small and medium-sized settlements cogeneration can be effectively used in mini-CHP. In such settlements low population density and a low level of operation increase losses in the transport of thermal energy, which leads to low efficiency of centralized heat supply from boiler plants. The analysis shows that in many cases autonomous energy sources using local types of fuel, as well as secondary and renewable energy sources, are more profitable.
In determining the paths for the development of heat supply schemes, the final decision is made not only on the basis of the criterion of an economically reasonable tariff. When making a decision, a multi-criteria system analysis is necessary, taking into account the uncertainty of a number of parameters and the vagueness of some judgments. Environmental constraints are becoming increasingly important. Many areas of Eastern Siberia, in particular the southern Baikal region, are among the specially protected natural territories, where strict regulations are imposed on the emissions of harmful substances and even a moderate risk of man-made disasters is unacceptable [5].
The experience of developing heat supply schemes shows that along with economic and environmental criteria, the danger of emergencies, and the risks of interruptions in fuel supply, a significant role is played by such criteria as the attitude of the government and the population to particular technologies, the prospects for the development of the settlement, etc. Such criteria are of a fuzzy nature. Often they are expressed in the following way: "the population will not accept it…"; "this is not tested…"; "there is not enough experience." The heterogeneous nature of the factors that must be considered when deciding on the direction of development of the heating system, the technical impossibility of measuring some indicators, and the fundamental vagueness of others led to the choice of fuzzy logic and optimization methods in this study. The fuzziness inherent in the estimates of the operation of heating systems has led to the widespread use of fuzzy logic and probabilistic approaches in their analysis and in the development of automated control systems [6,7].
Models and research methods
When making decisions on the future development of heat supply systems for small and medium-sized settlements in the southern part of the Baikal region, the determining considerations are the factors influencing the formation of economically sound tariffs. When assessing the cost of fuel and materials, it is necessary to take into account systemic factors, including climatic conditions, environmental restrictions, logistics, etc. [2][3][4]. When choosing the direction of development of the heat supply system of populated areas, the main role is given to such criteria as the cost of production and transportation of thermal energy, energy efficiency, environmental friendliness, safety and investment.
The algorithm for selecting the option of heating the settlement is divided into three main stages. At the initial stage, all possible sources of energy are analyzed. For further analysis, only those are selected whose use is possible without violating environmental restrictions and is expedient from a logistics point of view. At the second stage, heat supply schemes for the selected sources are formed taking into account the corresponding technologies. At the third stage, the choice of the heat supply scheme is carried out, based on the criterion of the minimum cost.
This paper uses materials obtained in the development of a heat supply scheme for the Baikalsk urban settlement [9], which was conducted with the participation of the authors of the article. During the development and subsequent updating of the heat supply scheme, special attention was paid to the choice of heat sources and the degree of centralization of the system. Options for centralized and decentralized heat supply and fuels such as coal, LNG, LPG, renewable and secondary energy sources were considered. The following options were investigated as possible heat supply schemes: modernization of the existing CHP plant; construction of a heat source oriented to the use of various types of fuel (LNG, LPG, coal, biofuel) using cogeneration (mini-CHP); construction of a boiler plant designed to use various types of fuel (LNG, LPG, coal, biofuel); construction of an electric boiler plant; installation of autonomous heat sources operating on electric energy (EE); use of renewable and secondary energy sources.
The first stage of the research is the collection of information on each of the possible variants. The second stage is the selection of options that meet the restrictions on harmful emissions and are accepted by the population and government officials. The final stage is the selection of options for which the cost of heat produced does not exceed a certain threshold value. The criterion for selecting an acceptable heat supply option at the second stage is based on the membership degrees R_i, where 0 < R_i < 1 is the value of the membership function of the class of solutions that violate the i-th constraint. R_NOx, R_SOx and R_ash show the degree of violation of standards for emissions of nitrogen oxides, sulfur oxides and ash content, respectively; R_em, R_pop and R_gov are numerical representations of the linguistic parameters reflecting the danger of an emergency and the attitude of the population and of the government to the heat supply variant, respectively. A coefficient of 0.5 reflects the relative importance of the indicator: from the population the absence of pronounced protest is enough, while options that are not accepted by the government cannot be accepted in principle. At the first stage, ten possible heat supply options for the city of Baikalsk are presented for consideration. At the second stage, options based on the use of coal are excluded due to environmental constraints, and options based on the use of gas are excluded for high risks of accidents; heat pumps and solar collectors are excluded because of the negative attitudes of the population and government. As a result, the biofuel-related options, electric boiler houses and autonomous electric boilers were found to satisfy all the limitations and pass to the third stage of selection.
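The screening just described can be sketched in a few lines. Everything below is illustrative: the constraint names, the max-aggregation and the rejection threshold are assumptions, since the paper's exact formula is not reproduced in the extracted text; only the 0.5 weighting of the population's attitude follows the text.

```python
def acceptable(R, hard_limit=1.0):
    """Fuzzy screening of a heat-supply option (illustrative only).

    R maps constraint names to membership degrees in [0, 1] of the class
    of solutions violating that constraint. Here an option is rejected
    when any emission, emergency or government constraint is fully
    violated, while the population's attitude enters with weight 0.5,
    mirroring the remark that mild popular reluctance alone is not
    disqualifying while government rejection is decisive.
    """
    score = max(R["NOx"], R["SOx"], R["ash"], R["emergency"],
                0.5 * R["population"], R["government"])
    return score < hard_limit

# A coal option violating emission limits is screened out; a biofuel
# option with only lukewarm popular support passes.
coal = {"NOx": 1.0, "SOx": 1.0, "ash": 0.8, "emergency": 0.2,
        "population": 0.3, "government": 0.4}
biofuel = {"NOx": 0.1, "SOx": 0.0, "ash": 0.2, "emergency": 0.3,
           "population": 1.0, "government": 0.5}
```

The surviving options would then go to the third stage, where the one with the minimum cost of heat is chosen.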
The main results of the study on heat supply of Baikalsk city are summarized in table 1. In columns 6-11, the indicators of compliance with the restrictions are given in the form of fractions: in the numerator the absolute or linguistic values are given, and in the denominator the degrees of violation of the restrictions are given. In the derivation of the thermal energy cost when using LPG and LNG, a fuel price of 20 thousand rubles per ton is assumed. It should be noted that this price might be higher if we take into account the trends of the world market and the positions of leading Russian companies towards the prospects for gas supply in Eastern Siberia.
The high risk of using LPG and LNG is due to the possible consequences of gas release in case of accidents during transportation or storage. The government's distrust of heat pump technology and solar collectors is due to previous negative experiences. The biofuel-based technologies inspire little confidence because of the lack of experience with large-scale use of the technology. The options based on the use of LNG and LPG are excluded due to the lack of confidence in reliable gas supplies at reasonable prices. A serious concern regarding the construction of the electric boiler plant is the likelihood of a significant increase in electricity tariffs.
The negative attitude of the population towards heat pumps and solar collectors is caused by the need for serious reconstruction of building heating systems and by the increase in personal responsibility for the operation of the systems. A significant part of the population does not want to take any responsibility for the heating of their dwellings and expresses the position of the mere user: "the government must provide…". However, increasing the degree of consumer control over the consumption of energy resources increases the motivation to save energy. Owners of houses with autonomous heat sources and energy accounting are much more thrifty about their consumption. From this point of
Results and discussion
Comparative analysis of the cost of heating options and the required investment is given in figure 1. From the point of view of economic and energy efficiency, the most attractive are options for the use of biofuels, heat pumps and solar collectors, that is, options using secondary and renewable energy resources. The cheapest energy is provided by the options with solar collectors. However, considering other criteria, including the amount of necessary capital investment, this option cannot be accepted as the base case. Nevertheless, the use of solar energy as a supplement to other heat supply options can improve the efficiency of heating systems.
The lowest cost of heat energy production is provided by the CHP plant due to the joint production of heat and electric energy and the low cost of fuel. But due to severe environmental constraints [5], the use of CHP as the base case has also been rejected. As a result, for Baikalsk city the base option chosen was heat supply from boiler houses operating on biofuel such as dry chips and/or pellets [9]. The use of combined heat and power generation using local biofuels provides a lower cost of energy supply than the boiler houses, but requires higher investment. The choice of the cheaper option with regard to investment was influenced by the low assessment of the development prospects of Baikalsk city. At the same time, from the standpoint of energy efficiency and environmental friendliness, as well as in terms of investment volumes, heat sources using electricity are preferable. In this case, the rejection of the options associated with a city electric boiler plant or with autonomous electric boilers is caused by a high estimate of the risk of an increase of electricity tariffs to an unacceptable level. When evaluating the variant using heat pump installations, the lack of experience in large-scale use of heat pumps in Russia played a decisive role. In addition, several years ago in Baikalsk city a heat pump of 500 kW was installed. The decision to introduce this installation was made in a purely bureaucratic way, without taking into account the fact that the wastewater treatment plant where the heat pump was installed was remote from consumers. As a result, the use of the heat pump installation did not justify itself, and, apparently, this formed a long-term negative attitude towards innovation on the part of decision makers.
Conclusions
Given the full range of system factors, coal-fired boilers are in the short term no longer the cheapest heat sources for many small and medium-sized settlements in the Baikal region. It is advisable to individually optimize heat supply schemes for small and medium-sized settlements. Inflated risk assessments are caused by lack of experience, which in turn is due to the lack of pilot projects. To reduce the negative impact of overestimated risk assessments, it is important to intensify pilot projects on the use of heat pumps, solar collectors, biogas and mini-CHP. As for Baikalsk city, the biofuel boiler plant was finally found to be the optimal choice considering all constraints.
"year": 2019,
"sha1": "20fca17087a91cd2f470895c1bf0c0617bdef93d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1369/1/012011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "de9c1d309b7ae9025e5145b26f8a77c34546fd20",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Geography"
]
} |
5127287 | pes2o/s2orc | v3-fos-license | Morphological Analysis for a German Text-to-Speech System
A central problem in speech synthesis with unrestricted vocabulary is the automatic derivation of correct pronunciation from the graphemic form of a text. The software module GRAPHON was developed to perform this conversion for German and is currently being extended by a morphological analysis component. This analysis is based on a morph lexicon and a set of rules and structural descriptions for German word-forms. It provides each text input item with an individual characterization such that the phonological, syntactic, and prosodic components may operate upon it. This systematic approach thus serves to minimize the number of wrong transcriptions and at the same time lays the foundation for the generation of stress and intonation patterns, yielding more intelligible, natural-sounding, and generally acceptable synthetic speech.
INTRODUCTION
Many applications of computer speech require unrestricted vocabulary. One fundamental rule is that vocalic quantity is determined by the number of following consonants: the first rule given in the DUDEN Aussprachewörterbuch [5] states that <a> is to be pronounced /a:/ when followed by only one consonant grapheme before the stem boundary, so that the inflectional form rast of the verb rasen ("rush") becomes /ra:st/, whereas the simplex noun Rast ("rest") becomes /rast/.
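The quantity rule quoted above can be sketched as a toy function. This is only an illustration under stated assumptions: the function name is hypothetical, the input is assumed to be a morph-segmented stem, and only the <a> case is handled; GRAPHON's actual rule set is far richer.

```python
import re

def a_quantity(stem):
    """Toy version of the DUDEN rule quoted above: within a stem, <a> is
    long /a:/ when followed by at most one consonant grapheme before the
    stem boundary, and short /a/ before a consonant cluster.
    Returns None when the stem contains no final <a>+consonant sequence.
    """
    # Find an <a> followed only by consonant graphemes up to the boundary.
    m = re.search(r"a([^aeiouäöü]*)$", stem)
    if not m:
        return None
    return "a:" if len(m.group(1)) <= 1 else "a"
```

For the example in the text, the stem ras of rasen yields long /a:/ (hence /ra:st/ after inflection), while the simplex stem rast yields short /a/.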
Those phenomena play a role in the domain of derivation and inflection, which has been dealt with in several systems, e.g. SYNTEX [6] or REDE [7]; these do contain lists of common prefixes and suffixes to (/tr/) vs. Fuß (/u:/)); this "defect" (cf. [10], p.108) can be got round by maintaining the opposition between <ss> and <ß> in the lemma.
The information-tree contains classificatory data pertaining to the morph itself and to those it may immediately select; they concern morphological status (lexical stem - particle - derivational morph - inflectional morph - juncture - ...), native or foreign status, and combinatorial restrictions. In addition, the lexicon allows the introduction of information for the assignment to parts of speech and, wherever necessary, indications as to exceptional pronunciation or stress pattern.
Extent of the Lexical Inventory
At present the lexical inventory comprises some 2000 entries, the choice of which was based on Ortmann [11], itself compiled from four frequency lists. As for the contents of the entries, we relied on Augst [12], Mater [13], and Wahrig [14]. It is of course not to be expected that the lexicon would ever cover the entire vocabulary of a native speaker, nor is that our intention; consequently, we foresee a "joker morph" which can stand for any stem that may happen to occur. This is made possible by the generalization that a German stem conforms to a number of structural principles: for example, every stem must contain a vowel, and the variety of consonant clusters in initial, medial, and final position is restricted (cf. [8]). The following example is somewhat more complicated. Correctness of the phonemic transcription certainly accounts for a great part of the quality and acceptability of a text-to-speech system. Nevertheless it is often claimed (e.g. [6]) that synthetic speech should be evaluated along further dimensions, such as intelligibility, listening comprehension and naturalness.
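The structural principles behind the "joker morph" can be illustrated with a deliberately abbreviated pattern: a candidate stem must contain a vowel nucleus flanked by permissible clusters. The onset, nucleus and coda inventories below are assumptions for illustration only, not the actual constraints described in [8].

```python
import re

# Highly abbreviated onset/coda inventories; the real constraints are far
# richer. This only illustrates the "joker morph" idea that an unknown
# letter string can be checked against structural well-formedness.
ONSET = r"(?:sch|st|sp|pf|tr|br|gr|kr|fl|[bdfghjklmnprstvwz])?"
NUCLEUS = r"(?:ei|au|eu|ie|[aeiouäöü])"
CODA = r"(?:sch|ch|ck|st|tz|nd|rt|[bdfgklmnprstxz])?"
JOKER = re.compile(f"{ONSET}{NUCLEUS}{CODA}", re.IGNORECASE)

def plausible_stem(s):
    """True when the string fits the (toy) onset-nucleus-coda template."""
    return JOKER.fullmatch(s) is not None
```

A string with no vowel, such as "brt", is rejected, while unknown but well-formed stems pass and can be handed to the joker morph.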
One goal of the approach presented here is to lay the ground for the incorporation of rules for the assignment and realization of stress and intonation patterns not only on the word but also on the sentence level. Thus the basic phonetic transcription will be extended and modified so as to give a representation closer to natural speech. | 2014-07-01T00:00:00.000Z | 1986-08-25T00:00:00.000 | {
"year": 1986,
"sha1": "2e437463c8c5bd4900ea45224052afda3f53e60e",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=991443&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "2e437463c8c5bd4900ea45224052afda3f53e60e",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
118772180 | pes2o/s2orc | v3-fos-license | A New Scientific Revolution at the Horizon?
At this beginning of the 21st century, the situation of physics is not without analogy with that which prevailed a hundred years ago, at the outset of the double scientific revolution of relativity and quanta. On the one hand, recent progress in observational cosmology suggests that one has discovered a new universal constant, perhaps as fundamental as the velocity of light or Planck's constant: the cosmological constant, which could explain the acceleration of the expansion of the universe. On the other hand, just as the efforts of Planck and Einstein to reconcile thermodynamics and the electromagnetic theory of light led to the operational beginning of quantum physics, the unexpected discovery of links between thermodynamics and general relativity suggests new concepts, perhaps heralding a new scientific revolution, like that of holography, and leads to consider a "thermodynamic route towards quantum cosmology." We will discuss the possible implications of these observational and theoretical developments.
1/ Introduction
In these same places, four years ago, we celebrated the centennial of Einstein's miraculous year, during which he gave the starting point of the scientific revolution of the 20th century. Today, we celebrate the four hundredth anniversary of Galileo's use of his telescope, which likewise marks the starting point of a scientific revolution: the one which saw modern science born and developed, from Newton's theory of universal gravitation to the apogee of classical physics at the end of the 19th century. Whereas the scientific revolution of the 20th century has reached an apogee comparable with that of classical physics, the progress achieved by observational cosmology and very lively theoretical developments seem to suggest that a new scientific revolution appears at the horizon. This is what we will try to explain in this conference.
At the end of the 19 th century, the apogee of classical physics consisted of three theories which concretized significant syntheses or unifications and which made it possible to model in a satisfactory way the whole of the then observable phenomena: the electromagnetic theory of light by Faraday, Maxwell and Hertz which unified the electric, magnetic and optical phenomena, the theory of universal gravitation by Galileo and Newton which unified terrestrial mechanics and celestial mechanics and the conjunction of analytical mechanics by Lagrange and Hamilton, of the kinetic theory of matter and statistical thermodynamics by Maxwell and Boltzmann leading to the unification of rational mechanics with the atomistic conception of ancient Greek philosophers.
The overall structure of the theoretical framework of classical physics is maintained in the physics of the 20th century. This framework now includes three theories, each taking into account a pair of dimensional universal constants, which prolong the theories of the framework of classical physics: quantum field theory (constants ħ and c), which prolongs the electromagnetic theory of light and is used as a basis for the standard model of the physics of particles and their non-gravitational fundamental interactions; the general theory of relativity (constants G and c), which prolongs the theory of universal gravitation by Newton and is used as a basis for the standard cosmological model; and quantum statistics (constants ħ and k), which prolongs analytical mechanics and statistical thermodynamics and which is used as a basis for the phenomenological consolidation of the standard models of particle physics and cosmology.
Which elements of comparison exist between the pre-revolutionary situation of the end of the 19th century and today? Lord Kelvin (William Thomson), analyzing in 1900 the field of investigation of physics, announced that it was about to be completed except for two "small clouds", which he thought would require only some adjustments to be reabsorbed. These were the failure to detect the Earth's motion through the ether (the experiment of Michelson and Morley) and the absence of a theoretical explanation of the observed black body spectrum. The resolution of the first difficulty gave rise to the special theory of relativity and then to the general theory of relativity, and that of the second one led to quantum physics.
Nowadays, it is regularly written that contemporary physics, despite several attempts, still fails to establish a theory which would contain at the same time general relativity and quantum physics. Is it moreover possible to reinstate in physics the recent findings of observational cosmology? Will that make it possible to reach a unified comprehension of physics? We will see that there seem today to emerge some completely new approaches to these questions, some having been proposed very recently. If they have as a common point, with the resolutions of the questions raised by Lord Kelvin, to call into question the idea of space and to include statistical thermodynamics, they especially have the advantage of showing in what way these two questions are related.
Quantum field theory
Quantum field theory (QFT) carries out the marriage of special relativity (taking into account c) and quantum mechanics (taking into account ħ). Due to the weakness of the gravitational interaction at experimentally accessible energies, it postpones the marriage of quantum mechanics and general relativity.
The constants c and ħ translate fundamental limitation principles which are obeyed by QFT. The vacuum speed of light c is the universal constant translating, always and everywhere, the impossibility of instantaneous action at a distance. The existence of an elementary quantum of action ħ excludes any subdivision of individual quantum processes, which must not be treated, individually, as predictable or reproducible events. For these two principles to be obeyed, the quantum fields are:
- Relativistic fields, i.e. defined at each point of the Minkowski space-time;
- Quantum fields, i.e. fields of operators acting in a Hilbert space and causing events of emission or absorption of energy quanta;
- Energy quanta which are particles or antiparticles (a feature that solves the problem of negative energies);
- Interactions described by means of local couplings, i.e. products of fields evaluated at the same point of space-time;
- The vacuum, the fundamental or ground state of the system of quantum fields, in which the number of energy quanta is null but where the quantum fields are affected by quantum fluctuations (for example the formation followed by the annihilation of a particle-antiparticle pair).
The locality of the interaction couplings induces a singularity in QFT: the integrals from which one obtains the transition amplitudes are divergent. It is the theory of renormalization which makes it possible to overcome this difficulty: a theory is said to be renormalisable if all observable physical quantities can be expressed without infinity in terms of parameters that depend on energy and that are redefined (one says "renormalized") by the interaction. A renormalisable theory is predictive: the parameters on which it depends are finite in number and can be determined experimentally, but it is not "fundamental", since the value of the parameters depends on the resolution (i.e. on the available energy). Such a theory is effective because the dependence in energy of the parameters is predictable, thanks to the renormalization group equations (RGE).
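The statement that the renormalized parameters run with energy in a predictable way can be made concrete with the standard one-loop example (a textbook illustration, not taken from the original text):

```latex
% One-loop renormalization group equation for a coupling g:
\mu \frac{dg}{d\mu} \;=\; \beta(g) \;=\; -\frac{b_0}{16\pi^2}\, g^3 ,
% which integrates to a resolution-dependent ("renormalized") coupling:
\frac{1}{g^2(\mu)} \;=\; \frac{1}{g^2(\mu_0)} \;+\; \frac{b_0}{8\pi^2}\,\ln\frac{\mu}{\mu_0}.
```

For b_0 > 0 (as in QCD) the coupling decreases at high energy; it is extrapolations of exactly this kind that make the energy dependence of the standard model parameters predictable.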
The standard model of particle physics
The standard model of particle physics is the result of the application of QFT to the non-gravitational fundamental interactions, namely the electromagnetic interaction and the strong and weak nuclear interactions.
The quantum and relativistic theory of the electromagnetic interaction, known as Quantum Electro-Dynamics (QED) was, at the end of the Forties, the first stage of the construction of the standard model. It is a theory in which the effects of the quantum corrections are calculable (because the theory is renormalisable) and measurable in atomic physics at low energy. The agreement between the theoretical predictions and the experimental data exceeded all the hopes, so that this theory was used as a model for the development of the theory of the other fundamental interactions.
This development was made possible, on the one hand, by the discovery in the Sixties of a new level of elementarity, the level of quarks, the elementary constituents of the hadrons, the particles which take part in all fundamental interactions including the strong interaction, and on the other hand by the identification of the property of symmetry that is essential in QED and likely to be generalized to other interactions: gauge invariance.

This mechanism implies the existence of at least one not-yet-discovered particle, the Higgs boson, the search for which is the top priority assigned to the Large Hadron Collider (LHC), the collider that was commissioned at CERN at the end of 2009. Except for this last missing link, the overall agreement between the theoretical predictions of the standard model and the whole of the experimental data up to energies of about a hundred GeV is satisfactory (of the order of a percent).
Which new physics beyond the standard model?
The standard model, which consists of effective theories, cannot be the last word. The parameters on which it depends, such as those which measure the intensity of the interactions at the elementary level, are not constant: they depend on energy. The standard model, which works well at the highest currently accessible energies, can very well be embedded, at higher energy, in a theory which would include it and give it back as a low-energy approximation. As it has been noticed that the intensities of the non-gravitational interactions seem to converge at an energy of about 10¹⁵ GeV, it is tempting to suppose that at this energy these interactions could be unified in a grand unification theory (GUT). In such a theory the proton would be unstable, and certain problems unsolved within the standard model, like the breaking of the matter-antimatter symmetry or the neutrino masses, could find a solution. But such a theory would suppose the existence of a new Higgs mechanism occurring at an energy 10¹³ times higher than the energy of the electroweak symmetry breaking. The articulation of two Higgs mechanisms at such different energies poses a very difficult problem, a solution of which could be found thanks to a new symmetry property, supersymmetry. One has thus developed an extension of the standard model, the minimal supersymmetric standard model (MSSM), with which one would recover all the assets of the standard model at energies lower than that of the LHC, but which would predict new observable effects at LHC energies, such as the existence of many new particles.
The way of GUT is considered by particle physicists as the direct way towards the reconciliation of quantum physics and general relativity, within a quantum theory of gravitation whose field of validity would be that of the Planck scales. But, as we are going to show now, another way seems to open in the direction of a quantum theory of gravitation: that of thermodynamics.
General covariance, principle of equivalence and a geometrical theory of gravitation
The theory of relativity was developed by Einstein in two stages: in 1905, special relativity integrates the Galilean principle of relativity (equivalence of inertial reference frames moving in relative uniform rectilinear motion) and in 1916, general relativity extends the principle of relativity to arbitrary changes of reference frames 1 .
To generalize the principle of relativity to arbitrary changes of reference frames (principle of general covariance), Einstein makes a detour through the theory of universal gravitation: starting from the independence of the acceleration communicated to a body by gravitation with respect to the mass and to the other properties of this body, independence which had been noticed by Newton and which he promotes to the status of the principle of equivalence, Einstein shows that: (i) An arbitrary change of reference frame can be replaced, locally (i.e. in an infinitesimal domain of space-time), by an adequate gravitational field and (ii) The gravitational field can be replaced, locally, by an adequate change of reference frame.
He thus arrives at a geometrical theory of gravitation whose fundamental equation connects the Ricci-Einstein tensor G_μν, related to the non-Euclidean geometry of space-time, to the energy-momentum tensor T_μν, describing in a phenomenological way the properties of matter. The proportionality constant relating these two tensors is fixed in such a way that one recovers the Newtonian theory of gravitation in the non-relativistic limit:

G_μν = (8πG/c⁴) T_μν.

General covariance implies that only events of space-time coincidence are observable. Having arrived at his equation, Einstein made the following comment about it: "the theory avoids all the defects for which we reproached the foundations of classical mechanics. It is sufficient, as far as we know, for the representation of the observed facts in celestial mechanics. But it resembles a building one wing of which is built of fine marble (the left hand side of the equation) and the other of wood of lower quality (the right hand side of the equation). The phenomenological representation of matter compensates, actually, only very imperfectly for a representation which would correspond to all the known properties of matter 2." General relativity departs from the theory of Newton only when the gravitational fields are strong. The importance of such effects can be evaluated simply by determining the escape velocity, i.e. the speed above which a test body can escape from the gravitational field generated by a planet or a star. If this speed is not negligible compared to the velocity of light, then Euclidean geometry, or more precisely the Minkowskian geometry of the space-time of special relativity, is no longer relevant in the description of the laws of mechanics: the curvature of space-time must be taken into account. In the case of a spherical object of mass M and radius R, the escape velocity is given by v = √(2GM/R).
On Earth, it is about 11 km/s, so relativistic effects are hardly visible (although corrections due to general relativity must already be introduced into the very precise synchronization of the GPS). General relativity effects become significant only on stellar scales.
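The order of magnitude quoted here is easy to reproduce. The following sketch (the numerical constants are standard values, not taken from the text) evaluates the escape velocity v = √(2GM/R) for the Earth and compares it to the speed of light:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # radius of the Earth, m

# Escape velocity v = sqrt(2 G M / R)
v_esc = math.sqrt(2 * G * M_earth / R_earth)
print(f"Earth escape velocity: {v_esc / 1e3:.1f} km/s")  # ~11.2 km/s

# The ratio to c measures how "relativistic" the gravitational field is
print(f"v_esc / c = {v_esc / c:.1e}")  # ~4e-5, hence tiny GR corrections
```

The tiny ratio v/c is exactly why general relativity was so hard to test on Earth.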
In the synthesis article published in 1916, in which general relativity is completely formalized, Einstein indicates three possible validations of his theory: the advance of the perihelion of the elliptic orbit of Mercury around the Sun (of 43'' per century!), a shift of the spectral lines emitted by massive stars, and the deflection of light rays in the vicinity of the Sun (which was measured a little later, at the time of the 1919 eclipse, as predicted by relativity).
General relativity and cosmology
If it thus appears that significant masses are necessary to produce departures from the Newtonian theory of gravity, an additional argument makes it possible to understand why such effects can show up at cosmological scales. Just like classical gravitation, general relativity treats only positive masses (or, equivalently, their energy content E = mc²). As a consequence the effects of curvature are additive. Thus, with an average density of matter in the universe, the theory can associate an average curvature of space-time. General relativity thus appears as naturally connected to cosmology, the science which aims at describing the universe as a whole. A universal description of the world is possible because the universe appears homogeneous and isotropic at large distances. This formalization rests on the cosmological principle, which stipulates that there exists a universal time, but that there exists neither a privileged position nor a privileged direction in space.
Inaugurating this way of thinking, Einstein realized as early as 1917 that the universe of general relativity is unstable. From the additive character of the gravitational attraction that we have just mentioned follows the fact that uniformly distributed matter at rest cannot be in a stable equilibrium: spontaneously, the universe should collapse to a point. To mitigate that, and thus to ensure the cosmos a total immutability, Einstein added to the equation of general relativity an extra term, perfectly respecting its general covariance, which he denoted Λ and which has thereafter been called the cosmological constant. The positive value of this constant corresponds physically to a repulsive internal pressure able to counterbalance the attractive action of gravitation. Within the framework of the cosmological principle, the solution of the Einstein equation thus obtained 3 corresponds to a static and elliptic universe (finite, but without edge, just like the two-dimensional surface of a sphere), in which a light ray would return to its departure point. The first component of this equation connects the radius of the universe R, the cosmological constant Λ and the density of matter ρ₀:

Λ = 1/R² = 4πGρ₀/c².

Einstein was all the more satisfied with the addition of this constant as it leads to a cosmological model satisfying what he calls Mach's principle, which requires that the gravitational field as well as inertia be completely determined by the energy content of the universe. In such a model Mach's principle is indeed respected because the cosmological constant, which is purely geometrical (its dimensional content is that of the inverse of the square of a length), is independent of the energy content of the universe, a content that is completely described by the density ρ₀, since the universe is finite.
It remains that this cosmological constant allowed, as de Sitter showed a few months later 4, a formal solution of the equations of relativity with no right hand side, completely unacceptable with respect to Mach's principle: a universe of elliptic geometry void of matter but not deprived of gravitational field! In fact such a universe would be stable because it would not contain any matter. If particles, of mass sufficiently low not to affect the global geometry, are deposited in a given region of space, the effect of the cosmological constant will be to move them away towards an event horizon (located at a finite distance). The presence of this horizon was analyzed by Einstein as a singularity without physical counterpart.
The interpretation of the role of the cosmological term in fact comes down to its position in the Einstein equation. If it is on the left hand side (related to the metric tensor), the analysis of de Sitter indicates that Mach's principle is not respected, whereas if it is on the right hand side (related to the energy-momentum tensor), its origin must be traced back to the microscopic interactions, but then the effect of the repulsion remains to be explained.
During the Twenties, the problem of the cosmological constant appeared more and more severe. It was shown, on the one hand, that even Einstein's idea of relying on a cosmological constant leads to a universe that is unstable, and on the other hand, that the singularity of the horizon of the de Sitter model which Einstein denounced was actually not a singularity. It is with the works of Friedmann in 1922 and Lemaître in 1927 that events took another turn 5. Each of these two authors showed, independently of the other, that there are also dynamical solutions to the equation of Einstein. Space-time thus has the property of being able to dilate or to contract. Mathematically, these solutions connect the average density of matter not to the radius of the universe but rather to its scale factor a(t) and its time derivative ȧ(t); one thus has

(ȧ/a)² = 8πGρ/3 − kc²/a².

Such an innovation makes the addition of the cosmological constant useless. Initially, Einstein was not very interested in these dynamical solutions because they corresponded to open universes, for which the inertial interpretation of Mach's principle becomes problematic at infinity. He started to change his opinion at the end of the decade, when observational evidence of a temporal evolution of the geometry began to appear. This is how the "big-bang" theory (according to the name that the astronomer Fred Hoyle gave it around the middle of the 20th century), adding to relativity the expansion of space, produced the first valid cosmological model, for which the addition of the cosmological constant was not necessary any more.
The standard cosmological model: from the simple big-bang model to the cosmology of concordance
It is to Edwin Hubble that one owes the first observational indications of an expansion of the universe 6. This astronomer used for that the velocity measurement of far away receding galaxies whose position was also known (the velocity is in fact given by the Doppler shift of the spectral lines, whereas the distance is known thanks to Cepheid stars, present in these galaxies, whose absolute luminosity was known). The result which he announced in 1929 was that the recession velocity was proportional to the distance, that is to say v = H₀d, where H₀ is the constant which now bears his name and which makes it possible to determine the growth rate of space in the models of Friedmann and Lemaître (the numerical determination of this constant is frighteningly difficult; today, it seems to be equal 7 to 70.5 ± 1.3 km/s/Mpc).
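A quick consistency check: the inverse of the Hubble constant sets the natural age scale of an expanding universe, and with the value quoted above it lands close to the 13.7 billion years mentioned later in the text. A minimal sketch (the megaparsec and year conversions are standard values, not taken from the text):

```python
H0_km_s_Mpc = 70.5        # Hubble constant quoted in the text, km/s/Mpc
Mpc_in_m = 3.0857e22      # meters per megaparsec
year_in_s = 3.156e7       # seconds per year

H0 = H0_km_s_Mpc * 1e3 / Mpc_in_m     # convert to SI units, s^-1
hubble_time_Gyr = 1.0 / H0 / year_in_s / 1e9

print(f"H0 = {H0:.2e} s^-1")
print(f"Hubble time 1/H0 = {hubble_time_Gyr:.1f} Gyr")  # ~13.9 Gyr
```

That 1/H₀ is so close to the age obtained from detailed models is a coincidence of our epoch, but it shows the orders of magnitude are consistent.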
The very existence of a linear velocity-distance relation is a proof of the expansion, and of a uniform expansion, of the universe. Its origin is easily understood by means of the following analogy with a one-dimensional model: if one draws one end of a rubber band, the other end being maintained fixed, the displacement from the fixed origin of a particular point of the rubber band will be larger the more distant from it the point initially is. The fact that the proportionality constant is a scalar adds to the property of homogeneity of space that of its isotropy.
To Hubble's proof, two other major pieces of evidence for the expansion of the universe are added today. The first one relates to the relative abundances of the elements resulting from primordial nucleosynthesis. It has been noticed that the relative proportions of helium, deuterium and lithium are appreciably uniform in the whole universe. This indicates a common origin of the light elements. As those can be formed from protons and neutrons only at temperatures of about 10⁹ K, one can conclude that the universe must have been, at an earlier period, hotter and thus denser. That corresponds precisely to the idea of the expansion. It is in this manner that George Gamow, Ralph Alpher and Hans Bethe proceeded to show in 1948 that the standard theory of the big-bang led to an accurate prediction of the relative abundances of the light elements 8.
The second proof of the big-bang model resides in the observation of the cosmological microwave background (CMB). This electromagnetic radiation, detected in 1965 by two radio astronomers, Arno Penzias and Robert Wilson 9, was a prediction of the big-bang theory made first by George Gamow, and then by Ralph Alpher and Robert Herman 10 in 1948. This radiation originates in the radiative transitions of the first neutral atoms, which could be formed only when the temperature of the universe gradually became rather low (about 3000 K). Between this time (some 380 000 years after the big-bang) and today (approximately 13.7 billion years after the big-bang), the wavelength of the emitted photons increased with the expansion of the universe, so that this radiation, initially lying in the visible and ultraviolet domains, is detected today in the domain of radio waves. This shift in wavelength perhaps constitutes the most remarkable proof of the expansion of space-time.
Let us pause to specify the interest for cosmology of the observation of this cosmological background. At the time of its emission, matter is not organized: atoms are in thermal equilibrium with the radiation. The maximum of intensity of the radiation can then be connected, thanks to the black body theory discovered by Planck, to the temperature of matter. The object of the COBE and WMAP satellites (launched in 1989 and 2001 respectively) was to measure with a high accuracy the spatial variations of the temperature of the cosmological background. This temperature is on average equal 11 to 2.726 K, and its fluctuations (once the Doppler effect caused by the Earth's motion and the effects of known sources in the Milky Way are subtracted) affect only the fifth figure of the preceding value. If the study of this radiation thus states that the universe was remarkably homogeneous at its beginnings, these very small fluctuations also provide significant information, because the appearance of the large scale structures which one currently observes in the distribution of galaxies in the universe can be partly connected to the temperature fluctuations in the primordial matter.
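The stretch of wavelengths mentioned above can be read directly off the two temperatures given in the text, since a black-body temperature scales as the inverse of the scale factor of the universe. A minimal sketch:

```python
T_emission = 3000.0  # K, approximate temperature when the CMB was emitted (from the text)
T_today = 2.726      # K, measured average CMB temperature (from the text)

# T ~ 1/a(t), so the scale factor (and hence all wavelengths) grew by this factor
stretch = T_emission / T_today
print(f"Wavelengths have been stretched by a factor of about {stretch:.0f}")  # ~1100
```

This factor of about 1100 is what carries radiation emitted in the visible/ultraviolet down into the microwave domain.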
The models of Friedmann and Lemaître make it possible to correctly describe a universe compatible with the three major evidences of the expansion which have just been given. These cosmological models however do not explain why the cosmological background is so homogeneous and why the curvature of space (deduced for example from the density of matter and the growth rate provided by the measurements of the WMAP satellite) is so low. The observed homogeneity of the CMB is particularly paradoxical because it implies that two regions that have never been in causal contact in the past should now have almost exactly the same temperature. What could thus be the reason which makes them resemble each other so much? To explain this fact, it was proposed to supplement the first cosmological model which has just been presented with a new hypothesis: an initial exponential expansion of space-time, called inflation.
This idea was initially proposed, at the beginning of the Eighties, by Alan Guth 12. It can be exposed in the following way. In the very remote past of the universe, i.e. at temperatures ranging between 10²⁷ K and 10³² K, corresponding to times after the big-bang ranging between the Planck time (10⁻⁴³ s) and the time of the grand unification symmetry breaking (10⁻³⁵ s), a phase transition would have occurred whose causes remain to be explained. It could have been translated, in the equation of Einstein, by the appearance of a term similar to the cosmological constant. During this period, the universe would have extended in space according to an exponential of time, so that the scale factor of the universe would have increased by at least 26 orders of magnitude. This brutal dilation would have flattened the universe, making its spatial curvature negligible. Moreover, the various areas of the sky today observed in the cosmological background would then have been causally connected in their very remote past, a fact that cannot be accounted for by the sole Hubble constant. This scenario of inflation, intellectually attractive, lacked, until very recently, any observational support.
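The figure of 26 orders of magnitude translates into the number of "e-folds" of inflation, the quantity usually quoted in the literature. A one-line check:

```python
import math

orders_of_magnitude = 26  # minimal growth of the scale factor, from the text: 10**26
N_efolds = orders_of_magnitude * math.log(10)
print(f"Corresponding number of e-folds: {N_efolds:.0f}")  # ~60
```

The ~60 e-folds obtained this way is indeed the figure commonly required for inflation to solve the flatness and horizon problems.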
It turns out that, precisely, observational cosmology has very recently made considerable progress in two fields: the measurement of distances by means of the observation of type Ia supernovae in remote galaxies, which made it possible to improve the determination of the temporal dependence of the scale factor of the universe, and the measurement of the cosmological background, which allowed, starting from a very detailed study of its fluctuations, to improve the determination of the various components of the energy density. Using phenomenological models, the interpretation of the data coming from these two fields converged towards what one now calls the cosmology of concordance, or the ΛCDM model 13, for Lambda-Cold-Dark-Matter, which can be summarized in the following way:
- The spectrum of the fluctuations of the CMB is compatible with the inflation scenario, which thus leaves the realm of pure speculation;
- The age of the universe is estimated at 13.7 billion years, up to a few percent;
- The date of emission of the CMB is 379 000 years after the big-bang;
- The energy density of the universe is compatible 14 with the density known as the critical density, corresponding to a spatially flat universe;
- A large part of this energy density behaves as a "dark energy" whose effect is equivalent to that of the cosmological constant Λ introduced and then given up by Einstein.
This last observation, which one can describe as a true discovery, is a complete surprise. If confirmed, it would herald a new scientific revolution, since it leads to a surprising prediction about the far future of the universe: with a non-vanishing cosmological constant, when time goes to infinity the horizon radius goes to the length associated with the cosmological constant, the energy density goes to the dark energy density ρ_DE, and the observable universe (i.e. the universe inside the horizon) becomes empty since all galaxies end up beyond the horizon 16.
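The critical density corresponding to a spatially flat universe is given by the standard expression ρ_c = 3H₀²/(8πG). Evaluating it with the Hubble constant quoted earlier gives a strikingly small number, a few hydrogen atoms per cubic meter (the constants below are standard values, not taken from the text):

```python
import math

G = 6.674e-11              # gravitational constant, SI units
H0 = 70.5e3 / 3.0857e22    # 70.5 km/s/Mpc converted to s^-1
m_H = 1.67e-27             # mass of a hydrogen atom, kg

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density rho_c = {rho_c:.1e} kg/m^3")       # ~9e-27 kg/m^3
print(f"i.e. about {rho_c / m_H:.1f} hydrogen atoms per m^3")
```

That the measured total energy density matches this tiny value is the observational content of the statement that the universe is spatially flat.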
Entropy and temperature of black holes
Already in the Seventies, works of Penrose, Hawking, Bekenstein and Carter on black holes led to an understanding of their physics which required elements of thermodynamics. To explain that, let us come back to the classical expression which connects the escape velocity to the size of a massive sphere. Since no object can exceed the velocity of light, a body of mass M will retain with it everything lying inside a region of radius R smaller than 2GM/c² (the Schwarzschild radius). The boundary of this region is the event horizon of the black hole (an external observer will not see anything of what occurs inside this zone). This is the origin of the link with thermodynamics: contrary to classical mechanics, where the motion of particles is reversible, the physics of the black hole imposes an orientation of time. This characteristic is at the very foundation of thermodynamics. Thus, as the area of the horizon of a black hole can only increase when it accretes matter, it is possible to make it play the role of an entropic variable.
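For a concrete scale, the Schwarzschild radius 2GM/c² can be evaluated for the Sun; it comes out at about 3 km, which shows how strongly matter must be compressed to form a black hole (the solar mass below is a standard value, not from the text):

```python
G = 6.674e-11      # gravitational constant, SI units
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg

R_s = 2 * G * M_sun / c**2   # Schwarzschild radius
print(f"Schwarzschild radius of the Sun: {R_s / 1e3:.2f} km")  # ~2.95 km
```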
Bekenstein 17 determined in 1973 the precise expression of the entropy of a black hole, using arguments based on quantum physics. Let us sketch his reasoning. To increase the entropy of the black hole by the smallest possible amount, a bit of information k ln 2 (k is the Boltzmann constant) is dropped into the black hole in the form of a photon of the smallest possible energy, i.e. with a wavelength equal to the Schwarzschild radius R_S. The corresponding increase in energy is ΔE ≈ ħc/R_S, which translates, through the Einstein relation E = Mc², into an increase of the mass and hence of the Schwarzschild radius, ΔR_S = 2GΔM/c². The resulting increase of the horizon area, ΔA ≈ R_S ΔR_S, is of the order of the Planck area ℓ_P² = Għ/c³ (namely four times the Planck area in Bekenstein's evaluation). This area increase is independent of any particular characteristic of the black hole (such as its mass). Starting from this differential evaluation, one can go up to the total entropy of the black hole by noting that, the entropy of a black hole of null size being null, there is no constant of integration. One thus expects the total entropy of the black hole to be equal to

S = η k A/ℓ_P²,

where the unknown constant factor η is of order 1. In addition, Hawking 18 has shown that it is also possible to associate with the black hole a temperature

T_H = ħc³/(8πGMk),

and that quantum physics authorizes the black hole to evaporate. Combining this temperature with the thermodynamic identity dE = T dS, one can fix the value of the unknown constant factor η. One then finds

S = kA/(4ℓ_P²).
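Plugging numbers into these two formulas shows how extreme black-hole thermodynamics is: a solar-mass black hole is far colder than the CMB, yet carries an enormous entropy. A sketch with standard constants (not taken from the text):

```python
import math

G = 6.674e-11; c = 2.998e8
hbar = 1.055e-34; k = 1.381e-23
M_sun = 1.989e30

# Hawking temperature T_H = hbar c^3 / (8 pi G M k)
T_H = hbar * c**3 / (8 * math.pi * G * M_sun * k)
print(f"T_H for a solar-mass black hole: {T_H:.1e} K")  # ~6e-8 K

# Bekenstein-Hawking entropy S/k = A / (4 l_P^2)
R_s = 2 * G * M_sun / c**2          # Schwarzschild radius
A = 4 * math.pi * R_s**2            # horizon area
l_P2 = G * hbar / c**3              # Planck area
S_over_k = A / (4 * l_P2)
print(f"S/k for a solar-mass black hole: {S_over_k:.1e}")  # ~1e77
```

The dimensionless entropy ~10⁷⁷ dwarfs the thermodynamic entropy of the star the black hole formed from, which is one reason the formula was initially so surprising.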
Bekenstein's bound and holographic principle
As the formation of a black hole is the most effective way to compress matter in a given volume, the Bekenstein entropy appears as the upper limit of the information which can be contained in a region of space-time. This upper limit is expressed by the holographic principle 19: "How many degrees of freedom are there in nature, at the most fundamental level? The holographic principle answers this question in terms of the area of surfaces in space-time: (…) A region with boundary of area A is fully described by no more than A/4 degrees of freedom, or about 1 bit of information per Planck area 20."
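To get a feeling for how generous this bound is, one can count the bits allowed on the boundary of an ordinary sphere, reading the Bekenstein-Hawking formula as A/(4ℓ_P²) nats, i.e. A/(4ℓ_P² ln 2) bits (the constants are standard values, not from the text):

```python
import math

G = 6.674e-11; c = 2.998e8; hbar = 1.055e-34

l_P2 = G * hbar / c**3           # Planck area, m^2
R = 1.0                          # radius of the sphere, m
A = 4 * math.pi * R**2           # area of the boundary

max_bits = A / (4 * l_P2 * math.log(2))
print(f"Holographic bound for a 1 m sphere: {max_bits:.1e} bits")  # ~2e70
```

Any ordinary physical system of that size falls short of this bound by dozens of orders of magnitude; only a black hole saturates it.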
The problem of information non-conservation and its solution by Susskind
As the evaporation of a black hole seems to be a non-unitary, purely thermal process, Hawking 21 raised the problem of the non-conservation of information in the dynamics of black holes, which could announce an irreducible incompatibility between gravitation and quantum physics, for which the principle of conservation of information (also called the principle of unitarity of the S-matrix) is fundamental. L. Susskind describes in his popular science book 22 the long debate which opposed him to Hawking on this subject and the way in which he managed to solve the problem. He shows 23 that the paradox raised by Hawking is due only to the so-called semi-classical approximation which Hawking used in modeling the evaporation of the black hole, and that an entirely quantum treatment of gravitation should allow one to solve the paradox. Although a quantum theory of gravitation is not yet available, he proposes a model which, thanks to the holographic principle, respects the principle of conservation of information in the complete process which goes from the formation to the evaporation of a black hole: the quantum dynamics of the black hole is described by means of a unitary S-matrix defined on the horizon of the black hole. The unitarity of this "holographic" S-matrix is ensured by a combination of the principle of equivalence of general relativity and the principle of complementarity of quantum physics. The principle of equivalence tells us that an observer maintained outside the black hole perceives its horizon as a thermal system (namely a black body), whereas an observer in free fall into the black hole does not perceive the horizon, which is a border of no return. How can the information perceived by the observer in free fall not be irremediably lost for the external observer?
The answer suggested by Susskind to this question lies in the principle of complementarity of quantum physics: according to the above mentioned holographic principle, all information concerning the evolution of the black hole is encoded on the horizon, and in quantum physics, information residing on both sides of the horizon, which can be accessed only under contradictory conditions of detection (that of the observer in free fall and that of the external observer), is complementary, as are, for example, the dynamical variables constrained by the Heisenberg inequalities.
A thermodynamic route towards quantum cosmology
The idea that gravity can be described as an emergent phenomenon has a long history which originates with the work of Sakharov 24. The gravity-thermodynamics connection was discovered by Jacobson 25, who used the proportionality of the entropy to the area of the horizon and a classical thermodynamical identity to assimilate the Einstein equation to an equation of state. The implications of this connection were thoroughly analyzed by Padmanabhan 26. In reference 27 he presents the guiding principles of his program and the stages of what would be a thermodynamic route towards quantum cosmology:
1. The horizons are inevitable in the theory and they always depend on the observer;
2. The thermal nature of the horizons cannot occur without space-time having a microstructure;
3. All observers have the right to describe physics using an effective theory based on the variables to which they have access;
4. The problem of the cosmological constant (why is it so small?) is due only to our bad understanding of the nature of gravitation. This problem cannot be solved in a theory arising from an action which (i) is generally covariant, (ii) uses as dynamical variables the components of the metric and (iii) comprises a matter sector whose energy is defined up to an additive constant;
5. Gravity is an emergent phenomenon, which means that the components of the metric tensor are not the fundamental degrees of freedom and that its fundamental equations must be derivable from a new paradigm based on the connection between the equations controlling the dynamics of the metric and the thermodynamics of horizons. This paradigm should make it possible to obtain the dynamical equations without it being necessary to vary the metric in the action principle;
6. The theory of Einstein is only an effective theory at low energy; the thermodynamic description should provide keys to evaluate the corrections to this theory.
In the final chapter, entitled Gravity as an emergent phenomenon, of the book 28 which he has just published, Padmanabhan reviews the results that he obtained in the achievement of his program, in particular with regard to item #4: he has shown that it is possible to derive the equations of the gravitational field by varying, in a variational principle, degrees of freedom other than the components of the metric, residing on the horizon, in agreement with the holographic principle which stipulates that "the true degrees of freedom of gravity for a volume V, which cannot be eliminated by a gauge choice [i.e. by a choice of reference frame], reside on its boundary ∂V 29". He then shows that in the volume delimited by the horizon, the cosmological constant is decoupled from gravity. This decoupling is the consequence of the fact that the equations of the field are invariant under a change of the arbitrary additive constant of the matter energy, which gives the freedom to introduce the cosmological constant as a constant of integration once the equations are solved, and not in the action from which the equations derive. Such a scheme would make it possible to solve the problem of the cosmological constant and to lead to a satisfactory agreement with the observational data as consigned in the standard cosmological model.
Gravity as an entropic force
Very recently, the gravity-thermodynamics connection caused a significant renewed interest following an article by Verlinde 30 in which he interprets gravity as an entropic force. He uses a heuristic reasoning based on an analogy with the physics of polymers. He considers a polymer molecule immersed in a thermal bath, with one of its ends fixed in the bath. If one tries to extract the molecule by drawing it by the other end, it will be submitted to a force of entropic nature, which will tend to bring the molecule back to a state maximizing its entropy. Extending this analogical reasoning to the thermodynamics of horizons, with a temperature and an entropy defined à la Hawking and Bekenstein, he succeeds in interpreting Newton's force of gravity as an entropic force. He continues his reasoning by showing that the laws of Newton, that of inertia and that of the force of gravitation, can be regarded as emergent, and that, in the same way, the principle of equivalence is emergent. Within a relativistic framework, he then shows that his conjecture makes it possible to derive Einstein's equations! The strong conclusion that he draws from his work is that it will be necessary to get accustomed to the idea that gravity is not a fundamental force: "It is time we not only notice the analogy, and talk about similarity, but finally do away with gravity as a fundamental force". He also suggests that such a paradigm shift should also take place in string theory, which is regarded as the best candidate for a quantum theory of gravitation, because the relation between open strings (which describe matter) and closed strings (which describe gravitation) can also be interpreted in terms of the emergence of gravitation.
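The heuristic chain just described can be checked symbolically. The sketch below is a reconstruction of the published reasoning (not Verlinde's own code): it counts one bit per Planck area on a spherical holographic screen of radius R, spreads the energy E = Mc² over those bits by equipartition, and inverts the Unruh relation between temperature and acceleration; Newton's inverse-square law then falls out:

```python
import sympy as sp

G, hbar, c, k, M, m, R = sp.symbols('G hbar c k M m R', positive=True)

A = 4 * sp.pi * R**2              # area of the spherical holographic screen
N = A * c**3 / (G * hbar)         # number of bits on the screen (one per Planck area)
T = 2 * M * c**2 / (N * k)        # equipartition: E = M c^2 = (1/2) N k T
a = 2 * sp.pi * c * k * T / hbar  # Unruh relation T = hbar a / (2 pi c k), inverted

F = sp.simplify(m * a)            # entropic force on a test mass m
print(F)  # G*M*m/R**2
```

All the constants ħ, c and k cancel in the final expression, which is part of what makes the argument suggestive: the microscopic ingredients disappear from the macroscopic law.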
6/ Conclusion
It thus appears to us today that current physics is in a prerevolutionary situation which is not without analogy with the one that prevailed at the beginning of the 20th century. New elements of observation, like the rediscovery of the cosmological constant, the discovery of the acceleration of the cosmic expansion, the confirmation of the inflation scenario, as well as the interrogations caused by dark matter, come to shake the contemporary theoretical building of physics. Just like what occurred at the beginning of the last century, statistical thermodynamics seems to be the missing piece of the puzzle. It makes it possible to bring together this time two fields hitherto considered as disconnected: general relativity and quantum physics, each one resulting from the dissipation of one of the two clouds of Lord Kelvin. This time these two theories must cohabit in the description of black holes in particular, and of cosmology in general. It is the holographic principle which builds the bridge between the principle of equivalence of general relativity and the principle of complementarity of quantum physics. It makes possible a description of the dynamics of black holes free from any paradox and provides the basis of an understanding of the cosmological term in the equation of Einstein. The holographic principle accounts for the echo on the horizon of the events occurring in the volume of the expanding universe, and it applies to any event horizon.
The consequences of the existence of the holographic principle are of paramount importance, because the central role of the horizon and of its thermal properties puts into question the geometrical paradigm on which physics was built, following the Greeks, since the 17th century. Information becomes primary and space is emergent; the fine marble of the left-hand side of the equation of Einstein is no more solid than the cheap and ordinary wood of its right-hand side. The left-hand side is to some extent the geometrical screen of Plato's cave, on which the shadows of a more essential world are projected. The form it takes in the equation of Einstein constitutes a low-energy approximation of a more general theory yet to be elaborated. This informational, rather than geometrical, approach at the same time makes it possible to comprehend the principle of Copernicus and the principle of relativity of Galileo. If space does not exist by itself, then there is no privileged position in space and any observer can be regarded as the center of the world observable by him (or her), or in other words, in absence of a viewpoint, the point

30 E. Verlinde, On the Origin of Gravity and the Laws of Newton (arXiv:1001.0785)
Root Coverage with a New Collagen Matrix and Coronally Advanced Flap: A Case Report
The collagen matrix (CM) is approved by the US Food and Drug Administration for regenerative therapy involving teeth and implants, including treatment of dehiscence defects around teeth. It is fabricated as a matrix composed of pure porcine collagen obtained by standardized, controlled manufacturing processes. The collagen was extracted from veterinary-certified pigs and purified to avoid antigenic reactions. The matrix is made of collagen types I and III without further cross-linking or chemical treatment and is sterilized in double blisters by gamma irradiation. CM has two layers and is approximately 2.5 mm thick. The first, compact layer, facing the oral cavity, consists of a denser collagen that protects the wound but allows tissue adherence for favorable wound healing. This layer has a smooth texture with appropriate properties to accommodate suturing to the host mucosal margins. The second layer is a thicker, porous collagen that encourages tissue integration. This porous surface is placed adjacent to the host tissue to facilitate organization of the blood clot and promote neoangiogenesis [11,12]. Because CM seems to be a promising soft tissue graft substitute, we decided to test in this case report whether its placement under a CAF in a recession defect supports root coverage.
Introduction
The treatment of gingival recession is a common requirement due to aesthetic concern or root sensitivity in patients with high standards of oral hygiene [1]. Gingival recession is defined as an apical shift of the gingival margin from its position 1 mm coronal to or at the level of the cemento-enamel junction (CEJ) with exposure of the root surface to the oral environment [2].
Tooth brushing trauma is the primary etiologic factor for gingival recession; in this situation, cervical abrasion defects are frequently associated with the root exposures [3]. In the last decades, different surgical procedures have been proposed to obtain root coverage: pedicle flaps (PF), connective tissue graft (CTG), guided tissue regeneration, coronally advanced flaps (CAF), CAF+CTG, and more recently CAF+CTG plus enamel matrix derivative [4][5][6][7][8][9]. The coronally advanced flap (CAF) is a very common approach for root coverage; the surgery does not involve a palatal donor site, and it has been demonstrated to be safe and predictable. Localized gingival recessions have been successfully treated with this technique [10]. Outcomes have also been reported when adding a connective tissue graft to the coronally advanced flap (CAF+CTG).
Due to the morbidity and time associated with soft tissue graft harvest and the limited supply, the acellular dermal matrix is an important substitute. However, because this allograft material is derived from human cadavers, it is associated with ethical concerns, a possible risk of disease transmission, and extensive shrinkage during the healing period, and it is not completely incorporated histologically.

The root surface was gently scaled and planed with Gracey curettes (Hu-Friedy, Chicago, IL, USA), which contributed to reducing the buccal prominence, and was conditioned with 24% EDTA gel for 2 min to remove the smear layer. The exposed root surface was rinsed abundantly with sterile saline solution to remove all EDTA residues.
The surgical technique used to achieve soft tissue coverage was CAF. Two oblique, divergent beveled incisions were performed at the mesial and distal line angles of the two peripheral teeth with gingival recession. These incisions, together with the intrasulcular incisions along the mesial and distal recession margins, designed the two external surgical papillae. Crossed submarginal incisions, made interproximally, created the interdental surgical papillae.
The soft tissue apical to the root exposure (including the residual keratinized tissue) was elevated full thickness by inserting a small periosteum elevator into the probeable sulcus and proceeding in the apical direction to expose 3.0 to 4.0 mm of bone apical to the bone dehiscence. This was done to include the periosteum and the maximum soft tissue thickness in the central portion of the flap covering the avascular root exposure (Figure 1b).
The vertical incisions were elevated split thickness, keeping the blade almost parallel to the bone plane, thus leaving the periosteum to protect the underlying bone in the lateral areas of the flap. Apical to the bone exposure, split-thickness flap elevation continued until it was possible to move the flap passively in the coronal direction. To permit the coronal advancement of the flap, all muscle insertions present in the thickness of the flap were eliminated. This was done keeping the blade parallel to the external mucosal surface. Coronal mobilization of the flap was considered adequate when the marginal portion of the flap was able to passively reach a level coronal to the CEJ of the recession defects. The flap should be stable in its final coronal position, even without the sutures. Once coronally advanced, the flap partially overlaid the soft tissues mesial and distal to the receiving bed. These areas and the facial soft tissue of the anatomic interdental papillae were deepithelialized to create connective tissue beds (Figure 1c). CM test material was trimmed to extend 2.0 to 3.0 mm beyond the bone crest (both laterally and apically) (Figure 1d) and fixed with a sling suture using a 5-0 suture (Vicryl, Johnson & Johnson, S. J. Campos, Brazil) around the crown of the tooth (Figure 1e). The flap was coronally positioned 2.0 mm above the CEJ to fully cover the CM by suturing it to the de-epithelialized papilla regions. At all times caution was maintained to avoid over-compression of the test material. Suturing of the flap started with two interrupted periosteal 5-0 sutures at the most apical extension of the vertical incisions; it proceeded coronally with other interrupted sutures, each of them directed from the flap to the adjacent buccal soft tissue, in the apical-coronal direction. This was done to facilitate the coronal displacement of the flap and to reduce the tension of the flap.
The sling sutures permitted stabilization of the surgical papillae over the interdental connective tissue beds and allowed for a precise adaptation of the flap margin over the convexity of the underlying anatomic crowns. At the end of the surgery, the flap margin was coronal to the CEJ (Figure 1f). This was done to compensate for post-surgical soft tissue shrinkage.
No periodontal dressing was applied. No antibiotic was prescribed. Acetaminophen 750 mg was recommended as needed for pain. The patient was instructed to rinse three times a day for 1 minute with 0.12% chlorhexidine digluconate solution for 4 weeks. Rapid surgical healing with minimal postoperative morbidity was observed at 1 week (Figure 2a). The sutures were removed 14 days after surgery.

The patient had presented with an esthetic complaint involving a 3 mm buccal gingival recession associated with traumatic brushing of the maxillary left canine (Figure 1a). Her medical history was unremarkable, with no contraindications for periodontal surgery; she was taking no medications known to interfere with periodontal tissue healing, and she denied any history of smoking or collagen allergy. A periapical radiograph was taken in a standardized manner using the long-cone paralleling technique.
The patient agreed to participate in this study and gave written informed consent on an Institutional Review Board consent form. The study protocol involved an initial therapy to establish optimal plaque control and gingival health, surgical therapy, a maintenance phase, and postoperative evaluations 2, 4, 6 and 12 months after the surgery. Clinical photographs were taken at baseline, at surgery, and at each follow-up visit.
A periodontal examination was performed and all clinical measurements were determined to the nearest millimeter using a UNC-15 periodontal probe (Hu-Friedy, Chicago, IL, USA). Vertical probing measures were made at the mid-buccal aspect of the canine, measured from the CEJ to the free gingival margin. The parameters recorded at baseline included gingival recession depth (3 mm), probing depth (2 mm) and clinical attachment level (5 mm). The width of keratinized tissue (3 mm) was determined from the gingival margin to the muco-gingival junction (MGJ), which was identified by coronal displacement of the alveolar mucosa against a horizontally positioned periodontal probe.
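As a sanity check, these baseline values are internally consistent: with the gingival margin apical to the CEJ, the clinical attachment level is the sum of the recession depth and the probing depth:

```latex
\mathrm{CAL} \;=\; \mathrm{REC} + \mathrm{PD} \;=\; 3\,\text{mm} + 2\,\text{mm} \;=\; 5\,\text{mm}.
```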
The goal of treatment was to determine whether a CM with CAF might be as effective in the root coverage of a Miller class I recession defect. The patient was offered the coronally advanced flap plus connective tissue graft (CAF+CTG), which is considered the gold-standard procedure. Because of the morbidity and time associated with soft tissue graft harvest, CM+CAF was chosen.
Surgical Procedure
Preoperative intra-oral antisepsis was accomplished with a 0.12% chlorhexidine digluconate solution rinsed for 1 min, followed by administration of local anesthesia with 2% mepivacaine (1:100,000). The subject was advised to practice careful mechanical oral hygiene for 4 weeks following surgery to minimize trauma to the surgical site; after this period, she was instructed in the Bass technique with an ultrasoft toothbrush and monitored once every 2 months until the end of the study at 12 months. During this period she received professional supra-gingival plaque control. The patient reported very slight discomfort.
The clinical observation at 12 months revealed complete root coverage with an adequate zone of keratinized tissue, good healing, and tissue contour, color, and texture without adverse sequelae (such as keloids), blending nicely with the adjacent native soft tissues (Figures 2b and 2c).
Discussion
This report documents the use of CM+CAF for root coverage. This surgical procedure was designed to be shorter and less aggressive, and was thought to have fewer postoperative complications. Recently published prospective clinical trials investigating the efficacy of CM in treating both keratinized mucosal deficiencies and gingival recession defects suggest that CM may provide a viable alternative to autogenous tissue grafts and an unlimited "off-the-shelf" supply of grafting material, reducing surgery time by approximately one-third. The savings in time and discomfort are weighed against the cost of the matrix [11,12].
Although esthetics is considered the primary goal of root-coverage procedures, few studies have evaluated changes in esthetic conditions as judged by patients. In these studies, patients were satisfied with the final esthetic result. The incidence of adverse effects, such as discomfort with or without pain, was directly related to the donor sites of CTG [12]. Also, procedures that reduced the operative time, eliminated the need for a second surgical site and its associated morbidity, and used smaller palatal grafts were better accepted [13].
In vitro testing of this collagen matrix showed the in-growth of primary human fibroblasts into the CM, which resulted in an increased expression of extracellular matrix proteins such as collagen type I and fibronectin. Recently, CM and another prototype with a different source of collagen were compared in a non-submerged healing environment in combination with the apically repositioned flap. Clinical results demonstrated an increase in the width and thickness of the KT. The qualitative histological analysis revealed complete healing with both matrices, resulting in mature mucosal and submucosal tissues.
In an experimental study, the combination of a CM and the CAF procedure significantly reduced the recession and increased the width of KT. Histologically, the matrix allowed uneventful healing, being completely incorporated into the adjacent host connective tissues in the absence of a significant inflammatory response. The healing was characterized by the formation of new cementum and new connective tissue attachment in the apical aspect of the defect and by a junctional epithelium in its most coronal third. When compared with the CAF alone, the CM graft attained more tissue regeneration, with a shorter epithelium and larger new cementum formation [14].
In an in vivo evaluation of Mucograft®, minimal inflammation was observed and no multinucleated giant cells were present. The material persisted in the tissue throughout the study. In the same research, the results demonstrated great potential to reverse tissue recession and promote healthier gingival tissue [15]. Collagen matrix can enhance oral soft tissue healing compared with spontaneous healing during the first week, based on clinical observations suggesting that its two distinct layers improve healing through early stabilization of the coagulum (matrix function) [16].
Summary
This case report suggests that CM+CAF can provide a valid treatment procedure for Miller class I root coverage, and CM may offer a new option beyond CAF alone and palatal harvest. The need for two-stage surgery was eliminated, with a significant reduction in surgery time, no related pain, and less morbidity. More extensive, long-term clinical studies are needed to support the results obtained.
Impact of Medication Regimen Simplification on Medication Incidents in Residential Aged Care: SIMPLER Randomized Controlled Trial
In the SImplification of Medications Prescribed to Long-tErm care Residents (SIMPLER) cluster-randomized controlled trial, we investigated the impact of a structured medication regimen simplification intervention on medication incidents in residential aged care facilities (RACFs) over a 12-month follow-up. A clinical pharmacist applied the validated 5-step Medication Regimen Simplification Guide for Residential Aged CarE (MRS GRACE) for 96 of the 99 participating residents in the four intervention RACFs. The 143 participating residents in the comparison RACFs received usual care. Over 12 months, medication incident rates were 95 and 66 per 100 resident-years in the intervention and comparison groups, respectively (adjusted incident rate ratio (IRR) 1.13; 95% confidence interval (CI) 0.53–2.38). The 12-month pre/post incident rate almost halved among participants in the intervention group (adjusted IRR 0.56; 95%CI 0.38–0.80). A significant reduction in 12-month pre/post incident rate was also observed in the comparison group (adjusted IRR 0.67, 95%CI 0.50–0.90). Medication incidents over 12 months were often minor in severity. Declines in 12-month pre/post incident rates were observed in both study arms; however, rates were not significantly different among residents who received and did not receive a one-off structured medication regimen simplification intervention.
Introduction
Medication errors are estimated to cost USD 42 billion annually, or 0.7% of global health expenditure [1]. Medication Without Harm is the World Health Organization's (WHO) Third Global Patient Safety Challenge, and Medication Safety was recently declared an Australian national health priority area [2,3]. Medication errors and incidents have been defined as "any preventable event that may cause or lead to inappropriate medication use or patient harm while the medication is in the control of the health care professional, patient, or consumer" [4]. Incidents can arise at points in the medication management cycle including prescribing, dispensing, administration and monitoring [5,6]. A review of 36 studies across all United Kingdom (UK) National Health Service (NHS) settings reported medication error rates from 0.2% (prescribing error rate at hospital discharge) to 90.6% (proportion of residents of aged care facilities who received a potentially inappropriate medication) [7], while a systematic review of 91 direct observation studies of the NHS reported a median error rate including dose timing errors of 19.6% [8].
There is a high potential for medication incidents in residential aged care facilities (RACFs) due to high rates of multimorbidity, polypharmacy, and frequent transitions of care [9][10][11][12]. Medications often implicated in errors, such as psychotropic medications, opioids, anticoagulants, antidiabetic agents and diuretics are prevalent in RACFs [9,11,12]. A UK care home study reported four-fold higher incident rates for liquid medications and 19-fold higher for topical, injectable, or transdermal medications compared to tablets and capsules [13]. In Australia, medication management is the leading source of complaints regarding residential aged care [14]. A systematic review of medication errors in RACFs reported 16-27% of residents experienced a medication error, with 13-31% of hospital transfers examined in three studies due to medication errors [15]. One UK study involving interviews, case note review, direct observation and inspection of dispensing records reported that errors occurred in 70% of residents, while a second UK study determined 90% of residents had one or more administration errors over a three-month period [16,17]. Underreporting of errors is variable and may be due to inaccessible or difficult reporting systems, limited understanding of reporting, and fear of punitive action [11,15,18]. Apparent variability in error rates may also be explained by different methods for ascertaining and categorizing errors.
Interventions to reduce incidents include electronic or standardized medication administration charts, medication adherence aids, medication distribution technologies, computerized decision support and embedding pharmacists within RACFs [19][20][21][22][23]. No randomized controlled trial (RCT) has evaluated the impact of simplifying medication regimens on medication incidents in RACFs. Medication regimen complexity can arise due to number of medications, multiple administration times, non-oral formulations, and additional dosing instructions (e.g., crush tablets, administer with food) [24,25]. Residents with more complex medication regimens are more likely to be hospitalized over a 12-month period [26]. In hospital settings, number of medication doses and unscheduled dosing times are associated with medication incidents [27].
The SImplification of Medications Prescribed to Long-tErm care Residents (SIMPLER) study is a three-year cluster randomized controlled trial involving 242 participants [28]. The overall objective of the SIMPLER study was to improve resident health and quality of life through reducing the number of daily medication administration times. Medication simplification was possible for 62 (65%) of the 99 residents in the intervention arm of the SIMPLER study and 57 (62%) of 92 simplification recommendations made by the pharmacist delivering the intervention were implemented by four-month follow-up. The most frequent recommendations were to change an administration time (65%), formulation (27%), or dose frequency (4%). At four-month follow-up the mean number of medication administration times (the primary outcome) was significantly reduced in the intervention compared to comparison arm (−0.36, 95% confidence intervals (CI) −0.63 to −0.09, p = 0.01) and this was maintained at eight-and 12-month follow-up [29,30]. Although the rate of medication incidents was greater in the intervention arm compared to the comparison arm at four-month follow-up in the unadjusted analyses (incident rate ratio (IRR) 1.91, 95% CI 1.02 to 3.67), no significant difference was observed after adjustment for the rate of medication incidents in the four months pre-study entry (IRR 1.55, 95% CI 0.81 to 2.91, p = 0.17). The objective of this planned secondary outcome analysis was to investigate the impact of medication regimen simplification on medication incidents at 12-month follow-up of the SIMPLER study.
Study Design
The SIMPLER study is an open-label, matched-paired cluster randomized controlled trial involving eight RACFs [28]. Participating residents from the four RACFs randomized to the intervention arm received a one-off clinical pharmacist simplification intervention. Residents of the four comparison RACFs received usual care. In Australia, medications are prescribed and dispensed by off-site physicians and pharmacists and administered by RACF staff to residents who are often living with cognitive impairment or dementia [11,12]. The eight participating RACFs used hard copy medication charts, with medications administered to residents from pre-packed dose administration aids (e.g., blister packs, sachets). The study was approved by the Monash University Human Research Ethics Committee (0781) and the participating aged care provider organization. Written informed consent was obtained from participants or from their guardian, next of kin, or significant other when the resident was unable to provide written informed consent to participate. The SIMPLER trial was registered with the Australian New Zealand Clinical Trials Registry (ACTRN12617001060336).
Participants
Participants were recruited between April and October 2017. All English-speaking residents taking at least one regular medication were eligible. Residents were excluded if RACF staff deemed they were medically unwell or were estimated to have less than three months to live. The 242 participating residents were similar to all residents of Australian RACFs in terms of age (62% vs. 59% aged 85 years or older), sex (74% vs. 67% female), and length of RACF stay (2.5 years vs. 2.9 years) [31].
Intervention
The intervention was a one-off application of the Medication Regimen Simplification Guide for Residential Aged CarE (MRS GRACE) [32]. MRS GRACE is a structured, validated implicit tool to assist pharmacists and other clinicians to identify opportunities for medication simplification. An experienced clinical pharmacist reviewed medication charts for participants in the four intervention RACFs and used the principles outlined in the MRS GRACE to identify opportunities to simplify regular medications. Regimen simplification involved consolidating administration times through administering medications at the same time, standardizing routes of administration, using long-acting rather than short-acting formulations, and switching to combination rather than single-ingredient formulations, where possible [28]. The most common recommendations made involved adjusting the timing of medication dosing: for example, consolidating medications taken at 07:00 am and 08:00 am, if appropriate. Other common recommendations included changing paracetamol from immediate-release tablets prescribed four times daily, to sustained-release tablets prescribed three times daily, and using combination products (e.g., metformin 1000 mg tablet and saxagliptin 5 mg tablet was changed to saxagliptin/metformin 5 mg/1000 mg tablet) [29].
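The consolidation principle described above, merging dosing rounds that fall within a clinically acceptable window, can be sketched in a few lines of Python. This is an illustrative toy model, not the MRS GRACE tool itself: the 60-minute window and the drug names are assumptions, and real simplification requires clinical judgment about food requirements, interactions, and release profiles.

```python
def simplify_times(schedule, window_minutes=60):
    """Merge administration rounds that fall within `window_minutes` of an
    earlier round, so the medications can be given together.
    `schedule` maps a time (minutes from midnight) to a list of drug names."""
    merged = {}
    for t in sorted(schedule):
        # Attach this round to an existing one if it is close enough in time
        anchor = next((a for a in merged if abs(a - t) <= window_minutes), None)
        if anchor is None:
            merged[t] = list(schedule[t])
        else:
            merged[anchor].extend(schedule[t])
    return merged

# Toy regimen: the 07:00 (420 min) and 08:00 (480 min) rounds consolidate,
# leaving two administration times instead of three.
schedule = {420: ["levothyroxine"], 480: ["metformin", "perindopril"],
            1200: ["simvastatin"]}
print(simplify_times(schedule))
```

Running this merges the morning rounds into a single 07:00 round while the 20:00 round stays separate, mirroring the "consolidate 07:00 am and 08:00 am" recommendation in the text.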
Recommendations arising from the intervention were communicated to the residential services manager (RSM) or clinical nurse consultant at the RACF and general practitioner (GP), who were responsible for reviewing and implementing the simplification recommendations.
Outcomes
In this planned secondary analysis, the outcome of interest was the number of medication incidents in the 12 months following the intervention. Medication incident data were extracted from the organization's risk management and reporting system, which was uniform across the eight RACFs. Incidents were entered into the database after detection by the RACF staff according to the organization's Client Incident Reporting Policy. The client incident reports capture information using a combination of radio buttons and free text fields, including the incident date, time, personnel involved, person completing the form, specific location of the incident, description of the incident, immediate action taken, outcomes of investigations and other findings, controls/strategies implemented in response to the incident, hospital transfer details, family/police notifications, and additional information about the specific incident type. Incidents were then reviewed by the RACF RSM who classified the incident by type as an administration error (incorrect medication/dose/route, incorrect time/date, missing medication/medication not available, omission, other), adverse drug reaction, resident error, pharmacy dispensing error, or prescribing error. Incident severity and response were determined by the RSM using a Severity Assessment Code (SAC) matrix combining the impact of the incident with the likelihood of occurrence. This is a widely used approach in Australia and internationally and is consistent with the approach advocated by SA Health in South Australia where the RACFs were located [33]. RSMs had previously been trained on the use of the SAC Matrix and risk assessment processes. The SAC matrix is used to assess the severity of all incidents within the RACF (i.e., medication incidents, falls, near misses, incidents relating to client behaviour) and considers both resident, staff, and organizational consequences. 
First, the general impact of the incident is categorized as minimal, minor, moderate, major, or severe. Minor events include near misses and events managed with existing processes that did not result in resident injury or service disruption. Examples of severe events include resident or staff death, or complete loss of service provision. The likelihood of occurrence is then categorized as rare (i.e., unlikely to occur or may happen in 5-30 years), unlikely, possible, likely, or frequent (i.e., expected to recur either immediately or within weeks/months). The incident is then categorized using the SAC matrix to produce a final score from 1 to 4, with a lower SAC score representing an extreme risk (Supplementary Table S1). An incident with an SAC of 4 was managed through routine procedures while an incident with an SAC of 1 required immediate escalation to the chief executive officer and other executive members.
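The two-step categorization above amounts to a matrix lookup: impact category crossed with likelihood category yields the final SAC score. The sketch below is illustrative only; the numeric scores in the matrix are hypothetical placeholders, since the trial's actual SAC matrix is given in Supplementary Table S1.

```python
IMPACT = ["minimal", "minor", "moderate", "major", "severe"]        # columns
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "frequent"]  # rows

# Hypothetical 5x5 matrix of SAC scores (1 = extreme risk ... 4 = routine).
SAC = [
    [4, 4, 4, 3, 2],  # rare
    [4, 4, 3, 2, 1],  # unlikely
    [4, 3, 2, 1, 1],  # possible
    [3, 2, 1, 1, 1],  # likely
    [2, 1, 1, 1, 1],  # frequent
]

def sac_score(impact, likelihood):
    """Return the Severity Assessment Code for an incident."""
    return SAC[LIKELIHOOD.index(likelihood)][IMPACT.index(impact)]

# A minor, rare event is handled through routine procedures (SAC 4);
# a severe, frequent event requires immediate executive escalation (SAC 1).
print(sac_score("minor", "rare"), sac_score("severe", "frequent"))
```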
Incident data were then extracted for analysis by the research team after all participants had completed 12-months follow-up. Medications involved in incidents were classified retrospectively by researchers using the WHO Anatomical Therapeutic Chemical (ATC) classification system at the third level (therapeutic/pharmacological subgroup) [34], based on the information entered by the reporting RACF staff member.
Covariates
Baseline demographic data included age, gender, RACF location, and length of stay at the RACF. Medication data collected included number of charted medications, and number of regularly charted daily administration times. Comorbidity data were used to calculate Charlson Comorbidity Index [35], and frailty using the 7-item FRAIL-NH scale [36]. Medication incidents for each resident for the 12 months prior to study recruitment were also collected from the risk reporting software (12-month pre-rate). The 12-month pre-rate and any baseline demographics demonstrating significant differences between arms (p < 0.1) were included as covariates in analyses.
Analysis
Participant, incident, and medication characteristics were reported using descriptive statistics. Negative binomial regression was used to conduct intention-to-treat analysis for the associations between the intervention and medication incidents. In addition, incident rates were compared for the 12 months pre- and post-study entry within each study arm. The results were expressed in incidents per 100 resident-years and associations were reported using IRRs with 95% CIs. This method considered that each resident contributed different lengths of follow-up time. Resident time contributed to the study was calculated taking into consideration date of entry to the RACF (pre-study entry period), date of death (post-study entry period), and days spent in hospital (both periods). RACF was included in models as a random effect to account for clustering. Two sets of per-protocol analyses were undertaken, firstly, only including residents in the intervention arm with at least one simplification recommendation and, secondly, only including residents in the intervention arm with at least one simplification recommendation implemented. We also conducted an additional sensitivity analysis by only including residents with at least two or more medication administration times at baseline. Analyses were undertaken using SAS version 9.4 (SAS Institute, Cary, NC, USA) and SPSS version 25.0 (IBM, Armonk, NY, USA), with p < 0.05 considered statistically significant.
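The rate expression used here can be reproduced with a crude (unadjusted) back-of-the-envelope calculation: rates per 100 resident-years and the ratio between them, with a standard Poisson-based confidence interval on the log scale. Note the assumptions: the incident counts below are the reported 72 vs. 76, but the person-time denominators are chosen only so the crude rates approximate the reported 95 and 66 per 100 resident-years; the trial's actual analysis used negative binomial regression with RACF as a random effect.

```python
import math

def rate_per_100(events, person_years):
    """Incident rate per 100 resident-years."""
    return 100 * events / person_years

def crude_irr(e1, py1, e0, py0, z=1.96):
    """Unadjusted incident rate ratio with a Poisson-based 95% CI
    using the usual standard error of log(IRR)."""
    irr = (e1 / py1) / (e0 / py0)
    se = math.sqrt(1 / e1 + 1 / e0)  # SE of log(IRR)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Assumed person-time so crude rates match the reported 95 and 66 per 100
irr, lo, hi = crude_irr(72, 76.0, 76, 115.0)
print(f"rates: {rate_per_100(72, 76.0):.0f} vs {rate_per_100(76, 115.0):.0f} "
      f"per 100 resident-years; IRR {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The crude IRR differs from the reported adjusted IRR of 1.13 because the published estimate additionally adjusts for pre-study incident rates and clustering.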
Demographics
There were 99 residents in the four intervention RACFs and 143 residents in the comparison RACFs. Follow-up data were available for 241 residents (Table 1): one intervention arm resident withdrew from the trial after randomization and received no simplification recommendations ( Figure 1). Overall, 162 residents were alive and followed up at 12 months (intervention arm = 69; comparison arm = 93). Residents in the comparison arm were more likely to be female, live in an urban area and have a longer duration of stay. Comorbidity scores and the number of medications charted for regular administration at baseline were similar in both groups.
Number and Type of Medication Incidents during Follow-up
There were 148 medication incidents reported for 31% of residents during the 12-month follow-up (Table 2). This included 72 incidents among 34 residents in the intervention arm (34%) and 76 incidents among 40 residents (28%) in the comparison group (range 0-7 per resident). Incident rates per facility ranged from 16 to 165 incidents per 100 person-years. In total, 126 medication incidents (85.1%) were medication administration incidents. A severity score was assigned for 145 incidents, of which 137 (94.5%) had an SAC of 4, and eight (5.5%) received an SAC of 3 (Supplementary Table S1). No medication incidents resulted in hospitalization.
Number and Type of Medication Incidents during Follow-Up
There were 148 medication incidents reported for 31% of residents during the 12-month follow-up (Table 2). This included 72 incidents among 34 residents in the intervention arm (34%) and 76 incidents among 40 residents (28%) in the comparison group (range 0-7 per resident). Incident rates per facility ranged from 16 to 165 incidents per 100 person-years. In total, 126 medication incidents (85.1%) were medication administration incidents. A severity score was assigned for 145 incidents, of which 137 (94.5%) had an SAC of 4, and eight (5.5%) received an SAC of 3 (Supplementary Table S1). No medication incidents resulted in hospitalization. The specific medications involved in the incident were documented for 76 of the 148 incidents. The most commonly implicated medications according to ATC code were antithrombotic agents (n = 11) and other analgesics, namely paracetamol (acetaminophen) (n = 11) (Supplementary Table S2). The incident was attributed to a Drug of Dependence or Addiction (DDA) for 54 incidents (36.4%); however, the specific agent involved was not named. Incidents most commonly involved oral medications (55.4%) and transdermal (35.8%) preparations. Almost all incidents involved regularly administered medications (89.9%).
Medication Incident Rates
Over 12 months, mean medication incident rates were 95 and 66 per 100 patient-years in the intervention group and the comparison group, respectively (adjusted IRR 1.13; 95% CI 0.53-2.38) (Table 3). During the 12 months preceding the study, residents in the intervention group had more medication incidents than residents in the comparison arm (161 vs. 97 incidents per 100 person-years, IRR 1.65; 95% CI 1.18-2.31). In the intervention group, the medication incident rate was significantly reduced during the 12-month follow-up in comparison to the rate observed in the 12 months before study entry (95 incidents per 100 person-years vs. 161 incidents per 100 person-years, IRR 0.56; 95% CI 0.38-0.80). In the comparison arm, there was a nearly one-third reduction in medication incidents (64 incidents per 100 person-years vs. 97 incidents per 100 person-years, IRR 0.67, 95% CI 0.50-0.90) (Figure 2). Incident rates for individual facilities are presented in Figure 3.
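The IRRs reported here come from adjusted negative binomial models. As an unadjusted illustration only (the counts below are arbitrary and not the SIMPLER data), a crude IRR with a Wald-type 95% CI can be computed from event counts and person-time via log(IRR) ± 1.96·√(1/a + 1/b):

```python
import math

def irr_with_ci(events_a, time_a, events_b, time_b, z=1.96):
    # Unadjusted incident rate ratio (group A vs. group B) with a
    # Wald-type confidence interval on the log scale.
    irr = (events_a / time_a) / (events_b / time_b)
    se = math.sqrt(1.0 / events_a + 1.0 / events_b)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Illustrative counts: 72 incidents over 76 resident-years vs.
# 76 incidents over 115 resident-years.
irr, lo, hi = irr_with_ci(72, 76.0, 76, 115.0)
```

This crude calculation ignores clustering and covariate adjustment, so it will not reproduce the adjusted estimates in Table 3.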
Per Protocol Analysis
There were no significant differences between the intervention and comparison groups when only intervention participants with at least one recommendation (n = 62) were included (adjusted IRR 1.20; 95% CI 0.55-2.63), or intervention participants with at least one implemented recommendation (n = 46) were included (adjusted IRR 1.08; 95% CI 0.43-2.70). In sensitivity analyses only including residents with at least two daily administration times (n = 235), no significant differences between study arms were observed (adjusted IRR 1.09, 95% CI 0.61-1.95). Table 3. Medication incidents over the 12-month follow-up.
Discussion
The SIMPLER study is the first RCT to investigate the impact of medication regimen simplification on medication incidents in RACFs. A decline in medication incidents over time was observed in both the intervention and comparison arms. However, medication incident rates were not significantly different among residents in the intervention and comparison arm over 12 months of follow-up.
There are a number of mechanisms that may explain the decline in incidents. Our results were consistent with less complex medication regimens being associated with lower incident rates [27]. Although not statistically significant, there was a 30% lower incident rate in favour of the intervention group after eight months of follow-up. While there was considerable facility-to-facility variability in incident reports in both intervention and comparison RACFs, there was a downward trend across all four intervention RACFs. The lack of significance may be attributable to insufficient statistical power due to a limited number of clusters, participants, and incidents. We believe the decline in medication incidents in both arms was unlikely to be attributable to the Hawthorne effect arising from nurses being aware of the SIMPLER trial, as it is unlikely that nurses responsible for medication administration would recall which residents participated in the trial and adjust their behaviour over a 12-month period. This is supported by previous research reporting limited evidence for the Hawthorne effect in health professional education research [37]. All intervention and comparison RACFs had a uniform Client Incident Reporting Policy, however, facility-to-facility variation may have arisen due to the complex nature of medication incident reporting [11].
A previous study of embedding a pharmacist within a RACF for six months in Canberra, Australia resulted in an apparent increase in medication incidents [23]. This may be because the pharmacist increased detection and reporting of incidents, either directly themselves or by nurses and GPs involved in the medication review process. However, we observed a decline rather than increase in medication incidents in both the intervention and comparison arms of the SIMPLER study. We had anticipated a small increase in incidents may have occurred immediately after a medication regimen simplification intervention due to changes to dose times and formulations. However, we did not find any evidence for this.
Approximately one-third of study participants experienced a medication incident, which is slightly higher than the 16-27% of residents in a systematic review of 11 studies [11]. However, by extrapolating based on the average number of daily medication administration times and total resident-days of follow-up, we estimate that the 148 medication incidents in our study translate to an error in less than 0.1% of medication administrations. Other studies have reported considerably higher rates of medication incidents: two-thirds of participants in Barber's study experienced an error [16], while Szczepura et al. reported 90% of residents were exposed to medication administration errors over a three-month observation period [17]. However, these studies identified errors prospectively rather than through routine reporting. Barber et al. also reported 39% of residents had prescribing errors and 22% had administration errors [16]. In our study, the majority of incidents were administration errors; there were no prescribing errors and few dispensing errors reported. This finding likely reflects that incidents were predominantly reported by nurses who were responsible for medication administration rather than prescribing. In our study, medication incidents were assessed by nursing staff to be of low-moderate severity, with no incidents scoring "extreme" or "high" SAC codes. This is in line with most medication incidents reported in other studies not having been associated with major adverse events [7,15].
The most frequently implicated medications were those affecting the central nervous system, alimentary tract and metabolism, cardiovascular system, and blood and blood forming organs. This finding is consistent with previous studies. In a cross-sectional study of medication incidents in US RACFs, Desai et al. reported that analgesics and anxiolytics were implicated in 20% of incidents, followed by antidiabetics and anticoagulants [38]. In a systematic review of 91 studies across healthcare settings, common medications implicated in incidents included nervous system, gastrointestinal, blood and cardiovascular system, and anti-infective agents [8]. Over one-third of incidents involved a DDA administration, which must be overseen and documented by two staff members; having a second staff member oversee DDA administration may reduce the risk of resident harm but increase the likelihood of error detection and reporting. Over half (55%) of incidents in our study involved oral medications; however, oral medications comprised 75% of all regularly administered medications [39]. Transdermal formulations accounted for 36% of all incidents, which was consistent with research suggesting errors with transdermal administration are common and can occur at all stages including preparation, application, removal, monitoring and disposal [40]. Lampert et al. suggested a lack of knowledge and awareness regarding correct administration procedures is a root cause of medication incidents related to transdermal administration [40]. The likelihood of error may be increased because not all transdermal formulations have a consistent dosing interval.
A time-and-motion study conducted in conjunction with the SIMPLER randomized controlled trial found nurses take an average of 5 min per resident per round to administer medications [41]. Neither the time-and-motion study nor the present study investigated the time needed to safely administer different dose forms. However, we have estimated, by extrapolating the reduction in average number of administration times at the 4-month follow-up across a 100-bed RACF, that the intervention would generate savings of 85 h of staff time per month [30]. This represents time that could be directed to other care, quality, and safety-related activities, including enhanced medication management activities. Although regimen simplification was not associated with a significant reduction in medication incidents in the intervention compared to the comparison group, complex medication regimens are burdensome for residents and staff. For this reason, medication regimen simplification remains a potentially important and worthwhile activity in the RACF setting.
Strengths and Limitations
Our trial has several strengths. It was the first RCT on this topic. We used a matched pair cluster randomized design to avoid potential contamination associated with the same nurses and GPs providing care to residents in the intervention and comparison arms. The simplification intervention was resident-centered and consistent with Australia's Aged Care Quality Standards that recognize that residents are important contributors in decisions about care they receive. Participants were followed over 12-months with no unexplained loss to follow-up. Incident rates were calculated in terms of person-years to account for varying lengths of follow-up. The intervention was implemented using a validated tool developed by a multidisciplinary team [32]. Incidents were also reported in both arms using the same standardized risk reporting system.
Our study also has several limitations. Our data likely represent an underestimate of the true numbers of medication incidents due to underreporting, which is a known issue with retrospectively evaluating incidents. No prescribing incidents or adverse drug reactions were reported in our study. This is likely to reflect a system-level reporting issue rather than the absence of these incidents in practice. Incident reporting systems for care organizations differ between and within countries. In Australia there is no national standard reporting system for medication incidents. Instead, aged care provider organizations develop and follow their own policies, with guidance provided by accrediting bodies regarding the recording and reporting of incidents. The Guiding Principles for Medication Management in Residential Aged Care (2012) published by the Australian Government Department of Health and Ageing also briefly outlines each aged care provider organization's responsibility in terms of incident and error reporting [42]. Medication incidents are typically tabled and discussed at each aged care provider organization's multidisciplinary medication advisory committees (MACs) [43]. Medication incident reporting was undertaken by RACF staff as part of routine care rather than by trained study personnel. Furthermore, due to the multi-site nature of the SIMPLER RCT, there were multiple RACF staff involved in assigning an SAC for each incident, which may have contributed to intra-facility variation in reporting. The participating RACFs used hard copy medication charts. Research conducted in the hospital setting suggests different types of medication incidents may occur when electronic medication management systems are used instead of paper-based medication management systems [44].
This research identified that introduction of electronic prescribing and administration systems was associated with an increase in specific errors (e.g., wrong route, wrong formulation) but mitigation of other errors (e.g., wrong dose due to poor handwriting) [44]. Electronic charts may also be more difficult to edit and annotate than paper-based charts, with possible discrepancies due to delay or failure to update the electronic medication administration chart after a paper-based prescription is issued [45]. This may mean our findings are not fully generalizable to settings in which electronic medication management systems are used. Similarly, our findings are not generalizable to recipients of community-based home care services where medication administration is not typically undertaken by nurses. However, we have piloted a similar medication simplification intervention among recipients of community-based home care services [46]. Due to the cluster randomized design, there were different numbers of participating residents in the intervention and comparison arms and several baseline differences between arms. We adjusted our analyses for these baseline differences where possible. Participants in the intervention arm had shorter duration of stay in RACFs, though the median was still over two years. It is possible that more recently admitted residents were more prone to medication changes and, therefore, to medication related incidents. There could have been unmeasured differences between intervention and comparison RACFs regarding nursing (e.g., experience, nursing time, reporting rates) or management practices. In addition, over 35% of "administration incidents" were not sub-categorized according to type, and free text descriptions of incidents were not collected.
Conclusions
Medication incident rates were not significantly different among residents who received and did not receive a one-off structured medication regimen simplification intervention. Although the intervention did not result in a significant reduction in incidents, the 30% lower incident rate in the intervention group after eight months suggests regimen simplification may still be worth investigating as a potential strategy to reduce incidents. Given that complex medication regimens are burdensome for residents and staff, it is possible that the benefits of simplification may extend beyond the impact on medication incident rates.
Supplementary Materials: The following are available online at https://www.mdpi.com/2077-0383/10/5/1104/s1, Table S1: Severity assessment code (SAC) matrix used to classify medication incidents at the participating residential aged care facilities, Table S2.
Informed Consent Statement: Written informed consent was obtained from participants or from their guardian, next of kin, or significant other when the resident was unable to provide written informed consent to participate.
Data Availability Statement: Participants of this study did not agree for their data to be shared publicly, so supporting data is not available.
Dipole Analysis of the Dielectric Function of Colour Dispersive Materials: Application to Monoclinic Ga$_2$O$_3$
We apply a generalized model for the determination and analysis of the dielectric function of optically anisotropic materials with colour dispersion to phonon modes and show that it can also be generalized to excitonic polarizabilities and electronic band-band transitions. We take into account that the tensor components of the dielectric function within the Cartesian coordinate system are not independent of each other but are rather projections of the polarization of dipoles oscillating along directions defined by the non-Cartesian crystal symmetry and polarizability. The dielectric function is then composed of a series of oscillators pointing in different directions. The application of this model is demonstrated exemplarily for monoclinic (β-phase) Ga$_2$O$_3$ bulk single crystals. Using this model, we are able to relate electronic transitions observed in the dielectric function to atomic bond directions and orbitals in the real-space crystal structure. For thin films revealing rotational domains, we show that the optical biaxiality is reduced to an effectively uniaxial optical response.
I. INTRODUCTION
For the understanding, design and fabrication of optoelectronic devices, the optical properties of the involved materials have to be known. A well established and powerful method for the determination of these properties is spectroscopic ellipsometry 1,2 . We concentrate here on the dielectric function (DF), which is usually obtained by means of numerical model analysis of the experimental ellipsometry data and then often described by a series of line-shape model dielectric functions in order to deduce phonon properties, free charge carrier concentrations and the properties of electronic transitions (e.g. Refs. 2 and 3). For isotropic materials this method is well established. However, in recent years, optically anisotropic materials such as Ga 2 O 3 4-7 , CdWO 4 8 and lutetium oxyorthosilicate 9 have come into the focus of research since they are promising candidates for optoelectronic applications in the UV spectral range. However, the determination of their optical and electronic properties is more challenging compared to isotropic materials since these properties depend on the crystal orientation. The dielectric function is represented by a (frequency-dependent) tensor and the determination of its components requires a series of measurements for various crystal orientations.
For (non-chiral) optically anisotropic materials, the dielectric function is in general a symmetric tensor consisting of six independent components 10 , i.e.

$$\varepsilon = \begin{pmatrix} \varepsilon_{xx} & \varepsilon_{xy} & \varepsilon_{xz} \\ \varepsilon_{xy} & \varepsilon_{yy} & \varepsilon_{yz} \\ \varepsilon_{xz} & \varepsilon_{yz} & \varepsilon_{zz} \end{pmatrix}. \quad (1)$$

Due to its symmetry, this tensor can be diagonalized independently for the real and imaginary part at each wavelength separately. In the transparent spectral range, i.e. for vanishing imaginary part, the diagonal elements are the semi-principal axes of the ellipsoid of wave normals and are often called dielectric axes. For materials with monoclinic or triclinic crystal structure, the orientation of the dielectric axes depends on the wavelength, which is often called colour dispersion. In the spectral range with non-vanishing absorption, the situation becomes even more complex. Due to the independent diagonalizability of the tensor (1) for the real and imaginary part, the corresponding dielectric axes in general do not coincide with each other. Thus, in general, four dielectric axes are present. For these classes of materials only few reports on the determination of the full dielectric tensor exist, e.g. for α-PTCDA 11 , pentacene 12 , BiFeO 3 13 , CdWO 4 8 , K 2 Cr 2 O 7 14 , CuSO 4 · 5H 2 O 15 and effective anisotropic materials such as slanted columnar films 16 . Most of these works are limited to the determination of the line shape of the dielectric function, treating each tensor component of the DF independently of the others. From a technical point of view, this can result in large correlations between the individual tensor elements, causing non-physical results. More importantly, the deeper nature of the polarizabilities in the material, like phonons, excitons, and electronic band-band transitions, cannot be explored this way. Thus, lineshape model dielectric functions (MDF) representing the oscillator properties like energy, amplitude, broadening, and even oscillation direction in a meaningful and physically correct way have to be used.
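The statement that the real and imaginary parts diagonalize along different axes can be illustrated numerically. The sketch below uses purely illustrative parameters (not Ga 2 O 3 data): two damped Lorentz oscillators with different dipole directions in the x-z plane are summed, and Re ε and Im ε are diagonalized separately; the orientations of their major dielectric axes do not coincide.

```python
import numpy as np

def lorentz(omega, amp, omega0, gamma):
    # Damped Lorentz oscillator (complex scalar response).
    return amp * omega0**2 / (omega0**2 - omega**2 - 1j * gamma * omega)

def inplane(eps_scalar, phi):
    # 2x2 (x-z block) projection of a dipole at angle phi from the x-axis.
    d = np.array([np.cos(phi), np.sin(phi)])
    return eps_scalar * np.outer(d, d)

omega = 500.0  # probe frequency between the two resonances (illustrative)
eps_xz = (np.eye(2)
          + inplane(lorentz(omega, 1.0, 450.0, 30.0), np.deg2rad(20.0))
          + inplane(lorentz(omega, 1.0, 550.0, 30.0), np.deg2rad(70.0)))

# Diagonalize real and imaginary parts separately and compare the
# orientations (mod 180 deg) of their major-axis eigenvectors.
_, v_re = np.linalg.eigh(eps_xz.real)
_, v_im = np.linalg.eigh(eps_xz.imag)
angle_re = np.degrees(np.arctan2(v_re[1, -1], v_re[0, -1])) % 180.0
angle_im = np.degrees(np.arctan2(v_im[1, -1], v_im[0, -1])) % 180.0
```

With these parameters the two axis orientations differ by several tens of degrees, illustrating the "four dielectric axes" situation described above.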
Facing this, Dressel et al. 12 proposed an approach assuming that the dipole moments are aligned to three polarization axes which should coincide with the crystallographic axes. Taking this model into account, the dielectric tensor is fully described by its three independent principal elements and the known angles between the crystallographic axes. However, as a consequence of this approach the principal axes of the indicatrix (related to the real part of ε) coincide with those of the conductivity tensor (related to the imaginary part of ε), which is not generally valid as shown for instance for CdWO 4 8 and Ga 2 O 3 7 . To overcome this problem, Höfer et al. 14,15 used for the infrared spectral range a model, developed earlier by Emslie et al. 17 , which consists of a sum of damped Lorentz oscillators individually aligned to the axes of their respective dipole moments. For phonons, these axes are related to the atomic elongations and thus to some extent to the crystallographic axes. Further, their dissipative spectral range is usually narrow. Thus the question arises if such a model can also be applied to spectrally widespread excitations like electronic band-band transitions, which consist of numbers of individual dipoles whose axes are connected to overlapping atomic orbitals of various symmetry and therefore do not necessarily coincide with crystallographic directions. Further, the density of states (DOS) of the electronic band structure is distributed within a wide energy range in a complex manner, causing non-symmetric line-shapes of the imaginary part of the dielectric function which spectrally overlap for different contributions and directions.
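A single oscillator of the type used in the Emslie/Höfer approach can be sketched as follows (all parameter values illustrative, not fitted β-Ga 2 O 3 phonon parameters):

```python
import numpy as np

def lorentz(omega, amp, omega0, gamma):
    # Complex dielectric contribution of a damped Lorentz oscillator with
    # amplitude amp, resonance frequency omega0 and broadening gamma.
    return amp * omega0**2 / (omega0**2 - omega**2 - 1j * gamma * omega)

omega = np.linspace(200.0, 1300.0, 1101)  # wavenumber grid in cm^-1
eps = 1.0 + lorentz(omega, amp=0.8, omega0=700.0, gamma=15.0)
```

The imaginary part peaks near the resonance and the real part is enhanced below it, as expected for a phonon-like excitation with narrow dissipative range.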
Here we demonstrate that the sketched approach is generally valid for all kinds of excitations. We demonstrate this exemplarily for monoclinic Ga 2 O 3 (β-phase) single crystals and thin films in the spectral range from the infrared to the vacuum ultraviolet. We show that this model provides deep insight into the electronic properties of the material: comparing the directions of the electronic polarizabilities, obtained by modelling the experimental ellipsometry data using lineshape MDFs, to the real-space atomic arrangement in the crystal, and considering the theoretically calculated electron density distribution as well as the orbital-resolved DOS, allows us to assign the observed transitions to individual orbitals.
The paper is organized as follows: in Sec. II, we first discuss the dielectric tensor for all crystal symmetries and its composition. After that we demonstrate its applicability to the case of β-Ga 2 O 3 single crystals in the infrared and ultraviolet spectral range. Finally, we show by means of a practically relevant β-Ga 2 O 3 thin film, which exhibits rotation domains, that the approach of using directed transitions explains the effective uniaxial properties of the film and enhances the sensitivity to the out-of-plane component of the dielectric tensor.
II. DIELECTRIC FUNCTION
The optical response of a material is determined in first order by dipole excitations, e.g. optical phonons, electronic band-band transitions or excitons, which in sum are represented by the dielectric function. For isotropic materials, the corresponding dipole moment or polarization direction of each excitation is macroscopically equally distributed in all spatial directions, resulting in an isotropic dielectric function, i.e. it is a scalar written as

$$\varepsilon = 1 + \sum_{i=1}^{N} \varepsilon'_i, \quad (2)$$

with N being the number of excitations/oscillators. The situation changes for materials with a crystal structure symmetry lower than the cubic one. In this case the excitations generally differ between the crystallographic directions in energy, amplitude, broadening, and even in the spatial direction of their dipole moment, and thus the DF is a tensor (Eq. (1)). Let ε′ i be the dielectric response of the i-th excitation and let the coordinate system be chosen (without losing generality) in such a way that the polarization direction is along the x-axis. The only non-zero component is then ε′ i,xx , i.e. ε′ i,mn = 0 for all other components. However, the polarization direction of the excitation and the experimental coordinate system do not coincide with each other in general and a coordinate transformation has to be performed, independently for each transition. The entire dielectric tensor then can be expressed by

$$\varepsilon = \mathbb{1} + \sum_{i=1}^{N} R(\varphi_i, \theta_i)\, \varepsilon'_i\, R^{-1}(\varphi_i, \theta_i), \quad (3)$$

with φ i and θ i being Euler angles, which are in general different for each excitation, and R being the rotation matrix. The advantage of this expression is that the components of the resultant dielectric tensor in the Cartesian coordinate system are not independent of each other but rather composed of the respective projected part of the excitation's line-shape function according to the directions of their individual dipole moment.
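A minimal numerical sketch of Eq. (3), with all oscillator values illustrative: since each oscillator's tensor has only an xx entry in its own frame, the rotated contribution R ε′ R⁻¹ reduces to ε′ (d ⊗ d) for a unit dipole direction d.

```python
import numpy as np

def oscillator_tensor(eps_scalar, direction):
    # Lab-frame contribution of one oscillator polarized along the unit
    # vector d: R eps' R^-1 with eps' = diag(eps_scalar, 0, 0), which is
    # equivalent to eps_scalar * (d outer d).
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    return eps_scalar * np.outer(d, d)

# Illustrative oscillators: one along y, one in the x-z plane, 30 deg from x.
phi = np.deg2rad(30.0)
eps = (np.eye(3)
       + oscillator_tensor(2.0 + 0.5j, [0.0, 1.0, 0.0])
       + oscillator_tensor(1.0 + 0.2j, [np.cos(phi), 0.0, np.sin(phi)]))
```

Note that the resulting tensor automatically has the monoclinic pattern: the xy and yz entries vanish while the in-plane dipole generates a non-zero xz element.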
For the entire dielectric function it follows that, due to the finite broadening of each excitation and by considering the Kramers-Kronig relation, the orientations of the principal tensor axes of the real and imaginary parts differ from each other, as is well known and observed in experiments, e.g. for CdWO 4 8 and Ga 2 O 3 7 . Equation (3) represents the general case, which has to be used for triclinic crystals, and can be simplified depending on the crystal symmetry. Crystals with monoclinic structure exhibit one symmetry axis, representing a C 2 rotation axis or the normal of a mirror plane (or both), which we identify in the following with the y-direction. The plane perpendicular to y, the x-z-plane, reveals no symmetry which would define a Cartesian coordinate system preferentially. Therefore, from symmetry arguments, considering dipoles polarized either along y or in the x-z-plane, one can simplify Eq. (3) to

$$\varepsilon = \mathbb{1} + \sum_{i=1}^{N_y} \varepsilon_{i,y}\, \hat{e}_y \otimes \hat{e}_y + \sum_{j=1}^{N_{xz}} R(\varphi_j)\, \varepsilon'_{j,xz}\, R^{-1}(\varphi_j), \quad (4)$$

with ε i,y and ε′ j,xz being the contributions of the respective directions. N y and N xz represent the numbers of excitations with the corresponding polarization directions, and as φ we define the angle between the polarization direction and the x-axis within the x-z-plane. This leads to the well known form of the dielectric tensor given by

$$\varepsilon = \begin{pmatrix} \varepsilon_{xx} & 0 & \varepsilon_{xz} \\ 0 & \varepsilon_{yy} & 0 \\ \varepsilon_{xz} & 0 & \varepsilon_{zz} \end{pmatrix}. \quad (5)$$

A further simplification can be made for orthorhombic materials containing three orthogonal twofold rotation symmetry axes, leading to a dielectric function tensor which contains only diagonal elements. In the case of uniaxial materials, e.g. those with a hexagonal symmetry, ε i,x = ε i,y and N x = N y holds. For isotropic materials, the number of oscillators is the same in all three directions and therefore the dielectric tensor reduces to the scalar given by Eq. (2). For practical application, Eq. (3) has to be further modified. The real and imaginary parts of the dielectric function are connected with each other by the Kramers-Kronig relation.
Contributions of excitations at energies higher than the investigated spectral range to the real part of the DF have to be considered. These contributions are usually described by a pole function. In the case presented here, this means that the identity in Eq. (3) has to be replaced by a real-valued tensor of the form given by the corresponding crystal structure, where each component is represented by a pole function.
III. EXPERIMENTAL
By using the approach presented in Sec. II and lineshape MDFs, the parametrised dielectric function of β-Ga 2 O 3 bulk single crystals and thin films was determined from the mid-infrared up to the vacuum-ultraviolet spectral range by means of generalized spectroscopic ellipsometry. Ga 2 O 3 crystallizes at ambient conditions in the monoclinic crystal structure, the so-called β-phase (Fig. 4). The angle between the non-orthogonal a- and c-axes is β = 103.7° 18 , resulting in a non-vanishing off-diagonal element of the dielectric tensor within the Cartesian coordinate system 7,19 . We investigated two single-side polished bulk single crystals from Tamura Corporation with (010) and (201) orientation, allowing access to all components of the dielectric tensor. X-ray diffraction (XRD) measurements do not reveal any hints for the presence of rotation domains, twins or in-plane domains. More details can be found in Ref. 7. The thin film was deposited on a c-plane oriented sapphire substrate by means of pulsed laser deposition (PLD) at T ≈ 730 °C. After deposition, the sample was annealed for 5 min at T ≈ 730 °C and an oxygen partial pressure of p O 2 = 800 mbar. XRD measurements confirm the monoclinic crystal structure of the film and the surface orientation was determined to be (201). In contrast to the bulk single crystals, six rotation domains are observed, which are rotated against each other by an angle of 60°. 20 In contrast to the bulk single crystals, which reveal a smooth surface without atomic steps, the surface roughness of the thin film was determined to be R s ≈ 5 nm. In spectroscopic ellipsometry, the change of the polarization state of light after interaction with a sample is determined. In the general case, this is expressed by means of the 4 × 4 Mueller matrix (MM, M), which connects the Stokes vectors of the incident (S in ) and reflected (S ref ) light by S ref = M S in .
In the special case where no energy transfer between orthogonal polarization eigenmodes of the probe light takes place, as for isotropic samples or optically uniaxial samples with the optical axis pointing along the surface normal (as is the case for the thin film, cf. Sec. V), the change of the polarization state is expressed by the ratio of the complex reflection coefficients, i.e. ρ = r p /r s . The index denotes light polarized parallel (p) or perpendicular (s), respectively, to the plane of incidence, which is spanned by the surface normal and the propagation direction of the light beam.
For the determination of the DF, the experimental data are analyzed by transfer-matrix calculations considering a layer stack model. For the bulk single crystals, the model consists of a semi-infinite substrate (Ga 2 O 3 itself) and a surface layer accounting for some roughness or contaminations. For the infrared spectral range the surface layer can be neglected. For the thin film, the model consists of a c-oriented sapphire substrate, the Ga 2 O 3 thin film layer and the surface layer. The dielectric function of sapphire was taken from the literature 21 . The surface layer was modelled using an effective medium approximation (EMA) 22 mixing the DF of Ga 2 O 3 and void by 50% : 50% for the bulk single crystals. 7 For the thin film this fraction was chosen as a fit parameter, and the best match between experimental and calculated spectra was obtained for 80% : 20%. In the following we choose our coordinate system such that ê x is parallel to the a-axis, ê y is parallel to the b-axis, and ê z = ê x × ê y .
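The surface-layer mixing described above can be sketched numerically. The following Python snippet is a minimal illustration assuming the common Bruggeman form of the effective medium approximation with spherical inclusions; the specific EMA variant of Ref. 22 and the permittivity values used here are assumptions, not values taken from this work.

```python
import numpy as np

def bruggeman_ema(eps_a, eps_b, f_a):
    """Effective permittivity of a two-phase Bruggeman mixture (assumption:
    spherical inclusions).

    Solves f_a*(eps_a - e)/(eps_a + 2e) + (1 - f_a)*(eps_b - e)/(eps_b + 2e) = 0,
    which is the quadratic 2e^2 - B*e - eps_a*eps_b = 0 with
    B = (3*f_a - 1)*eps_a + (2 - 3*f_a)*eps_b; for lossy media the root
    with the larger (non-negative) imaginary part is the physical one.
    """
    B = (3 * f_a - 1) * eps_a + (2 - 3 * f_a) * eps_b
    r = np.sqrt(B * B + 8 * eps_a * eps_b + 0j)
    e1, e2 = (B + r) / 4, (B - r) / 4
    return e1 if e1.imag >= e2.imag else e2

# Hypothetical values: eps ~ 3.7 for Ga2O3 in its transparent range, 1.0 for void
eps_rough = bruggeman_ema(3.7 + 0j, 1.0 + 0j, 0.5)   # 50% : 50% surface layer
```

The quadratic solution reduces to the pure-phase permittivity in the limits f → 0 and f → 1, which is a quick consistency check on the implementation.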
IV. BULK SINGLE CRYSTALS
A. Infrared spectral range

The MM in the infrared spectral range (250 − 1300 cm −1 (31 − 161 meV)) was measured at angles of incidence of 30°, 50° and 70° for different in-plane rotations, i.e. rotating the crystal around its surface normal by 30°, 60° and 90°. For selected orientations the recorded spectra are shown in Fig. 1. The non-vanishing block-off-diagonal elements of the MM demonstrate the optically anisotropic character of the sample.
The dielectric function in the infrared spectral range is determined by phonon and free charge carrier oscillations. Since the bulk single crystals are not intentionally doped, the latter contribution can be neglected for the spectral range investigated here. Therefore, only phonons have to be considered, and their contribution is described by Lorentzian oscillators: 23

χ(E) = A/(E₀² − E² − iγE),

FIG. 1. Experimental (symbols) and calculated (lines) spectra of the MM elements of a β-Ga 2 O 3 bulk single crystal for an angle of incidence of 70°. The corresponding orientation of the crystal is given by the Euler angles on top of each column in the yzx notation.
with A, E 0 and γ being the amplitude, energy and broadening of the phonon mode, respectively. The calculated MM spectra are shown in Fig. 1 as red solid lines, yielding good agreement with the experimental ones. Note that a similarly good match is obtained by using a Kramers-Kronig consistent numerical analysis and considering the four components of the DF (Eq. (5)) to be independent of each other.
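The anisotropic phonon contribution can be sketched by projecting each Lorentzian oscillator onto its dipole direction, which reproduces the coupling between the xx, zz and xz tensor components of the monoclinic crystal. This is a minimal Python illustration, not the paper's exact MDF of Eq. (5); the mode parameters below are invented for demonstration.

```python
import numpy as np

def lorentz(E, A, E0, gamma):
    """Lorentzian oscillator A / (E0^2 - E^2 - i*gamma*E)."""
    return A / (E0**2 - E**2 - 1j * gamma * E)

def phonon_df_tensor(E, modes, eps_inf):
    """Dielectric tensor at photon energy E (eV).

    Each mode is (A, E0, gamma, phi): amplitude, resonance energy,
    broadening, and dipole angle phi (rad) to the x (a) axis within the
    x-z (a-c) plane, as appropriate for Bu-symmetry modes.
    """
    eps = np.array(eps_inf, dtype=complex)
    for A, E0, gamma, phi in modes:
        d = np.array([np.cos(phi), 0.0, np.sin(phi)])   # unit dipole vector
        eps = eps + lorentz(E, A, E0, gamma) * np.outer(d, d)
    return eps

# One hypothetical Bu-like mode at 50 meV with a dipole 30 degrees off the a-axis
eps = phonon_df_tensor(0.04, [(1e-3, 0.05, 0.005, np.deg2rad(30))], 3.6 * np.eye(3))
```

Because the dipole lies in the x-z plane, the resulting tensor acquires a non-zero xz element while the b-axis component stays decoupled, mirroring the monoclinic symmetry discussed in Sec. II.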
In the investigated spectral range, 9 of the in total 12 infrared-active optical phonon modes are observable. Their properties are summarized in Tab. I. For the modes which have a dipole moment in the a-c plane (B u symmetry), the polarization direction with respect to the a-axis is given by the angle φ, which was found to differ for each phonon mode. This is in agreement with the results recently reported by Schubert et al. 19 The phonon mode B u (4) was not observable in our experiment. This can be attributed to the weak sensitivity to this mode caused by its low amplitude, which is predicted by ab-initio calculations (see below), and to the pronounced noise caused by the low sensitivity of the detector of our setup in this spectral range. Further, for the mode B u (3) only the frequency is given, since the large noise in this spectral range and the probable spectral overlap with B u (4) prohibit the determination of its dipole direction.
For comparison we calculated the phonon modes by ab-initio calculations based on the B3LYP hybrid functional approach implemented in the CRYSTAL14 code 26 . Thereby we used the basis set of Pandey et al. 27 for gallium and of Valenzano et al. 28 for oxygen, which we slightly modified, and 150 k-points in the irreducible Brillouin zone. The truncation criteria defined by the CRYSTAL14 code, given by five tolerances set to 8, 8, 8, 8, and 16 in our calculations, were used for the Coulomb and exchange infinite sums. Further, we used an energy convergence tolerance of 10 −11 Hartree. All input parameters and calculation conditions can be found in Ref. 26. The calculated lattice parameters are a = 1.2336 nm, b = 0.3078 nm and c = 0.5864 nm, in reasonable agreement with those reported in the literature 18 . The corresponding phonon mode energies, oscillator strengths and dipole directions are also given in Tab. I and are in excellent agreement with those determined by ellipsometry. The excellent agreement is not restricted to the infrared-active phonon modes but is also obtained for the Raman-active modes 26 .
B. Ultraviolet spectral range
The numeric DF in the UV spectral range was recently reported by us, obtained by using a Kramers-Kronig consistent numerical analysis 7 . In order to extract the properties of the contributing electronic transitions, e.g. the energies and electronic orbitals involved, and to demonstrate the universal applicability of Eq. (3) for electronic transitions, we analysed the contribution of each transition to the entire DF by using line-shape model dielectric functions. Symmetry considerations and band structure properties 7 yield that the transitions are polarized either along the y-axis or within the x-z-plane. Thus the DF can be written as in Eq. (4) with a set of excitonic transitions and Gaussian oscillators. We have shown by density functional theory calculations combined with many-body perturbation theory, including quasiparticle and excitonic effects 7 , that the DF in the spectral range from the fundamental absorption edge up to some eV higher is dominated by excitonic correlation effects. Thus, several excitonic contributions have been considered in the modeling and were described by a model dielectric function developed by C. Tanguy for Wannier excitons, taking into account bound and unbound states. [29][30][31] The contribution of weakly pronounced band-band transitions was summarized by using a Gaussian oscillator. A further Gaussian oscillator was included to consider contributions of transitions at energies higher than the investigated spectral range due to their spectral broadening. These contributions, together with the pole function, were considered for each dielectric tensor component independently because they may originate from different transitions. The experimentally recorded and the calculated spectra of the MM elements are shown for selected orientations in Fig. 2, yielding good agreement.
The difference between the experimental and the calculated spectra for energies E > 7 eV was also observed by using the above mentioned numerical Kramers-Kronig consistent analysis and might be caused by the limitation of the used approach for the description of the surface layer. 7 This can be attributed to the fact that the sensitivity to this layer is strongly enhanced in this spectral range due to the enhanced absorption and therefore reduced penetration depth.
The parameters of the best-match MDF are summarized in Tab. II and III. We extracted an exciton binding energy of about E b X = 270 meV for all contributions. Note that we considered the same exciton binding energy for all excitonic transitions because of the strong correlation between the energy of the fundamental bound state and the corresponding binding energy.
The dispersion of the tensor elements for the entire investigated spectral range is shown in Fig. 3. The contributions of excitonic transitions to ε 2 are shown as red solid lines. The orientation of the corresponding dipole moments in the x-z-plane is indicated by the arrows in the inset. In agreement with our theoretical calculations and the numeric MDF, 7 the two energetically lowest transitions (labeled X 1 and X 2 ) are strongly polarized along the x- and z-direction, respectively. At higher energies, there are transitions polarized along the y-axis (b-axis) and within the x-z-plane (a-c-plane).
Based on the calculated charge distribution 33 and the atomic arrangement within the x-z-plane (a-c-plane), we assigned the dipole moments of the observed transitions to atomic bonds. Please note that the uncertainty in the experimentally determined dipole moment directions amounts to up to 10°, caused by the simplification due to the used model functions, which summarize spectrally over different individual transitions. As all these transitions reveal no contribution to the dielectric tensor component ε yy , only bonds located solely within the sub-planes of the x-z-plane (a-c-plane) are considered (cf. Fig. 4). It is found that all excitonic transitions but the first one, which appears to take place between oxygen atoms, are between differently coordinated gallium and oxygen atoms. In the following discussion we will use the nomenclature given by Geller 18 and label the tetrahedrally and octahedrally coordinated Ga atoms as Ga(I) and Ga(II), respectively, while the three different sites of the oxygen atoms are labeled O(I), O(II) and O(III) (cf. Fig. 4).
Band structure calculations reveal that the uppermost valence bands are dominated by oxygen p-orbitals, while the DOS of the lowest conduction bands is composed of almost equal contributions from Ga-s, O-s, and O-p orbitals. 5,33,34 Thus, dipole-allowed transitions can take place from O-p orbitals to Ga-s and O-s orbitals. It turns out that the states near the conduction band minimum are preferentially determined by the octahedrally coordinated Ga(II). 33 This is reflected by the assignment of the dipole directions to the atomic bonds in Fig. 4. It turns out that the transition X 2 , directed almost along x (a), involves O and Ga(II) and also reveals a high amplitude in the DF. Ga(II) is located between O(II) and O(III). But the dipole direction only fits the bond Ga(II)-O(III), so it seems that transitions to Ga(II) states in the conduction band only appear when O(III) is involved and are not possible involving O(II). This can be understood considering the coordination of the O atoms, which is higher (6 bonds) for O(II), suggesting its orbitals to be more s-like compared to O(III) (4 bonds), which dominates the DOS near the valence band maximum. The transitions X 3 and X 4 are assigned to take place between Ga(I) and O(III). The directions obtained from the model analysis of the DF do not fit as well as for transition X 2 , possibly caused by correlation effects due to the spectral overlap of different contributions to the DF. Finally, transition X 1 , directed almost along c, was assigned to take place either between O(I) and O(III) or between two O(II) atoms, or both. While the first possibility involves differently coordinated atoms, suggesting dipole-allowed transitions between p- and s-like orbitals, the second possibility involves only highly coordinated atoms (s-like character) and thus should be dipole forbidden. The relatively high amplitude of this transition is not clear at first glance because, following Ref. 33, the charge density between the involved atoms and also the DOS of the oxygen orbitals in the conduction band is predicted to be relatively weak.
These results nicely demonstrate the potential of the used model approach for the dielectric tensor to gain deep insight into electronic properties of highly anisotropic materials.
V. THIN FILM
As mentioned above, the PLD-grown β-Ga 2 O 3 thin film exhibits (201) surface orientation with 6 in-plane rotation domains, rotated by multiples of 60°. As their size is much smaller than the optically probed sample area of about 5 × 8 mm 2 , the measured optical response is determined by an average over these domains. For a uniform distribution of these rotation domains, the effective dielectric function is given by

ε eff = (1/6) Σ i R(φ i ) ε R −1 (φ i ), (8)

with φ i = (i − 1)π/3 the rotation angle of the i-th rotation domain (i = 1 . . . 6) and R(φ) being the rotation matrix around the surface normal. Equation (8) is equal to that of a uniaxial material with ε ⊥ = 0.5(ε ′ xx + ε yy ) and ε ∥ = ε ′ zz (⊥ and ∥: perpendicular and parallel to the optical axis), with the effective optical axis oriented along the surface normal. Note that ε ′ xx and ε ′ zz are the tensor components for the coordinate system with the x- and z-axis parallel and perpendicular to the sample surface, respectively.
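The domain averaging can be verified numerically: summing a monoclinic-like tensor over six domains rotated by 60° about the surface normal leaves a diagonal, uniaxial tensor whose in-plane component is the mean of the two in-plane entries while the out-of-plane entry is unchanged. A short Python check, with purely hypothetical tensor values:

```python
import numpy as np

def rot_z(phi):
    """Rotation matrix about the surface normal (z)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical tensor of a single domain (x, z axes parallel/perpendicular to surface)
eps = np.array([[4.1, 0.0, 0.3],
                [0.0, 3.8, 0.0],
                [0.3, 0.0, 3.9]], dtype=complex)

# Uniform average over the six rotation domains, phi_i = (i - 1) * pi / 3
eps_eff = sum(rot_z(i * np.pi / 3) @ eps @ rot_z(i * np.pi / 3).T for i in range(6)) / 6
```

The off-diagonal xz element of the single domain cancels in the average, which is why the film behaves as an effective uniaxial material despite its monoclinic constituent.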
For such samples, the sensitivity to ε ∥ is usually limited due to the high index of refraction of the investigated material, resulting in a propagation direction of the wave within the sample at only very small angles off the optical axis. But there is a finite projection of the electromagnetic field strength onto the optical axis, and thus the optical response is determined by both ε ⊥ and ε ∥ in any case, which have to be considered in order to obtain a physically meaningful dielectric function 35 . However, in contrast to a homogeneous uniaxial material, these effective ε ⊥ and ε ∥ are not independent of each other. As shown in Sec. II and demonstrated in Sec. IV, the components ε ′ xx and ε ′ zz reflect the same transitions and are related by the projection A ′ zz /A ′ xx = sin 2 φ ′ / cos 2 φ ′ of their amplitudes A (φ ′ is the angle of the oscillation direction of the individual dipoles with respect to the sample surface). This offers in the present case a higher sensitivity for the determination of the tensor component ε ∥ compared to homogeneous uniaxial materials.
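The amplitude projection that links the two effective components can be stated as a one-line check: for a dipole at angle φ′ to the surface, the out-of-plane and in-plane amplitudes scale as sin²φ′ and cos²φ′, so their ratio is tan²φ′. The angle below is arbitrary, chosen only for illustration.

```python
import numpy as np

phi = np.deg2rad(35.0)                         # arbitrary dipole angle to the surface
A_xx, A_zz = np.cos(phi)**2, np.sin(phi)**2    # projected oscillator amplitudes
ratio = A_zz / A_xx                            # equals tan^2(phi)
```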
The uniaxial behaviour of the film with the optical axis parallel to the surface normal is reflected by vanishing off-diagonal elements of the MM. Therefore, standard ellipsometry is sufficient for measuring the full optical response (cf. Sec. III). The experimental data are shown in Fig. 5 in terms of the pseudo-dielectric function ⟨ε⟩ = sin 2 Φ [1 + tan 2 Φ ((1 − ρ)/(1 + ρ)) 2 ], with Φ the angle of incidence. Below E ≈ 4.8 eV, oscillations due to multiple-reflection interferences caused by the interfaces within the sample are observed, which vanish with the onset of the absorption at higher energies.

For the parametric model of the dielectric function of the thin film we used the same set of model dielectric functions as for the bulk single crystal. The calculated spectra are shown as red solid lines in Fig. 5, and a good agreement between the experimental and calculated data is apparent. The tensor components of the dielectric function of the thin film are shown in Fig. 6. For comparison, the components calculated from the DF of the bulk single crystal by using Eq. (8) are shown as dashed lines. For the thin film, we needed to adjust the energies and amplitudes of the transitions and even the dipoles' orientation angles φ within the x-z-plane (a-c-plane). Compared to the DF of the single crystal, a blue-shift of the transition energies by up to 100 meV and a lowering of the oscillator strengths is observed for the thin film. The reduced oscillator strength in the investigated spectral range alone cannot explain the lowering of the real part of the dielectric constant, and therewith of the index of refraction, in the visible spectral range. Therefore, the reduced refractive index indicates also a reduced oscillator strength of the high-energy transitions compared to the bulk single crystal. We relate these changes of the DF properties compared to the bulk single crystal on the one hand to crystal imperfections, which typically lower the oscillator strength of electronic transitions by dissipative processes.
On the other hand, strain may also be present in the thin film, causing changes in the bond lengths and possibly also a torsion of the unit cell, resulting in different dipole moment orientations.
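The pseudo-dielectric function used above for Fig. 5 is the exact inversion of the ellipsometric ratio ρ for a two-phase (ambient/substrate) system. The Python sketch below computes ρ from the Fresnel coefficients of an abrupt interface and inverts it back, recovering the input permittivity exactly; the permittivity value is an arbitrary assumption for the check.

```python
import numpy as np

def fresnel_rho(eps, phi):
    """rho = r_p / r_s for an abrupt ambient (n = 1) / substrate interface."""
    w = np.sqrt(eps - np.sin(phi)**2 + 0j)
    rp = (eps * np.cos(phi) - w) / (eps * np.cos(phi) + w)
    rs = (np.cos(phi) - w) / (np.cos(phi) + w)
    return rp / rs

def pseudo_eps(rho, phi):
    """<eps> = sin^2(phi) * (1 + tan^2(phi) * ((1 - rho) / (1 + rho))^2)."""
    return np.sin(phi)**2 * (1 + np.tan(phi)**2 * ((1 - rho) / (1 + rho))**2)

phi = np.deg2rad(70.0)        # angle of incidence
eps_in = 3.8 + 0.5j           # arbitrary complex permittivity
eps_out = pseudo_eps(fresnel_rho(eps_in, phi), phi)
```

For a layered sample such as the thin film, ⟨ε⟩ is only an effective quantity; the exact round trip holds solely for the idealized two-phase model, which is why the transfer-matrix analysis of Sec. III is needed for the film.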
VI. SUMMARY
We have determined the dielectric function of β-Ga 2 O 3 by using a generalized oscillator model taking into account the direction of the dipole moment for each transition. Within this model, the components of the dielectric tensor within the Cartesian coordinate system are not independent of each other but are determined by the projection of the corresponding dipole direction. In doing so, we could determine the tensor components of the DF of β-Ga 2 O 3 bulk single crystals and thin films.
By means of the determined directions of the dipoles we assign the involved orbitals for the observed transitions. For the thin film we showed that the presence of rotation domains leads to the formation of an effective uniaxial material. The sensitivity to the out-of-plane component of the dielectric tensor is enhanced compared to pure uniaxial materials since it is connected to the in-plane component. This allows a precise determination of this component even if the optical axis is perpendicular to the surface, which is relevant for applications in optoelectronics.
Synchrotron X-ray diffraction study of a charge stripe order in 1/8-doped La$_{1.875}$Ba$_{0.125-x}$Sr$_{x}$CuO$_{4}$
Lattice distortions associated with charge stripe order in 1/8 hole-doped La$_{1.875}$Ba$_{0.125-x}$Sr$_{x}$CuO$_{4}$ are studied using synchrotron X-ray diffraction for $x=0.05$ and $x=0.075$. The propagation wave vector and charge order correlation lengths are determined with a high accuracy, revealing that the oblique charge stripes in orthorhombic $x=0.075$ crystal are more disordered than the aligned stripes in tetragonal $x=0.05$ crystal. The twofold periodicity of lattice modulations along the c-axis is explained by long-range Coulomb interactions between holes on neighboring CuO$_{2}$ planes.
The interplay between spin and charge correlations in hole-doped CuO 2 planes is widely believed to be related to the mechanisms of high-T c superconductivity. In La 2−x Ba x CuO 4 , which is a prototypical high-T c superconductor, anomalous suppression of superconductivity has been observed at around a specific hole concentration of x = 1/8, where the Low-Temperature-Tetragonal (LTT) crystal phase (P 4 2 /ncm symmetry) occurs 1,2 . Tranquada et al. have found the incommensurate spin-and charge orders in the LTT phase of La 1.6−x Nd 0.4 Sr x CuO 4 (LNSCO) with x = 0.12 3,4,5 . The results revealed that a strong relation exists between spin/charge ordering, crystal structure, and the suppression of high-T c superconductivity. Based on the stripe model 3,6 , these relationships can be explained by the pinning of dynamical charge stripe correlations by lattice potentials, resulting in the strong suppression of superconductivity. Recently, a systematic neutron scattering study of the incommensurate spin/charge order in La 1.875 Ba 0.125−x Sr x CuO 4 (LBSCO) with 0.05 ≤ x ≤ 0.085 has confirmed that charge ordering only occurs in LTT and LTLO (Low-Temperature-Less-Orthorhombic, Pccn symmetry) phases and competes with superconductivity, whereas the robustness of magnetic order depends weakly on crystal structure and T c suppression compared to charge order 7 . Hence, an understanding of the microscopic nature of charge order is important for clarifying the relationship between charge correlation and superconductivity.
Although charge order is observed via lattice distortions in neutron scattering, X-ray diffraction can, in principle, directly detect charge distributions, which would provide direct evidence of charge order. A recent synchrotron X-ray diffraction study of LNSCO at x = 0.12 has determined the propagation wave vector of the incommensurate charge order, Q ch = (±2ǫ 0 1/2) with ǫ = 0.118 r.l.u. (reciprocal lattice units) 8 . Although the superlattice observed in the X-ray diffraction study was mainly the result of lattice distortions, precise determination of the wave vector Q ch revealed that the lattice distortions are caused by the formation of charge stripe order. In LBSCO systems, neutron scattering measurements have found that the in-plane component of Q ch for x = 0.05 in the LTT structure is different from that for x = 0.075 in the LTLO structure, which suggests a strong relationship between stripe pattern and crystal symmetry 9 . However, detailed information about the three-dimensional correlation of the charge order is not available yet because no synchrotron X-ray diffraction measurements have been carried out on LBSCO systems. Synchrotron X-ray diffraction measurements of LBSCO with x = 0.05 and 0.075 were conducted to study the nature of charge stripe order in detail, and to examine the relationship between charge correlation and crystal structure.
X-ray diffraction experiments were performed at the Crystal Structure Analysis Beam Line (BL02B1) 10 of SPring-8. The X-ray energy was tuned to 30 keV using a sagittally bent Si(311) double monochromator. A double platinum mirror vertically collimates the incident beam and completely eliminates higher-order harmonics. Single crystals of LBSCO with x = 0.05 and x = 0.075 were obtained from the same batch as the crystals used in previous neutron scattering studies 7,9,11 . The cylindrical crystals are about 5 mm in diameter with a height of 1 mm. The reciprocal lattice is defined in the I4/mmm symmetry, where the two short axes correspond to the distance between nearest-neighbor Cu atoms along the in-plane Cu-O bond. At the (2 0 0) point, the longitudinal resolution (a*-axis) was about 0.014 Å −1 and the transverse resolutions along the b*- and the c*-axes were ∼ 0.005 Å −1 and ∼ 0.046 Å −1 , respectively. In a previous neutron scattering study 11 , the incommensurability of the elastic magnetic peaks of the x = 0.05 sample was found to be ǫ = 0.120 ± 0.001 r.l.u., indicating that the superlattice peaks observed in the present study indeed correspond to second-order harmonics of the magnetic order. The line widths are clearly broadened with respect to the instrument resolution (indicated by bold horizontal lines in the figures), which gives finite in-plane correlation lengths along the a-axis (≡ ξ a ) of 130 ± 20 Å and 120 ± 30 Å for x = 0.05 and x = 0.075, respectively.
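Correlation lengths of this kind follow from the intrinsic peak widths. As a hedged sketch (the paper does not spell out its convention), a common choice is ξ = 1/HWHM with the instrument resolution removed in quadrature, as appropriate for Gaussian line shapes, and the width converted from r.l.u. to Å⁻¹ via Q = 2πH/a. The lattice constant and the example widths below are illustrative assumptions, not values from this work.

```python
import numpy as np

def correlation_length(fwhm_meas, fwhm_res, a=3.78):
    """Correlation length (Angstrom) from a measured peak FWHM (r.l.u.).

    fwhm_meas : measured full width at half maximum (r.l.u.)
    fwhm_res  : instrument resolution FWHM (r.l.u.)
    a         : in-plane lattice constant (Angstrom); ~3.78 A is a typical
                Cu-Cu distance in La214 cuprates (assumption)
    """
    fwhm_int = np.sqrt(fwhm_meas**2 - fwhm_res**2)   # quadrature deconvolution
    hwhm = 0.5 * fwhm_int * 2 * np.pi / a            # half width in A^-1
    return 1.0 / hwhm

xi = correlation_length(0.02, 0.01)   # illustrative widths in r.l.u.
```

A narrower intrinsic peak yields a longer correlation length, so the broadened superlattice peaks directly encode the finite extent of the charge order.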
K-scan profiles of superlattice peaks at around Q = (2 − 2ǫ 0 0.5) are shown in Figs. 2(a) and (b) after background correction. Figure 2(c) shows a trajectory of the q-scan and the locations of the superlattice peaks in reciprocal lattice space. Note that the K-direction is perpendicular to the propagation wave vector Q ch . The peak for the x = 0.05 crystal is almost at K = 0 (indicated by a dashed line in the figure), whereas for the x = 0.075 crystal, the peak is clearly shifted away from K = 0. The amplitude of the peak shift was found to be 0.007 ± 0.001 r.l.u., the same as found in a previous neutron scattering study 9 . In addition, the x = 0.075 crystal used in the X-ray diffraction was composed of a single domain, unlike the crystal from the neutron scattering study 9 , which contained a twin due to the orthorhombic symmetry of Pccn. Hence, the shift of the superlattice peak in the x = 0.075 crystal is clearly not an artifact, with the quartet of superlattice peaks forming a regular rectangular shape in reciprocal lattice space, as shown by the open diamonds in Fig. 2(c). This arrangement of peaks satisfies the orthorhombic symmetry of the LTLO phase, not the tetragonal symmetry of the LTT phase, indicating that the pattern of charge order is closely related to crystal structure. The in-plane correlation length along the b-axis (≡ ξ b ) for the tetragonal x = 0.05 crystal was 110 ± 10 Å, similar to ξ a . In comparison, ξ b of the orthorhombic x = 0.075 crystal (70 ± 8 Å) is clearly shorter than ξ a and also shorter than the ξ b of the x = 0.05 crystal.

Figure 3 shows the L-dependence of the superlattice peaks around Q = (2 − 2ǫ 0 ± 0.5) for (a) x = 0.05 and (b) x = 0.075, corresponding to out-of-plane correlations. The plots show the difference between data at T = 11 K and 45 K. Raw data for the x = 0.05 crystal at each temperature are plotted in the inset of Fig. 3(a).
Intensities for both samples modulate sinusoidally and exhibit broad maxima at L = ±0.5 r.l.u., indicative of a twofold periodicity along the c-axis. The line width is much broader than the instrument resolution. Thus a rather short out-of-plane correlation length ξ c of ∼ 9 Å was obtained for both the x = 0.05 and 0.075 crystals, which is shorter than the next-nearest-neighbor (n.n.n.) distance between CuO 2 planes. The large anisotropy between ξ a,b and ξ c suggests two-dimensional charge correlations. Solid curves in Fig. 3 denote fits to the equation |F (L)| 2 ∝ |1 − e −i2πL | 2 = 4 sin²(πL). The good agreement of this equation with the data indicates an antiphase relationship between n.n.n. CuO 2 layers, which are separated by a distance c; this can be explained by a long-range Coulomb interaction between doped holes on the CuO 2 planes. The integrated intensity along L of the superlattice peak is ∼ 10 7 times weaker than that of the fundamental (2 0 0) Bragg reflection of intensity ∼ 10 8 cps. In addition, the relative intensity of the superlattice peak to the fundamental peak is ∼ 10 times weaker than found in the neutron scattering study. These results show that lattice distortions are the main contributor to the superlattice intensity and that the relative intensity is qualitatively consistent with a model in which the largest atomic displacement resulting from charge order is that of oxygen. The amplitude of the oxygen displacement along the a-axis can be estimated to be less than 10 −3 Å by a simple calculation based on the stripe model and using the measured relative intensities. The temperature dependence of the superlattice peak intensity and of the (3 0 0) reflection, which corresponds to the order parameter of the structural phase transition into the LTT or LTLO phase, was measured. Results are shown in Figs.
4(a) and (b). The T d2 transition temperatures for the x = 0.05 and x = 0.075 crystals were thus estimated to be 38 K and 34 K, respectively, almost identical to those obtained by neutron scattering 7 . Remarkably, the temperature dependence of the superlattice peak intensity (closed circles) is almost identical to that of the order parameter for the LTT/LTLO phase (open circles), suggesting that the ordering process of the charge order is closely related to that of the LTT/LTLO structural phase transition. These results are in marked contrast to the LNSCO system, where the superlattice peak evolves gradually as temperature decreases, whereas the LTT order parameter exhibits a first-order phase transition 4,8 .
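The L-dependence fit discussed above, |F(L)|² ∝ |1 − e^{−i2πL}|², describes the interference of two layers separated by c with an antiphase modulation, and the identity with 4 sin²(πL) can be checked numerically:

```python
import numpy as np

L = np.linspace(-1.0, 1.0, 2001)
# Two layers at distance c with a relative phase of pi (antiphase stacking)
F2 = np.abs(1 - np.exp(-2j * np.pi * L))**2
# This equals 4*sin^2(pi*L): zeros at integer L and broad maxima at
# half-integer L, i.e. the observed peaks at L = +/-0.5.
```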
The high Q-resolution as well as the high statistics of the present X-ray diffraction study have provided precise propagation wave vectors of the superlattice peaks associated with charge order, giving Q ch = (±0.24 ∓ η 1/2) with η = 0 and 0.007 r.l.u. for the x = 0.05 and x = 0.075 crystals, respectively. It is remarkable that the incommensurability ǫ = 0.12 r.l.u. of both samples is almost identical to that of LNSCO for x = 0.12 but is inconsistent with the hole doping x = 1/8 of the present samples. As can be seen in Fig. 4, charge order and the LTT structure are strongly coupled, showing that commensurability with the lattice is essentially important for stabilizing charge order. In this case, one can easily imagine that ǫ should have a commensurate value of 1/8, as predicted theoretically 12 . Tranquada et al. have noted that an incommensurate value of ǫ can be regarded as a disordered stripe in which there is a mixture of distinct stripe periods of 4a and 5a 14 . In scattering intensities calculated under this assumption, the charge order peak is broadened whereas the magnetic order peak remains sharp. In fact, in our LBSCO system, the intrinsic line width of the superlattice peak along the H-direction is considerably broader than that of the resolution-limited magnetic peaks observed by neutron scattering 7,11 . These results imply that charge stripe order in cuprates is intrinsically disordered in comparison with that of the isostructural system La 2−x Sr x NiO 4 , in which stripe order is mostly stabilized around commensurate positions with ǫ = 1/3 13 . It should be noted that the high two-dimensionality of the charge correlation (ξ a,b /ξ c > 6) could make the stripe correlation disordered.
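The disordered-stripe picture can be made quantitative with a simple averaging estimate (an illustrative assumption, not a calculation from Ref. 14): an incommensurability ǫ = 0.12 r.l.u. corresponds to a mean charge-stripe spacing of 1/(2ǫ) ≈ 4.17 lattice units, which a random mixture of 4a and 5a spacings reproduces when about 5/6 of the spacings are 4a.

```python
eps = 0.12                            # magnetic incommensurability (r.l.u.)
period = 1.0 / (2.0 * eps)            # mean charge-stripe spacing in units of a
f4 = (5.0 - period) / (5.0 - 4.0)     # fraction of 4a spacings in a 4a/5a mixture
```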
Line broadening of the superlattice peak is seen along both the H-direction and the K-direction. In particular, these systematic experiments using single-domain crystals have revealed that line widths along the K-axis for the orthorhombic x = 0.075 crystal are much broader than for the tetragonal x = 0.05 crystal. Based on the stripe model, the line width along K corresponds to the mosaicity of the charge stripes. In addition, the orthorhombic symmetry of the superlattice peaks in the x = 0.075 crystal suggests that the charge stripes are oblique. As Fujita et al. have noted 9 , a corrugated pattern in the CuO 2 plane in the LTLO phase can easily produce steps or kinks in the stripes, giving rise to oblique charge stripes. From this point of view, more oblique stripes could introduce steps or kinks more randomly, which yields charge stripe mosaicity. Therefore, oblique stripe order becomes more disordered, or smectic, in comparison with the aligned stripes, consistent with the present results.
In the LTT phase, the tilting pattern of the CuO 6 octahedra, i.e. the lattice potential pattern, is rotated by 90° with respect to the nearest-neighbor layers. Thus the wave vector of charge order is rotated by 90°. Furthermore, the phase of charge order is shifted by π from the n.n.n. layer to minimize the energy losses due to Coulomb interactions, giving rise to a twofold periodicity along the c-axis. Therefore, the 2c periodicity of the superlattice peaks suggests that the doped holes are indeed arranged one-dimensionally across the two-dimensional CuO 2 plane.
In conclusion, the propagation wave vector and three dimensional correlation of charge order in LBSCO systems were determined accurately using high-intensity synchrotron X-ray diffraction. Despite the 1/8-hole doping, the incommensurability of the superlattice peak (ǫ = 0.12 r.l.u.) is clearly shifted away from the commensurate value of 1/8, indicating that charge stripe order in cuprates is intrinsically disordered. The orthorhombic x = 0.075 crystal provided detailed information about the peak shift as well as the line width of the superlattice peak, indicating that the oblique stripes in x = 0.075 crystal are more disordered than the aligned stripes in x = 0.05 crystal. The charge order was also found to be 2c periodic and two-dimensional in nature. A proper determination of the atomic displacement pattern associated with the charge order is required to fully understand the essential nature of the (disordered) charge stripe order.
On the Effect of Heterophilic Antibodies on Serum Levels of Cardiac Troponins: A Brief Descriptive Review
Serum levels of cardiac troponins can be increased both with myocardial damage and in the absence of myocardial damage. In the latter case, this is due to the influence of false-positive factors, among which heterophilic antibodies play a significant role. Understanding the causes of the formation of heterophilic antibodies and the features and mechanisms of their effect on serum levels of cardiac troponins is an important condition for interpreting a false-positive result due to the influence of heterophilic antibodies. This brief descriptive review presents the causes of heterophilic antibody formation and discusses their effect on serum levels of cardiac troponins.
Introduction
Cardiospecific troponins T and I (cTnT and cTnI) are undoubtedly considered the most effective biomarkers of myocardial infarction (MI), because they satisfy the two main criteria of an ideal biomarker: high sensitivity and specificity [1][2][3][4]. At the same time, it is known that, apart from the myocardium, troponins are expressed in skeletal-muscle tissue and in the walls of the venae cavae and pulmonary veins [5][6][7][8].
From the moment the first immunoassays were invented, the methods for detecting cTnT and cTnI in serum have been continually refined, which led to a revolution in MI diagnostics. First of all, their sensitivity significantly increased: while the limit of detection (LoD), or minimum detectable concentration (MDC), of the first prototypes was about 100-500 ng/L, in modern immunoassays it can be below 1 ng/L [1,9,10]. It therefore became possible to detect troponin concentrations as low as 0.12 ng/L in healthy people, approximately 10 times lower than the concentrations detectable by standard high-sensitivity methods. Owing to such high sensitivity, troponin I was detected in 96.8% of completely healthy people [11].
The high sensitivity of the new (high-sensitivity and ultra-sensitive) test systems allowed for the development of early MI diagnostic algorithms. During the first 1-3 h, low levels of cTnT and cTnI, which used to be "invisible" to moderately sensitive test systems, became clearly identifiable by modern immunoassays [1,12,13]. Moreover, it became possible to demonstrate that cardiac troponins are present in low concentrations in oral fluid and urine [14][15][16][17][18][19]. This is a new and very promising direction in the non-invasive diagnosis of both cardiovascular diseases and pathologies that cause myocardial damage [20].
Taking into consideration the mechanisms behind the increase in cardiac troponins, three groups of causes can be identified (Table 1; Life 2022, 12, 1114): (1) increase in cTnT and cTnI levels associated with myocardial injury in primary cardiac disease, (2) increase in cTnT and cTnI levels associated with myocardial injury in non-cardiac diseases, and (3) increase in cTnT and cTnI levels associated with preanalytical and analytical factors. In the latter case, cTnT and cTnI levels increase without myocardial injury, owing to the influence of physical and chemical factors (hemolysis, lipemia, presence of clots in a sample, etc.) or biological factors (presence of heterophile antibodies; increase in the level of bilirubin, alkaline phosphatase, or rheumatoid factor) on the result of the laboratory test [31][32][33][34][35][36][37]. Abbreviations: cTnT-cardiac troponin T, cTnI-cardiac troponin I, MI-myocardial infarction, COPD-chronic obstructive pulmonary disease, CRF-chronic renal failure, PATE-pulmonary artery thromboembolism.
The increase in cTnT and cTnI associated with analytical and preanalytical factors not only bears no diagnostic or prognostic value but can also have an extremely adverse impact on the treatment and diagnostic process. With regard to the degree of increase in cardiac troponin concentrations, the most significant contribution is made by heterophile antibodies. This paper considers the causes and mechanisms of formation of heterophile antibodies and shows their influence on the concentrations of cTnT and cTnI. The methods for detection and control of this interference are discussed as well.
Influence of Heterophile Antibodies on the Concentrations of cTnT and cTnI: Clinical Data
Heterophile antibodies are endogenous antibodies in human serum/plasma that can interfere with immunoassays, leading to false elevation or (rarely) false depression of measured values [38][39][40][41][42]. The incidence of heterophile antibodies is extremely variable and amounts to 0.1-3% in the general population [38][39][40]. The main causes of formation of heterophile antibodies are contact with domestic and wild animals, blood transfusion, autoimmune diseases, hematologic malignancies, dialysis, and pregnancy [40]. Heterophile antibodies have recently attracted active attention because they significantly affect laboratory test values. Such interference concerns practically all medical fields and is by no means limited to cardiology. In the article "When lab tests lie. Heterophile antibodies", A. Morton notes that heterophile antibodies cause problems in the diagnostics of many diseases that rely on immunochemical (immunoenzymometric, immunochemiluminescent, immunofluorescence, radioimmune) methods for biomarker detection [40]. Heterophile antibodies may influence a wide range of laboratory tests, resulting in false elevation of tumor markers, hormone levels, MI markers, therapeutic-drug-monitoring results, etc. [39][40][41].
The first clinical case of a false-positive cTnI result caused by heterophile antibodies was described in 1998 by T. Fitzmaurice et al. [42]. A 69-year-old patient underwent infrarenal aneurysm surgery. The concentration of cTnI after the surgery was 106 µg/L, while the norm is 0.5 µg/L. Within 5 h of the first test, the concentration of cTnI increased approximately 1.5-fold, reaching 146 µg/L, which made the physicians suspect an ischemic myocardial injury. At the same time, the data of clinical and functional methods, including an ECG, did not confirm the development of myocardial ischemia, and the concentration of another marker, creatine phosphokinase MB isoform (CPK-MB), was within normal limits as well (2.9 µg/L). After treating this sample with a special solution, the heterophile antibody blocking agent (HABA), the concentration of cTnI decreased to 1.5 µg/L, though it was still above normal. When the patient's blood specimens (the untreated sample and the sample treated with the HABA) were investigated using other immunochemical test systems for detection of cTnI and cTnT, the detected concentrations of these biomarkers were normal.
Two years later, S. Kazmierczak et al. found a false-positive increase in cTnT and cTnI in a 75-year-old woman after surgery. Notably, during her hospital stay, the levels of cTnT and cTnI repeatedly increased, reaching 40 µg/L, and then sharply dropped, which is indirectly indicative of the influence of some other (non-ischemic) factors on the results. Moreover, upon incubation of the blood samples with nonimmune mouse serum, the levels of cTnI and cTnT decreased approximately two-fold in all the samples obtained from the patient [43].
K. Yeo et al. analyzed 200 serum samples with positive cTnI values. After adding the HABA, it turned out that four blood samples had false-positive results, i.e., the share of false-positive results was 2%. The cTnI concentrations in untreated blood samples and in samples treated with the HABA differed by a factor of 2 to 70. Testing of the same blood samples using another cTnI test system did not demonstrate any signs of heterophile antibody interference [44]. D. Uettwiller-Geiger et al. investigated the levels of cTnI in 101 patient serum samples with the Access AccuTnI test system (Beckman Coulter) and detected heterophile antibody interference in two samples (2%). Incubation of these samples with the HABA efficiently reduced the concentration of cTnI to reference values [45].
G. White et al. described a case of false-positive elevation of cTnT in a 46-year-old man who sought medical attention complaining of pains in the chest and left arm. The concentration of cTnT in whole blood, measured using the quantitative express test Roche CARDIAC T, was 0.59 µg/L, 5.9 times higher than the upper reference limit (0.1 µg/L). Due to suspected MI, the patient underwent coronary angiography, which appeared normal. Analysis of the cTnT concentration in the patient's serum on another analyzer (Roche T STAT) also did not show elevated values. After treatment of the whole blood with normal mouse serum (Sigma-Aldrich Co., St. Louis, MO, USA), the cTnT level dropped to the reference limit (0.1 µg/L) [46]. Italian researchers led by M. Cassin described two cases of false-positive cTnI increase caused by heterophile antibodies (Dade Behring RXL Dimension). In the first case, a 64-year-old woman with PATE had constantly varying elevated levels of cTnI (between 0.43 and 23 µg/L) against a reference limit of 0.13 µg/L. Although the physicians suspected a myocardial injury, which is often typical of PATE [47,48], they were confused by the fact that the ECG data and the CPK-MB values were absolutely normal; therefore, they assumed interference was taking place. After the patient's sample was treated with the HABA, the cTnI level turned out to be lower than the upper control limit (<0.13 µg/L). The second patient, admitted to the emergency department with chest pains, also had fluctuations of the cTnI level (Dade Behring RXL Dimension) between 0.19 and 0.36 µg/L, which made the medical team think of MI. However, the normal ECG results and CPK-MB levels again confused the physicians during the final diagnosis. Treatment of the samples with the HABA solution led to normalization of cTnI values and helped to exclude the diagnosis of MI [49].
Knoblock et al. described a clinical case of false-positive elevation of cTnI in a 53-year-old patient who had had an MI in the past. They were hospitalized for complaints of chest pain, and myocardial reinfarction was suspected. The concentration of cTnI (Abbott AxSYM cTnI) in their serum at admission was 6.2 µg/L, significantly higher than the reference value (0.4 µg/L). When blood was taken 8 and 24 h after admission, the levels of cTnI were 5.5 and 5.1 µg/L, respectively. However, the ECG data and the levels of CPK and CPK-MB were within normal limits. A dipyridamole test also showed no signs of myocardial ischemia, and, according to echocardiography, the ejection fraction complied with the norm. Later, this patient applied to the emergency department several times with similar complaints. The results of a coronarography did not detect any abnormalities of the lumen of the coronary vessels. In general, within 3 months, this patient had 16 positive cTnI results on the AxSYM analyzer. Because the constantly elevated cTnI levels did not correspond to the clinical picture, the data of the clinical and functional investigation methods, or the laboratory results for other cardiac markers, a false-positive result was suspected. When cTnT (Roche Elecsys 2010) and cTnI were measured using another test system (Dade-Behring Dimension RxL), the results turned out to be significantly lower than the LoD for these test systems. Notably, as opposed to other cases, addition of the HABA in this patient led to an even greater increase in the cTnI concentration. To eliminate the interfering antibodies, the researchers passed the serum sample through an immobilized protein A column (Sigma-Aldrich Co.), after which the concentration of cTnI decreased from 7.9 µg/L to 0.2 µg/L [50].
W. Kim et al. conducted a survey in which they detected 25 cases of false-positive elevation of cTnI (Dade Behring RXL Dimension) caused by heterophile antibodies of class G (IgG). These patients suffered from different diseases that, in the opinion of the researchers, could induce the formation of heterophile antibodies: endocrine disorders, recent surgeries, heart diseases, CRF, pulmonary diseases (COPD, PATE), gastrointestinal diseases, cancer, and connective-tissue diseases. The researchers then estimated the efficiency of the HABA in neutralizing the interfering effect of heterophile antibodies. In 9 of 13 serum samples, the HABA was effective and resulted in a significant decrease in cTnI levels. At the same time, in four serum samples, the researchers failed to normalize the cTnI values using the HABA, as well as using mouse serum containing immunoglobulins IgG1/IgG2a, on the basis of which the authors assumed the presence of interfering antibodies specific to assay components other than typical heterophile antibodies or mouse immunoglobulin [51].
M. Zaninotto et al. registered a case of false-positive cTnI increase in a 29-year-old woman with a history of infectious myocarditis. The cTnI concentrations, determined using the Dade-Behring RxL, were significantly elevated all the time and varied between 6.0 µg/L and 12.2 µg/L, while the norm is up to 0.15 µg/L. Having suspected a falsely elevated value due to the inconsistency between the laboratory data and the clinical picture, the researchers measured the cTnI levels using other test systems; the concentrations in this case turned out to be diagnostically insignificant. Having assumed the influence of heterophile antibodies, the researchers treated the serum sample with the HABA, which resulted in a sharp drop of the cTnI concentration from 7.73 µg/L to 0.15 µg/L [52].
S. Fleming et al. examined the incidence of false-positive cTnI values (Access AccuTnI). For that purpose, the researchers systematically incubated all the serum samples that exceeded the diagnostic threshold. The total incidence of falsely elevated troponins attributable to the influence of heterophile antibodies was 3.1% (95% CI, 2 to 4.4%) in the general population and 14.8% (95% CI, 9.9 to 20.9%) in patients with diagnostically significant cTnI values [53].
Investigating the levels of cTnI (Dade-Behring RXL Dimension) in the serum of 60 patients with legionellosis, M. Garcia-Mancebo et al. discovered that in 47% of cases the concentration of cTnI exceeded the reference limit (0.1 µg/L). The authors noted a remarkable observation: regression analysis between the serum antibody titer to Legionella pneumophila and the cTnI concentration in samples with interference demonstrated a significant correlation (r = 0.72; p < 0.05) [54].
C. Bionda et al. reported a case of false-positive elevation of the cTnI level (Dade-Behring X-Pand) in a patient hospitalized for asthenia, exophthalmos, and sinus tachycardia. Although the concentration of cTnI was significantly elevated, the data of the clinical picture and an ECG did not correspond to ischemic myocardial injury. After the blood sample was treated with the HABA, the concentration of cTnI dropped from 11.4 µg/L to 0.08 µg/L [55].
Y. Zhu et al. found false-positive cTnI in an 88-year-old patient sent to the emergency department for aspiration pneumonia. The level of cTnI measured with the Siemens ADVIA Centaur test system was 19.99 µg/L, while the norm is up to 0.06 µg/L. Due to the inconsistency between the cTnI values and the clinical picture, the decision was taken to repeat the investigation of the blood sample using another test system, Abbott i-STAT cTnI. As suspected, the cTnI concentration turned out to be lower than the upper reference limit (<0.09 µg/L). After incubation of the serum sample obtained from this patient, the level of cTnI measured via the Siemens ADVIA Centaur test system significantly decreased, to 0.03 µg/L [56].
S. Ghali et al. described a clinical case of falsely elevated cTnI in a 74-year-old patient admitted to the emergency department with a clinical picture resembling MI: chest pain and increasing dyspnea [57]. The level of cTnI (Beckman Coulter Access AccuTnI) was significantly elevated, at 77.28 µg/L, while the reference range for this test system is 0.00-0.04 µg/L. At the same time, the ECG data did not correspond to myocardial ischemia but were indicative of a right bundle-branch block and hypertrophy of the walls of the left heart chambers. Other cardiac markers in this patient, as opposed to cTnI, were within the reference limits: myoglobin (50 ng/mL), CPK-MB (5.2 ng/mL), CPK (74 IU/L), and D-dimers (0.33 µg/L). Transthoracic echocardiography revealed hypertrophy of the left ventricle, mild diastolic dysfunction, and a normal ejection fraction, without any signs of regional contractility disorders. Notwithstanding the fact that these morphological changes of myocardial hypertrophy can, to some extent, explain an increase in the concentration of cardiac troponins, in this case the increase turned out to be too significant and could by no means correspond to the patient's condition. Besides, the level of creatinine, reflecting kidney filtration, was also normal, which additionally excluded impaired elimination from the bloodstream as a reason for the increase in cTnI. The levels of cTnI remained elevated during the whole hospitalization period, staying disproportionate to the other cardiac markers (myoglobin, CPK-MB, and CPK), which either stayed normal or were only insignificantly elevated during the whole period. Doubting the cTnI results obtained in their laboratory, the team decided to send the patient's serum samples to another hospital's laboratory, which used another immunological method of detecting cTnI (Siemens ADVIA Centaur).
The levels of cTnI in the samples obtained from this patient were constantly lower than 0.01 µg/L. Carrying out further investigation to clarify the influence of heterophile antibodies, the researchers added to the patient's serum 10 different blocking agents manufactured by Scantibodies Laboratory: class G immunoglobulins (goat IgG, mouse IgG, rabbit IgG, bovine IgG), Poly Mak 33, Scavenger ALP, AP Mutein, HBR-1, HBR-non murine, and TRU block. The first seven agents listed above are specific agents blocking heterophile antibodies; the last three are agents blocking nonspecific heterophile antibodies. Nine of the ten blocking agents did not influence the resulting cTnI level. However, adding the nonspecific blocking agent HBR-1 reduced the troponin result by more than 90% of its initial value, which is indicative of successful blockage of heterophile antibodies in this patient. Another interesting observation of the authors was that the hemoglobin level during the whole period of the patient's hospitalization correlated very closely with the concentration of cTnI, meaning it actually reflected the influence of heterophile antibodies and could be used as a surrogate marker of the heterophile antibody titer. Thus, in the course of hospital treatment, the patient had a duodenal ulcer hemorrhage caused by anticoagulant therapy, which led to a drop in hemoglobin levels followed by a decrease in the cTnI level, caused by the drop in the heterophile-antibody titer [57].
In another report, the researchers assumed that the observed changes in troponin values were connected with pregnancy [58].
J. Nguyen et al. also recently reported an interesting clinical case of false-positive cTnI elevation (Access AccuTnI + 3 TM) in a 52-year-old patient who requested medical assistance in the emergency department for complaints of chest pain. The level of cTnI was elevated at admission and remained so (neither increasing nor decreasing) during the whole in-treatment period. At the same time, the ECG data and the results for surrogate biomarkers of MI (CPK, CPK-MB, myoglobin) during the whole hospitalization period did not indicate ischemia and/or myocardial-tissue injury. The patient also underwent echocardiography, heart catheterization, and computed tomography; however, only insignificant cardiac changes were detected (first-degree tricuspid regurgitation, insignificant pericardial effusion, and signs of a non-obstructive lesion in the coronary bed). The medical team expressed doubts about the laboratory result and sent the patient's serum sample for cTnI testing to another laboratory applying the Advia Centaur XP TnI-Ultra assay. Troponin in this patient appeared negative. Carrying out further investigation to identify the causes of interference, the researchers diluted the serum sample two-fold, which under normal conditions should have resulted in a proportional drop of the troponin level; however, the values in that case, on the contrary, increased to 6.13 µg/L, which is one of the signs of heterophile-antibody interference. At the same time, adding the HABA did not lead to normalization of the troponins, and when the serum was tested for heterophile antibodies, they turned out to be negative [59].
L. Manjunath et al. reported a case of possible influence of heterophile antibodies on the concentration of troponin I in a young patient [60]. The patient was admitted to the emergency department with chest discomfort. The level of cTnI at admission was 0.123 µg/L (while the norm is up to 0.055 µg/L) and later rose to 0.124 and 0.213 µg/L, which is typical of a classical MI. Besides, the MI hypothesis was supported by the patient's adverse lipid profile: fasting total cholesterol was 235 mg/dL, low-density lipoprotein was 170 mg/dL, high-density lipoprotein was 38 mg/dL, and triglycerides were 124 mg/dL. However, the laboratory data for other MI surrogate markers did not detect any signs of myocardial injury that could lead to such a significant increase in troponin I. When taking the history, the medical team found out that the patient actively participated in sports, and on the day before admission they had run several kilometers while preparing for a marathon race, which made the researchers think about the influence of physical exercise [61,62]. Nevertheless, in this case it is highly unlikely that the result was distorted only by the impact of physical exercise. Further follow-up of the patient always showed chronic troponinemia, even independently of physical exercise. Therefore, having excluded all known factors, the researchers came to the conclusion of a possible influence of heterophile antibodies [60].
There are few research papers about the influence of heterophile antibodies on the concentrations of high- and ultra-sensitive troponins [63][64][65][66][67]. S. Baroni et al. described a case of false-positive elevation of high-sensitivity troponin I (TNIH Centaur XPT Siemens) in a 52-year-old man admitted with chest pains radiating to the left arm. The physical examination data were normal. An ECG registered sinus rhythm and the absence of significant changes of the ST segment and the T wave. Arterial blood pressure was elevated, to 170/90 mm Hg. The patient's laboratory values, including creatinine and creatine kinase, were within the reference ranges. Cardiac troponin measured using the standard cTnI-ultra Siemens test (the norm is up to 0.040 µg/L) was negative both at admission and 3 h afterward (0.012 and 0.008 µg/L, respectively). Along with that, investigation of the blood sample using the high-sensitivity TNIH Centaur XPT Siemens analyzer (the norm is up to 47 ng/L) detected elevated levels of troponin I: 129 ng/L and 140 ng/L at admission and after 3 h, respectively. Before that episode, the researchers had tested about 100 healthy individuals using this kit, and the values of all of them fell within the reference range. In addition to the laboratory examination, the patient also underwent an ECG, echocardiography, and an ECG stress test, which did not detect any signs of myocardial ischemia that could have led to such a remarkable increase in troponins. Later, the physicians continued observations, and upon blood collection after 6 and 12 h elevated levels were again registered with the TNIH Centaur XPT Siemens analyzer (132 ng/L and 128 ng/L, respectively), while the cTnI-ultra Siemens analyzer, on the contrary, always showed negative levels.
In order to identify the cause of interference, the patient's serum was serially diluted with a serum sample with an undetectable troponin concentration, for the purpose of checking the linearity of the results. Serial dilutions demonstrated that the values obtained using the TNIH Centaur XPT kit were non-linear (1:2, 55 ng/L; 1:4, 33 ng/L; 1:8, 21 ng/L; 1:16, 12 ng/L; 1:32, 9 ng/L), which suggests the presence of an interfering substance in the patient's sample [63].
N. Lacusik et al. presented a clinical case of false-positive cTnI in a 53-year-old woman hospitalized via the emergency department for complaints of retrosternal discomfort. Upon examination, no peculiarities were found, an ECG showed no ischemic changes, and echocardiography did not detect any areas of contractility disorders, but the cTnI value was 1359 ng/L, which led to the decision to perform a coronarography. The results of the intervention showed borderline 70% stenosis of the left circumflex artery in the proximal segment, followed by drug-eluting stent insertion. The patient was discharged on the seventh day without complications. Three weeks later, rehospitalization was required due to complaints of retrosternal pain. An ECG showed no ischemic changes; however, because of a significant increase in cTnI, an emergency coronarography was performed. No stent thrombosis, restenosis, or new stenoses in other arteries were identified. During the whole hospitalization period, the cTnI values remained elevated. The patient was then transferred to the rehabilitation department, where they also complained of retrosternal pains, and the cTnI concentration was 1111 ng/L. Several repeated measurements showed that the level remained similar. Taking this into consideration, the researchers suspected the presence of heterophile antibodies [68].
L. G. Santos et al. wrote about a 57-year-old patient with a stable elevation of cTnI after clinically suspected myocarditis. The patient was hospitalized via the emergency department for complaints of retrosternal pain radiating to the left upper extremity with a duration of more than 3 h. Clinical examination showed no peculiarities, except that the patient reported having been sick with the flu two weeks before hospitalization. An ECG registered negative T waves in leads V1 and V2 and biphasic T waves in leads V3-V5. An increase in creatine kinase, to 380 IU/L (reference value 10-195 IU/L), and in cTnI, to 6.24 ng/mL (norm less than 0.04 ng/mL), was detected. The patient underwent a coronarography, which detected no obstructive lesion of the coronary arteries. Myopericarditis was diagnosed, the corresponding treatment was initiated, and the patient was discharged. Four weeks later, the patient sought medical attention again with similar complaints. The level of cTnI was 10.46 ng/mL. A repeated coronarography detected no abnormalities. On the fourth day of hospitalization, the level of cTnI was 26.81 ng/mL. Due to the increase in the concentration of cardiac troponin without a visible clinical picture of MI, the researchers assumed the presence of heterophile antibodies circulating in the blood, which was proven later [39].
Taking into consideration what was said above, almost all test systems used for detection of cTnT and cTnI are subject to the influence of heterophile antibodies. According to some researchers, a certain test system is weakly influenced by heterophile antibodies, while other researchers claim that the same test system can often show false-positive results. Therefore, it is not yet possible to identify the most reliable test system, i.e., the one influenced by heterophile antibodies to the least extent. At the same time, there is currently no 100% reliable method that could ensure neutralization of the heterophile-antibody effect. The main mechanism of influence of heterophilic antibodies is their ability to interact with the diagnostic antibodies (antibodies against cTnT and cTnI) (Figure 1). (A) Absence of interference. At the first stage, the analytes (molecules of cardiac troponins) released in the case of myocardial injury interact with the first ("capture") antibodies, which results in the formation of an antigen-antibody complex. Then, the second ("labeled") antibodies bind to this complex, which leads to the formation of a "sandwich-type" immune complex. The label on the second antibodies generates a signal whose level is directly proportional to the quantity of antigen-antibody complexes formed at the first stage, i.e., to the concentration of cardiac troponin molecules in the examined biological fluid sample.
(B) Presence of interference. Heterophile antibodies can nonspecifically bind to the capture antibodies at the first stage, in the absence of the analytes of interest (molecules of cardiac troponins) in the biological fluid, and therefore lead to false-positive results. D. Herman et al. suppose that the influence of heterophile antibodies on cardiac troponin levels can depend on the type of sample being examined. For troponin testing, the following types of samples may be used, depending on the applied tube, which can contain additives of different compositions: serum-separator tube with a red or gold top (without additive), plasma-preparation tube with a green top (contains the anticoagulant heparin), and whole-blood tube with a purple top (contains the anticoagulant EDTA; this tube can also be centrifuged for plasma preparation). The most frequently used tubes for quantitative detection of cardiac troponins are serum-separator tubes and heparinized tubes. The researchers found that examining blood collected from the same patient using different tubes can give different troponin levels. It is assumed that the possible difference between samples of serum (red-top tube) and heparinized plasma (green-top tube) may appear if the antibodies are affected by heparin. Since the cTnI molecule carries a high positive charge, it attracts negatively charged molecules such as heparin, which, in turn, can hinder the interaction of antibodies and antigens, thus reducing their interfering influence [69]. Additional studies are needed to confirm this phenomenon.
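The bridging mechanism described for Figure 1 can be sketched as a toy numerical model (an illustration only, not from the review; the gain constants are invented): the analyzer reads a signal that sums a troponin-dependent term and a heterophile bridging term, then back-calculates a concentration assuming no interference is present.

```python
# Toy model of sandwich-immunoassay interference (illustrative only; all
# constants are made-up). Heterophile antibodies bridge capture and labeled
# antibodies directly, generating signal even with no troponin in the sample.

def assay_signal(troponin_ng_l, heterophile_titer=0.0,
                 gain=1.0, bridge_gain=50.0):
    """Measured signal = true analyte signal + heterophile bridging signal."""
    return gain * troponin_ng_l + bridge_gain * heterophile_titer

def reported_troponin(signal, gain=1.0):
    """The analyzer back-calculates concentration assuming no interference."""
    return signal / gain

true_conc = 0.0  # no myocardial injury
sig = assay_signal(true_conc, heterophile_titer=2.0)
print(reported_troponin(sig))  # 100.0 ng/L reported -> a false-positive result
```

In this toy model the reported value scales with the heterophile titer rather than with the true analyte concentration, which mirrors the observation in the Ghali case that the reported cTnI tracked a surrogate of the antibody titer.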
Methods for Detection and Control of Heterophile Antibodies
Blocking agents are used to control heterophile antibodies. An additional and rather effective way of controlling heterophile antibodies is several-fold dilution of the patient's serum samples with a check of the linearity (proportionality) of the results: under normal conditions, if a patient's sample with a determined concentration is diluted two-fold, the level of troponins should also decrease approximately two-fold; in the case of four-fold dilution, the level of troponins should decrease four-fold, and so on. For a complete check of the linearity, it is optimal to consider the results of 3-4 serial dilutions. If the level of troponins in serum diluted several-fold decreases disproportionately or even increases, the influence of heterophile antibodies should be suspected. Other methods for detection of heterophile antibodies are not supported by a sufficient evidence base, as they have not demonstrated reproducible results.
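The dilution-linearity check described above can be sketched in a few lines of Python (a hypothetical illustration, not from the review; the 20% recovery tolerance is an assumption, and the example series reuses the values reported for the TNIH Centaur XPT case: undiluted 129 ng/L; 1:2, 55 ng/L; 1:4, 33 ng/L; 1:8, 21 ng/L; 1:16, 12 ng/L; 1:32, 9 ng/L):

```python
# Sketch of a serial-dilution linearity check for suspected heterophile
# interference. The tolerance threshold is an illustrative assumption.

def dilution_recovery(undiluted, dilutions):
    """Percent recovery (observed / expected * 100) per dilution factor."""
    return {f: round(obs / (undiluted / f) * 100, 1)
            for f, obs in dilutions.items()}

def looks_nonlinear(recoveries, tolerance=20.0):
    """Flag the sample if any recovery deviates more than `tolerance`% from 100%."""
    return any(abs(r - 100.0) > tolerance for r in recoveries.values())

series = {2: 55.0, 4: 33.0, 8: 21.0, 16: 12.0, 32: 9.0}
rec = dilution_recovery(129.0, series)
print(rec)                   # recovery drifts upward at higher dilutions
print(looks_nonlinear(rec))  # True -> suspect an interfering substance
```

A linear sample would yield recoveries close to 100% at every dilution; here the recovery climbs well above 100% as the dilution increases, reproducing the non-linearity that led Baroni et al. to suspect an interfering substance.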
According to clinical studies, the prevalence of false-positive results due to the influence of heterophilic antibodies is 2-47% [41][42][43][44][45][53][54] (Table 2). The reason for this wide range is not definitively known. Hypothetically, it may be related to the cause of heterophilic-antibody formation (for example, in patients with legionellosis the frequency of false-positive results is very high [54]), the method of determining cardiac troponins, the characteristics of the examined population, etc. Very few studies have been devoted to this issue, and more research is needed to clarify it. This would help identify the people most at risk of false-positive results due to the influence of heterophilic antibodies and justify the need for routine use of HABA solutions.
Conclusions
Thus, heterophile antibodies are an uncommon yet significant cause of false-positive troponin concentrations. Apparently, heterophile antibodies influence all known troponin test systems. Their mechanism of influence involves interference at the stage of immunological interaction between the diagnostic antibodies and the antigen (the troponin molecule of interest). The main and most reliable method for their detection is the use of blocking agents, the addition of which leads to a decrease in or complete normalization of the measured concentrations. If such an agent is not available in the laboratory, an indirect method of heterophile detection can be applied, which involves serial dilution of the sample. If troponin levels in the sample change in a manner significantly disproportionate to the dilution, the presence of heterophile antibodies may be suspected.
Usefulness of Diastolic Function Score as a Predictor of Long-Term Prognosis in Patients With Acute Myocardial Infarction
Background: Left ventricular diastolic function (LVDF) evaluation using a combination of several echocardiographic parameters is an important predictor of adverse events in patients with acute myocardial infarction (AMI). To date, the clinical impact of each individual LVDF marker is well-known, but the clinical significance of the sum of the abnormal diastolic function markers and the long-term clinical outcome are not well-known. This study aimed to investigate the usefulness of the LVDF score in predicting clinical outcomes of patients with AMI. Methods: LVDF scores were measured in 2,030 patients with AMI who underwent successful percutaneous coronary intervention from 2012 to 2015. Four LVDF parameters (septal e′ < 7 cm/s, septal E/e′ > 15, TR velocity > 2.8 m/s, and LAVI > 34 ml/m²) were used for LVDF scoring. The presence of each abnormal LVDF parameter was scored as 1, and the total LVDF score ranged from 0 to 4. Mortality and hospitalization due to heart failure (HHF) in relation to the LVDF score were evaluated. To compare the predictive ability of LVDF scores and left ventricular ejection fraction (LVEF) for mortality and HHF, receiver operating characteristic (ROC) curve and landmark analyses were performed. Results: Over the 3-year clinical follow-up, all-cause mortality occurred in 278 patients (13.7%), while 91 patients (4.5%) developed HHF. All-cause mortality and HHF significantly increased as LVDF scores increased (all-cause mortality: LVDF score 0, 2.3%; score 1, 8.8%; score 2, 16.7%; score 3, 31.8%; score 4, 44.5%; p < 0.001; HHF: LVDF score 0, 0.6%; score 1, 1.8%; score 2, 6.3%; score 3, 10.3%; score 4, 18.2%; p < 0.001). In multivariate analysis, a higher LVDF score was associated with significantly higher adjusted hazard ratios for all-cause mortality and HHF. In landmark analysis, the LVDF score was a better predictor of long-term mortality than LVEF (area under the ROC curve: 0.739 vs. 0.640, p < 0.001).
Conclusion: The present study demonstrated that LVDF score was a significant predictor of mortality and HHF in patients with AMI. LVDF scores are useful for risk stratification of patients with AMI; therefore, careful monitoring and management should be performed for patients with AMI with higher LVDF scores.
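The four-parameter scoring described above can be sketched as a simple function. The abnormality cutoffs below (septal e′ < 7 cm/s, E/e′ > 15, TR velocity > 2.8 m/s, LAVI > 34 ml/m²) follow the thresholds used in the body of the paper; the function name and interface are illustrative, not from the study's analysis code.

```python
def lvdf_score(septal_e_prime, septal_e_over_e_prime, tr_velocity, lavi):
    """Sum of abnormal diastolic-function markers (0-4), one point each."""
    abnormal = [
        septal_e_prime < 7.0,        # septal e' in cm/s
        septal_e_over_e_prime > 15.0,  # septal E/e' ratio
        tr_velocity > 2.8,           # peak TR velocity in m/s
        lavi > 34.0,                 # left atrial volume index in ml/m^2
    ]
    return sum(abnormal)

print(lvdf_score(8.0, 10.0, 2.5, 30.0))  # 0: all parameters normal
print(lvdf_score(6.0, 16.0, 3.0, 40.0))  # 4: all parameters abnormal
```

A patient scoring 3 or 4 would fall into the highest-risk strata reported in the Results (31.8% and 44.5% three-year mortality, respectively).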
INTRODUCTION
Acute myocardial infarction (AMI) is characterized by regional myocardial injury that may lead to systolic and diastolic dysfunction due to left ventricular (LV) remodeling. Left ventricular diastolic function (LVDF) after AMI is an important predictor of major adverse events (1)(2)(3). The 2009 guidelines for assessing diastolic dysfunction included many parameters and were perceived as overly complex (4). In 2016, the guidelines were revised to simplify the measurement of LVDF, thereby enhancing their usefulness in routine practice (5,6). The revised guidelines recommend two separate algorithms. For patients with preserved left ventricular ejection fraction (LVEF ≥ 50%) and unknown diastolic function, Algorithm A is primarily used to classify normal and abnormal diastolic function, while Algorithm B is designed to estimate LV filling pressure and grade diastolic function in patients with a reduced (<50%) or preserved LVEF and known or suspected diastolic dysfunction. However, if the patient's diagnosis is unknown or the LVEF is marginal (45-55%), selecting an algorithm for LVDF evaluation is problematic. Therefore, there is a need for an LVDF assessment that can be easily applied in clinical practice by providers with different levels of expertise.
Recently, Oh et al. proposed a simplified and unified algorithm for LVDF assessment (7). This algorithm benefits clinical practice by simplifying the assessment and by avoiding discordance and false calls of diastolic dysfunction, thereby achieving high specificity. To date, the clinical impact of each individual LVDF marker is well-known, but the clinical significance of the sum of the abnormal diastolic function markers and the long-term clinical outcome are not well-known (8). This study aimed to investigate the usefulness of the LVDF score in predicting clinical outcomes of patients with AMI.
Patient Population
All patients with AMI registered at Chonnam National University Hospital from 2011 to 2015 were included in the study. Of the initial 3,009 patients, 2,030 patients who underwent successful primary percutaneous coronary intervention (PCI) and transthoracic echocardiography (TTE) were selected. Patients with moderate to severe mitral regurgitation (MR), mitral annular calcification, atrial fibrillation, those who did not undergo PCI, those who underwent suboptimal or failed PCI, those with no echocardiography findings, and those with insufficient TTE imaging or loss to follow-up were excluded (Supplementary Figure 1). AMI is defined as cardiomyocyte necrosis in a clinical setting consistent with acute myocardial ischemia (9). It was diagnosed by clinical presentation, serial changes on echocardiography suggesting infarction, and an increase in cardiac markers, preferably cardiac troponins, with at least one value above the 99th percentile of the upper reference limit. The study complies with the Declaration of Helsinki, and the local institutional review board (IRB) of the study center approved the study protocol (CNUH 05-49). Written informed consent was obtained from each study patient.

Abbreviations: AMI, acute myocardial infarction; AUC, area under the receiver operating characteristic curve; eGFR, estimated glomerular filtration rate; HF, heart failure; LAVI, left atrial volume index; LV, left ventricular; LVDF, left ventricular diastolic function; LVEF, left ventricular ejection fraction; LVH, left ventricular hypertrophy; PCI, percutaneous coronary intervention; RCA, right coronary artery; ROC, receiver operating characteristic; RV, right ventricular; STEMI, ST-segment elevation myocardial infarction; TR, tricuspid regurgitation; TTE, transthoracic echocardiography; WMSI, wall motion score index.
Echocardiographic Data and Study Definition
A comprehensive transthoracic echocardiogram was obtained within 48 h of admission for all patients. All TTE measurements were recorded during routine clinical practice according to the current American Society of Echocardiography (ASE/EACVI) recommendations (10). Left ventricular systolic function was assessed by LVEF obtained using the biplane method of disk summation from the apical 2- and 4-chamber views, according to the modified biplane Simpson's method. To calculate the wall motion score index (WMSI), the LV was divided into 16 segments. Each segment was assessed and scored based on its motion and systolic thickening (1 = normokinesia, 2 = hypokinesia, 3 = akinesia, 4 = dyskinesia). The WMSI was calculated as the sum of the individual segment scores divided by the number of segments (11). Left atrial (LA) volume was assessed using the modified biplane Simpson's method from the apical 2- and 4-chamber views at end-systole and indexed to body surface area. In cases in which Simpson's method could not be used due to missing or poor-quality apical views, the LA volume index (LAVI) was calculated using the cube method (12). Peak early diastolic tissue velocity (e') was measured from the septal aspect of the mitral annulus, while mitral inflow velocity was assessed using pulsed-wave Doppler from the apical 4-chamber view (5). The right ventricular (RV) functional measure was tricuspid annulus systolic tissue Doppler velocity (s'), with RV dysfunction defined as s' < 10 cm/s. Peak tricuspid regurgitation (TR) velocity was measured, and pulmonary artery systolic pressure (PASP) was estimated as 4 × (peak TR velocity)² + 5 mmHg (5).
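The two derived measures above reduce to direct arithmetic, sketched below. The function names are illustrative, and the PASP formula assumes the fixed 5 mmHg right atrial pressure term used in the text.

```python
def wmsi(segment_scores):
    """Wall motion score index: mean of the 16-segment scores
    (1 = normokinesia, 2 = hypokinesia, 3 = akinesia, 4 = dyskinesia)."""
    return sum(segment_scores) / len(segment_scores)

def pasp(peak_tr_velocity, ra_pressure=5.0):
    """Pulmonary artery systolic pressure estimate (mmHg):
    4 x (peak TR velocity in m/s)^2 + assumed right atrial pressure."""
    return 4.0 * peak_tr_velocity ** 2 + ra_pressure

# All 16 segments normokinetic -> WMSI of 1.0 (normal wall motion)
print(wmsi([1] * 16))
# At the 2.8 m/s TR-velocity cutoff: 4 * 2.8^2 + 5, approx. 36.4 mmHg
print(pasp(2.8))
```

Note how the 2.8 m/s TR-velocity cutoff used in the LVDF score corresponds to an estimated PASP of roughly 36 mmHg.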
Clinical Data Collection
Demographic features and cardiovascular risk factors were obtained via patient interviews or review of medical records. During admission, findings of coronary angiography and detailed procedural characteristics of PCI, as well as data on discharge medications were collected. Patient treatment was performed according to current standard practice. After PCI, all patients were recommended to take aspirin indefinitely with clopidogrel or a potent P2Y12 inhibitor, such as prasugrel or ticagrelor, for at least 1 year.
Clinical Outcomes
The incidence of mortality and hospitalization due to heart failure (HHF) in relation to the LVDF score over the 3-year study period was evaluated. All causes of death were considered cardiac unless an apparent non-cardiac cause was stated. Readmission for HF was defined as the patient showing signs and symptoms of HF upon admission and being treated with medications, including diuretic therapy (either intravenous diuretics or augmentation of oral diuretics), vasodilators, inotropic support, or ultrafiltration for HF during admission. All end points followed the definitions of the Academic Research Consortium (13).
Statistical Analyses
Continuous variables are presented as means ± standard deviations or medians with interquartile ranges and compared using an unpaired t-test or Mann-Whitney rank sum test. Categorical variables are expressed as numbers with percentages and compared using Pearson's chi-square test or Fisher's exact test. Mortality and HHF were assessed using Kaplan-Meier curves according to the LVDF score. A multivariate Cox regression model was used for each of the above-mentioned cut-offs, with covariates that had p < 0.05 on univariate analysis or had predictive value (age ≥ 65 years, male sex, previous MI, estimated glomerular filtration rate (eGFR) ≤ 60%, LVEF, cardiogenic shock, LV end-diastolic volume index, LV end-systolic volume index, and LV geometry). To compare the predictive abilities of LVDF scores and LVEF for mortality and HHF, receiver operating characteristic (ROC) curve analysis and DeLong's test were performed. In addition, comparisons of all-cause mortality between the LVDF score and LVEF across exploratory subgroups of interest were assessed using ROC curves. For the ROC curves, landmark analyses were used to compare LVDF scores and LVEF before and after 30 days of follow-up, because the 30 days following primary reperfusion is a critical period in which the greatest degree of cardiac remodeling occurs (14).
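The ROC comparison can be illustrated with a minimal empirical AUC, computed via its Mann-Whitney interpretation (probability that a randomly chosen event case scores higher than a non-event case). The toy data are invented purely for illustration; LVEF is negated so that higher values indicate higher risk, and the DeLong significance test itself is not reproduced here.

```python
def auc(scores, labels):
    """Empirical ROC AUC: fraction of (event, non-event) pairs in which
    the event case has the higher score; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy cohort: label 1 = death. Illustrative values only.
labels = [0, 0, 0, 1, 1, 1]
lvdf   = [0, 1, 1, 2, 3, 4]            # LVDF score, higher = worse
lvef   = [45, 55, 60, 50, 40, 35]      # LVEF (%), lower = worse

print(auc(lvdf, labels))                    # perfectly separates this toy data
print(auc([-x for x in lvef], labels))      # one misordered pair lowers the AUC
```

In this contrived example the LVDF score separates the outcomes perfectly (AUC 1.0) while negated LVEF misorders one pair (AUC 8/9), loosely mirroring the paper's finding of 0.739 vs. 0.640 on real data.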
All probability values were two-sided, and p-values < 0.05 were considered statistically significant. All statistical analyses were performed using R (version 3.6.0, R Foundation for Statistical Computing, Vienna, Austria; https://www.R-project.org/).

RESULTS

The proportions of LVDF scores were as follows: 23.2% (score 0), 35.2% (score 1), 25.7% (score 2), 10.5% (score 3), and 5.4% (score 4) (Figure 1A). An LVDF score of 1 was the most prominent (35%), with an abnormal septal e' being the most common feature. E/e' > 15 showed an increasing pattern as the LVDF score increased (Figure 1B). The baseline demographics, final diagnosis, and risk factors varied significantly with the LVDF score (Table 1).

FIGURE 2 | Clinical outcomes according to LVDF scores (A) and adjusted HR plot for all-cause mortality and hospitalization due to HF (B). All-cause mortality, cardiac death, and heart failure (HF) rehospitalization rates increased in a stepwise fashion from 2.3% (LVDF score 0) to 44.5% (LVDF score 4) (p < 0.001). Higher LVDF scores had incrementally higher adjusted HRs for all-cause mortality and hospitalization due to HF. CI, confidence interval; HF, heart failure; HR, hazard ratio; LVDF, left ventricular diastolic function.

A total of 2,030 patients with a mean age of 64.6 ± 12.6 years, including 1,471 males (72.5%), were included in this study. Co-morbidities such as hypertension and diabetes mellitus were found in 52.7% and 41.0% of patients, respectively. As the LVDF score increased, eGFR decreased while N-terminal pro-brain natriuretic peptide levels increased (p < 0.001). A second-generation drug-eluting stent was the most frequently implanted device (85.1%), and the mean number of stents was 1.4 ± 0.9. Most patients were receiving aspirin, a P2Y12 inhibitor, an ACE inhibitor or ARB, a beta-blocker, or a statin.
Echocardiographic Characteristics
Based on the left ventricular mass index (LVMi) results and relative wall thickness, LV hypertrophy (LVH) was observed in 34.2% of patients (concentric: 12.9%, eccentric: 21.3%; Table 2). The prevalence of LVH increased as the LVDF score increased. The mean LVEF was 55.0 ± 11.3%. As the LVDF score increased, the LVEF decreased and the WMSI increased. The higher the LVDF score, the higher the LAVI, septal E/e', and TR velocity, but the lower the septal e'.
In landmark analysis, LVEF was the most predictive parameter for all-cause mortality within the first 30 days of follow-up (AUC: 0.801 vs. 0.704, p = 0.045), but between 30 days and 3 years of follow-up, the LVDF score was the better predictor (AUC: 0.739 vs. 0.640, p < 0.001) (Figure 5).
DISCUSSION
This study examined the construct validity of the unified LVDF algorithm by demonstrating the ability of a simplified LVDF score to outperform LV systolic function in predicting long-term clinical outcomes in 2,030 patients with AMI. Patients with high LVDF scores had a significantly higher risk of all-cause mortality or readmission for recurrent HF than patients with low LVDF scores, which was consistently observed even after adjusting for baseline differences. The LVDF score performed significantly better in predicting all-cause mortality and readmission for recurrent HF than LVEF. Subgroup analysis showed that LVDF scores performed significantly better than LVEF in patients with STEMI, LVEF ≥ 50%, Killip class < 3, abnormal LV geometry (LV remodeling or LVH), and non-RCA target vessels. In landmark analysis, the LVDF score was better at predicting all-cause mortality than LVEF during long-term follow-up (30 days to 3 years).
Predicting Clinical Outcome Using the LVDF Score in Patients With Acute Myocardial Infarction
Previous studies have shown that, under the 2016 ASE/EACVI guidelines, assessment of diastolic function was a strong independent predictor of outcomes after MI (15,16). However, these studies excluded indeterminate variables and shock groups, had relatively small study populations, and had limitations that made them difficult to apply in a clinical setting. In contrast, the present study included patients with AMI with cardiogenic shock and indeterminate variables and found that the prognosis worsened as the LVDF score increased. Additionally, in the distribution of diastolic function, the most common abnormality at LVDF score 1 was septal e' < 7 cm/s; at LVDF score 2, the proportion with E/e' > 15 increased, and at LVDF score 3, the proportion with TR velocity > 2.8 m/s increased. Among the individual LVDF parameters, the E/e' ratio and TR velocity were the best predictors of all-cause death and HHF. Therefore, clinical outcomes can be predicted simply by applying the LVDF scoring system, which is expected to be helpful in routine clinical practice for patients with AMI.
Comparison of LVDF Score to LVEF for Mortality Prediction
Prognosis of LV systolic dysfunction after AMI has been a major research focus for several decades (3). The insights from these studies have led to several therapeutic interventions that have improved outcomes. In addition to depressed systolic function, clinical and radiological evidence of HF is a consistent and powerful predictor of outcomes in patients with AMI (17). However, there have been no studies comparing mortality prediction between the two predictors, namely LV systolic function and the current LVDF guidelines, in patients with AMI. In the present study, the LVDF score was found to be superior to LVEF in predicting mortality, especially in patients with AMI with STEMI, preserved LVEF (≥50%), a hemodynamically stable state (Killip class < 3), abnormal LV geometry, and non-RCA target vessels. LVEF is a strong predictor of clinical outcomes; however, since each LVDF parameter, including septal e', E/e', TR velocity, and LAVI, is known as a strong independent prognostic factor in HF and other diseases (18)(19)(20)(21), the combination of these four can be judged a more powerful predictor of mortality. In particular, abnormal LV geometry (increased wall thickness and/or reduced end-diastolic volume), which is a confounder for LVEF, makes it possible for LVEF to be unaltered despite significantly reduced LV function (22). In this study, analysis of the LV of patients with AMI showed that normal LV geometry was present in fewer than half, and the total LV mass index was 101.6 ± 26.8 g/m², thicker than the LV wall of the normal population (69.9 ± 8.9 g/m²) (23); thus, LVDF is considered a better predictor than LVEF.

FIGURE 5 | Landmark analyses of the ROC curve to compare the LVDF score and LVEF before and after the 30-day follow-up period. In the landmark analysis, LVEF was the strongest predictor of all-cause mortality during the short-term follow-up; however, the LVDF score was a better predictor of mortality than LVEF during the long-term follow-up. AUC, area under the receiver operating characteristic curve; LVDF, left ventricular diastolic function; LVEF, left ventricular ejection fraction; ROC, receiver operating characteristic.
The Importance of Evaluating LV Diastolic Function
There are prognostic reasons why LVDF evaluation is clinically important. From a diagnostic point of view, elevated LV filling pressure is an important cause of HF in patients with AMI (24). Several studies focusing on optimal non-invasive assessment of left ventricular filling pressures have compared natriuretic peptide levels and Doppler indices against mean wedge pressure. These studies showed that Doppler had a stronger correlation with mean wedge pressure and that the E/e' ratio tracked changes in mean wedge pressure, whereas B-type natriuretic peptide levels did not (25). Similar results were seen in patients with ambulatory HF, in whom the E/e' ratio successfully tracked changes in LA pressure (26). In this study, there was no significant difference in MAP among the groups (p = 0.221), but the E/e' ratio increased with the LVDF score (p < 0.001), leading to a decrease in coronary perfusion pressure, which is thought to be the cause of the increased long-term mortality and readmission for recurrent HF.
Interestingly, LVEF was superior to the LVDF score for predicting all-cause mortality during the short-term follow-up period (<30 days), but the LVDF score was superior to LVEF during the long-term follow-up (30 days to 3 years). As a consequence of AMI, the measurement of changes in LV size, shape, and the thickness of both infarcted and non-infarcted segments of the ventricle, collectively referred to as ventricular remodeling, is important in evaluating ventricular function and prognosis (27); however, several studies have shown that measurement of lesion size and left ventricular systolic function (28,29) or alterations in post-infarction left ventricular remodeling (30) do not explain why patients with AMI have an increased tendency to develop long-term adverse outcomes. Therefore, in the acute stage, assessing prognosis based on LVEF is reasonable, and it is desirable to assess the prognosis using the LVDF score for patients undergoing long-term follow-up.
Study Limitations
This study had several limitations. First, despite its large sample size and granular data, this study had the potential for unmeasured confounders and missing data. Second, echocardiography-based estimates of hemodynamic measurements were used: the E/e′ ratio to estimate LV filling pressure and TR velocity for pulmonary artery pressure, which are indirect measures. However, these correlate well with invasive measurements (31), and in clinical practice, diastolic function is evaluated mainly using echocardiography. Third, the 2016 ASE/EACVI guidelines recommend using the average of the lateral and septal e' velocities to measure LVDF, since these values differ significantly in certain situations such as left bundle branch block, regional wall motion abnormality, or significant right ventricular dysfunction; however, only the septal e' velocity was used in this study. There is no evidence that the average e' velocity provides a more reliable assessment of diastolic function (7). Moreover, septal E/e' was found to be associated with poor outcome in the TOPCAT trial (32), whereas lateral E/e' did not differ between patients with heart failure with preserved ejection fraction who were and were not hospitalized in I-Preserve (33). In the present study, comparison of the predictive performance of the individual LVDF parameters showed that the E/e' ratio was the best predictor of all-cause death and hospitalization due to HF (Supplementary Figure 4). Subgroup analysis was performed for factors that could affect septal e' (WMSI ≥ 2, left bundle branch block, or TDI RV s' < 9.5 cm/s) (Supplementary Figure 5), and consistent results were obtained.
CONCLUSION
The present study demonstrated that the LVDF score is a significant predictor of mortality and HHF in patients with AMI. The LVDF score can be useful in the risk stratification of patients with AMI; thus, careful monitoring and management should be provided to patients with AMI with higher LVDF scores.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The study complies with the Declaration of Helsinki. The studies involving human participants were reviewed and approved by the Chonnam National University Hospital Institutional Review Board (CNUH 05-49). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
SB performed the statistical analysis. SB and HY drafted the manuscript. KK, HK, HP, JC, MK, YK, YA, JC, and MJ reviewed/edited the manuscript and contributed to the interpretation of data. KK conceptualized the overall study design and supervised all aspects of the study and revised the manuscript critically. All authors have read and approved the manuscript.
FUNDING
This study was supported by a grant from the Chonnam National University Hospital Biomedical Research Institute (BCRI19039).
Landscape-based multifunctional plantation forest management
This study aims to analyze the characteristics of land use, land tenure, and the role of stakeholders in each land-use unit, and to formulate a multifunctional landscape setting model for the PT Inhutani I Plantation Forest Management Unit Gowa industrial plantation forest that is adaptive to physical conditions, land tenure, and stakeholder interests. Landscape-based multifunctional plantation forest management is analyzed in terms of land use, land tenure, and stakeholders. Land use covers physical conditions such as land cover, altitude, soil type, geology, slope, and climate; land tenure covers land ownership rights, both on the part of the industrial plantation forest manager and of the community around the area who have interests or activities there; and stakeholders comprise the related institutions and parties. The results of the study show that over the last 5 years land use has changed in several places from forest to rice fields or open areas; this is also evidenced by the land tenure results, which show that people depend heavily on land around the area to meet their daily needs. Based on this, a multifunctional landscape setting model was created that emphasizes the use of non-timber forest products (NTFPs) and will involve more of the community in management.
Introduction
Forests are ecosystems that remain a major topic of attention and study by researchers. This cannot be separated from the role of forests, which are very important for human life and at the same time very vulnerable to damage, both natural and caused by human activities. Forest area continues to shrink: during 2018-2019 there was 462,458 ha of deforestation in Indonesia (Directorate General of Forest Planology and Forest Governance 2020), while according to FAO (2020), between 2010 and 2020 deforestation amounted to 4.7 million ha/year worldwide [1].
Not all forest areas in Indonesia are forested. As stated by the Ministry of Environment and Forestry in the 2019 Land Cover Recalculation Book, only 46.3% of the forest area is actually forested, with the total area comprising 43% conservation and protected forest, 22% fixed production forest, 24% limited production forest, and 11% convertible production forest. Meanwhile, the proportions of primary natural forest, secondary forest, and plantation forest are 48.1%, 39.7%, and 4.6%, respectively. The share of plantation forests is still very small, even though plantation forests have a more flexible and wider designation than natural forests, especially those with good potential to be managed together with the communities around the forest.
The physical condition of the land includes soil type, pH, and temperature. Topography means the shape or condition of the land surface, usually marked by differences in altitude. The physical condition and topography of the land have an important influence on the distribution and growth of a plant species. In the development of plantation forests, physical and topographical conditions have an important influence on the selection of species and planting patterns [2].
Land tenure or land ownership is one of the important aspects in the management of a plantation forest; analysis of this aspect can reduce land conflicts in the future. Currently there are many land conflicts between communities and companies, which can occur because the mapping of land ownership is still poor. Land tenure is a legal term for land tenure rights, not merely the fact of land possession, because someone may occupy land without always having the right to control it [3]. According to Kamilah and Yuliana (2016), on customary land, for example, although individual rights are recognized, the individual does not have the right to transfer the land to someone else freely without the involvement of the family and community where the land is located [4].
Stakeholder relations are attachments based on particular interests; thus, discussing stakeholder theory means discussing matters relating to the interests of various parties [5]. Mardikanto (2014) mentions that the basic premise of stakeholder theory is that the stronger the corporate relationships, the better the corporate business will be [6]; conversely, the worse the corporate relationships, the more difficult it will be. Fauziyah (2014) states that, in general, stakeholders can be divided into three groups: main stakeholders, supporting stakeholders, and key stakeholders [7].
The term landscape is generally understood as a landform that has a unique character resulting from the action and interaction of various factors, both natural and arising from human activities, so that this uniqueness needs to be preserved (European Landscape Convention). Citing various sources, Arifin et al. (2009) put forward the notion of landscape, which comes from the words 'land' and 'scape', referring to an area with the totality of its characters [8].
The goal of integrated landscape management is to go beyond a narrow sectoral focus toward a more holistic way of managing natural resources at the landscape scale, to balance competing land uses and manage ecosystems sustainably. The interrelated elements of a landscape can be managed to provide all the goods and services needed. The elements of a forest landscape show the relationships among various land uses and the importance of developing sustainable natural resource management approaches, especially in multifunctional plantation forests [3].
Research implementation method
This research was conducted in the industrial plantation forest (HTI) area of the PT Inhutani I Gowa Plantation Forest Management Unit, in Block IV in Belapungranga, Belabori, Borisallo, and Lanna Villages, Parangloe District, Gowa Regency, South Sulawesi Province. The research method used is a mixed methodology, a combination of qualitative and quantitative approaches. The qualitative approach explores data and information on social aspects related to HTI landscape management, including land tenure and the role of stakeholders in each land-use unit in the management of the HTI area; it is used to complement the data from the quantitative approach.
Stakeholder analysis
Stakeholder analysis aims to identify stakeholders who are directly or indirectly involved in the management of the land use and landscape of the selected HTI block, covering land users, land management institutions, and local community institutions in forest management. The results of the stakeholder analysis are used to formulate a multifunctional landscape model for the HTI area of PT Inhutani I Plantation Forest Management Unit Gowa that is adaptive to the social conditions of the local community.

The area outside the plantation forest proper continues to decline and its water bodies remain stagnant, while the land cover of shrubs, gardens, rice fields, open areas and settlements has increased significantly from year to year. Rice fields and gardens increased markedly between 2018 and 2019: rice fields from 127.63 ha to 134.97 ha and gardens from 7.33 ha to 14.64 ha, indicating that in that period the community entered the area and worked the land (okupasi) into rice fields and gardens. Settlement cover also increased, though less markedly; the largest rise occurred in 2017, from 38.86 ha to 40.19 ha. Open-area cover reflects harvesting by PT Inhutani I UMHT Gowa in the 2016 and 2017 annual work plans (RKT); from 2016 to 2018 it changed little, because Inhutani I UMHT Gowa cut only certain species under a selective logging system. From 2019 to 2020, however, open-area cover rose very significantly, to 72.25 ha, most likely due to clearing of land to be converted into rice fields or gardens to meet the needs of the people living around the area.
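The year-over-year land-cover changes quoted above can be made concrete with a small calculation. The following sketch is illustrative only (not part of the study's analysis), using the hectare figures cited in the text:

```python
# Illustrative sketch: year-over-year change in land-cover area,
# using the hectare figures quoted in the text.

def yearly_change(series):
    """Return {(year_a, year_b): delta_ha} for consecutive years."""
    years = sorted(series)
    return {(a, b): round(series[b] - series[a], 2)
            for a, b in zip(years, years[1:])}

areas_ha = {
    "rice_fields": {2018: 127.63, 2019: 134.97},
    "gardens":     {2018: 7.33,  2019: 14.64},
}

for cover, series in areas_ha.items():
    print(cover, yearly_change(series))
```

Under these figures, both rice fields and gardens gained a little over 7 ha in a single year, which is the basis for the text's claim of significant community occupation of the area.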
Physical condition of land cover in each unit
a. Conditions in unit 1 are relatively uniform across land covers: altitude ranges from 25-200 masl with slopes from slightly flat (1-3%) to wavy (8-15%) within the PT Inhutani I UMHT Gowa area, so an agroforestry pattern is suggested as the best use.
b. In unit 2, the plantation forest land cover lies at 50-200 masl, while the other covers lie at 50-150 masl. Slopes range from slightly flat (1-3%) to bumpy (8-15%). Unit 2 contains a water body at 100-125 masl in the area of the Bantimurung Parangloe waterfall, which has potential as a tourist destination, although a control dam is still needed because of uncontrolled water discharge. The garden, settlement, rice field and open-area covers have Typic Eutrudepts, Aquic Haplustepts and Typic Haplustepts soils. Given these physical conditions, an agroforestry pattern is advisable for the occupied areas of unit 2.
c. Unit 3 lies at 150-250 masl with slopes from slightly flat (1-3%) to steep hills (25-40%), with complete geology and soil types. These conditions make unit 3 very suitable for agroforestry and silvopasture patterns and for planting staple species such as eucalyptus, acacia, agathis and other species suited to the altitudes in this unit.
d. Unit 4 differs somewhat from the previous three units: altitude ranges from 200-450 masl, slopes are dominated by steep hills (25-40%) and bumpy terrain (8-15%), with BYN geology and soils including Typic Eutrudepts, Typic Hapludalfs, and Aquic Haplustepts. Under these conditions, agroforestry patterns are limited in this unit; in particular, rice fields located at unsuitable altitudes should be relocated or converted to a land cover more appropriate for conservation.
e. Unit 5 is physically similar to unit 4 but at higher altitudes of 225-550 masl, with slopes again dominated by steep hills (25-40%) and bumpy terrain (8-15%).
Communities around the industrial forest area
The community uses and manages the land, even though they know it lies within the HTI area of the PT Inhutani I Gowa Plantation Forest Management Unit, due to economic factors. They work the land as gardens, rice fields or shop buildings to meet their daily needs and earn a living. In addition, the lack of alternative livelihoods makes it difficult for the community to improve their welfare.
Interviews with 55 community respondents in Belapungranga, Belabori, Borisallo, Bontokassi and Lanna Villages, Parangloe District, Gowa Regency, South Sulawesi Province, found that 52 respondents use and manage land in the concession area: 12 respondents in unit 1, 13 in unit 2, 9 in unit 3, 8 in unit 4, and 5 each in units 5 and 6. The remaining three respondents do not use or manage land in the concession area at all.
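As a quick consistency check on the interview figures above (an illustrative tally, not part of the study's methods), the unit-by-unit counts sum to the 52 land users reported, leaving 3 of the 55 interviewees who do not work land in the concession:

```python
# Illustrative tally of the interview figures quoted in the text.
respondents_per_unit = {1: 12, 2: 13, 3: 9, 4: 8, 5: 5, 6: 5}

total_interviewed = 55
land_users = sum(respondents_per_unit.values())
non_users = total_interviewed - land_users

print(land_users, non_users)  # 52 land users, 3 non-users
```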
The people who use and manage land in unit 1 have worked it since the 1980s; some plots were initially intercropped with PT Inhutani I UMHT Gowa, and some were inherited from parents and are still worked today. All 12 respondents in unit 1 who use and manage land in the concession area hold proof of ownership in the form of land-tax receipts (PBB). The communities in units 2 to 6 only began using and managing land in the 2000s; in units 3 and 4 in particular, some only started between 2019 and January 2021. In addition, most communities in units 2 to 6 hold no proof of land ownership: in unit 2, only 3 of 13 respondents held PBB receipts, while in unit 6 none of the 5 respondents held any proof of ownership, even a PBB receipt. Figure 2 also shows that the number of people using and managing land in the concession area decreases from the earlier to the later units. These communities generally already know that the land they manage lies within the concession area. As the diagram shows, the people who use and manage land in the concession area hold no certificate, only proof of land-tax payment (PBB), and some hold no proof of land ownership at all: 21 respondents stated that they regularly pay PBB on the land they cultivate each year, while 31 respondents admitted they held no proof of land ownership in any form. The respondent data show that those working in unit 1 all manage the land as gardens, with one respondent managing gardens and rice fields at the same time.
In unit 2, only three respondents manage the land as gardens, while the other ten use land along the main road to construct buildings, which are used as shops or selling kiosks. People in units 3 and 4 generally manage the land as gardens and rice fields, and some have built houses on the same plots as their gardens. In unit 5, people work rice fields and gardens, and one respondent manages gardens while also using the land for grazing livestock. Livestock herding is also carried out in unit 6, but there are no longer any gardens there, only rice fields. The lack of employment opportunities matching the community's education and skills also contributes to their low welfare, while ignorance of the concession area's boundaries, differing perceptions and unclear boundary markings are further factors leading people to work the land.
Stakeholders
Stakeholders are groups with a concern and interest in an issue, identified by considering their position and influence. A total of eleven stakeholders were identified as involved and playing a role in the management of PT Inhutani I UMHT Gowa, as listed in Table 3. Of the respondents, 38 answered that they already knew the boundaries of the HTI area of the PT Inhutani I Gowa Plantation Forest Management Unit, which is licensed for the use of forest products by PT Inhutani I, while 14 respondents did not yet know where the concession boundaries lie (Figure 4). Community involvement in HTI management was lower: only 13 respondents stated that they had been involved in PT Inhutani I activities such as boundary demarcation and planting. The stakeholders play a role in industrial plantation forest management through field operational activities: structuring work areas, stand inventory, clearing of forest areas, seed procurement, harvesting, marketing, environmental management, and development and empowerment of communities around the forest.
Multifunctional plantation forest management
Based on the division of units on the map (Figure 5: landscape-based multifunctional plantation forest management map), before the landscape setting the land cover in the form of rice fields was spread almost evenly across each unit; in the landscape setting it was concentrated in areas that are slightly flat to wavy, at altitudes below 400 masl, which suits units 1, 2 and 3 as well as parts of units 4 and 5. As for unit 6, because it is an upstream area with altitudes up to 700 masl and slopes dominated by >40%, the suitable
Bilateral post traumatic facial nerve palsy presenting as Dysarthria: A case report
Bilateral traumatic facial nerve palsy (FNP) is rare and can present with distressing features. We report a 23-year-old male final year medical student with a 10-day history of speech difficulty following a passenger motorcycle road traffic accident. Physical examination showed a fully conscious young man whose only neurological deficit was bilateral lower motor neuron facial nerve palsy (House and Brackmann grade IV) and difficulty pronouncing plosives. A high-resolution temporal bone CT showed a right longitudinal temporal bone fracture. There was no temporal bone fracture on the left side. Brain MRI was normal. He had complete recovery of facial nerve function on conservative management 6 months after the injury.
Introduction
Bilateral facial nerve palsy (FNP) is rare, representing less than 2% of all cases of FNP [1]. Bilateral traumatic FNP is even rarer and can cause a diagnostic challenge due to lack of facial asymmetry as seen in unilateral FNP. Affected individuals may be devastated due to emotional and psychological distress as well as social limitations associated with it [2]. It can present with distressing features such as facial asymmetry, drooling of saliva, incomplete eye closure, exposure conjunctivitis, corneal ulceration, synkinesis, feeding difficulties, and speech difficulties, which will lead to functional and aesthetic disability especially if full recovery is not achieved [3].
We report a rare case of bilateral facial nerve palsy presenting with dysarthria.
Case report
We present a 23-year-old final-year male medical student with a 10-day history of speech difficulty that began three days after a road traffic accident. He had transient loss of consciousness following the accident. There was no history of bleeding from any craniofacial orifice, seizures, limb weakness, hearing or swallowing difficulties, tinnitus or vertigo. There was no associated impaired lacrimation in either eye, hyperacusis, or taste disturbance in the anterior two-thirds of the tongue. However, there was a history of dryness of the tongue. He received initial care at a local hospital and was later referred to our facility for evaluation and treatment.
Physical examination showed a young man with vital signs within normal limits. He was fully conscious. He had dysarthria with difficulty pronouncing plosives. His memory and other higher cerebral functions were preserved. He had bilateral lower motor neuron FNP (House-Brackmann grade IV), evidenced by an expressionless face and inability to close both eyelids. Hearing was grossly preserved bilaterally on Weber's and Rinne's tests. Pure-tone audiometry was not done.
A high-resolution CT (HRCT) of the temporal bone showed a right longitudinal temporal bone fracture. There was no temporal fracture on the left side (Figure 1). Brain MRI showed no parenchymal brain injury, space-occupying lesion or cerebello-pontine angle lesion. Electrodiagnostic studies were not done due to unavailability. Hematological investigation results were within normal limits.
He was managed with high-dose steroids and corneal protection measures. Complete recovery of facial nerve function, with significant improvement in speech, was observed 6 months after the injury (Figure 3), with the left side recovering faster than the right (Figure 2).
Discussion
Bilateral FNP presenting as dysarthria has rarely been reported in the literature. The aetiologies of facial nerve palsy include infections, tumours, head injuries, degenerative diseases, vascular diseases, and idiopathic causes [4]. Trauma is a very rare cause of bilateral FNP. Traumatic FNP is often associated with temporal bone fractures, classified as longitudinal (70-90%) or transverse [1] according to their orientation relative to the long axis of the petrous pyramid. However, the incidence of facial nerve trauma is higher with transverse fractures than with longitudinal ones (40-50% vs 15-20%) [3]. FNP without a petrous temporal bone fracture may be explained by facial nerve edema [1], as no fracture was seen on the left in the index case (Figure 1).
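The two sets of percentages quoted above combine in a perhaps counter-intuitive way: although transverse fractures injure the facial nerve more often per fracture, longitudinal fractures are so much more common that they still account for most traumatic FNP cases. A back-of-envelope sketch (our own illustration, taking midpoints of the quoted ranges, not figures from the cited studies) makes this explicit:

```python
# Back-of-envelope illustration using midpoints of the ranges in the text:
# fracture-type shares (longitudinal 70-90%, transverse the remainder) and
# per-fracture facial-nerve injury rates (15-20% vs 40-50%).
share       = {"longitudinal": 0.80, "transverse": 0.20}
injury_rate = {"longitudinal": 0.175, "transverse": 0.45}

# Expected contribution of each fracture type to traumatic FNP cases.
contribution = {t: share[t] * injury_rate[t] for t in share}
total = sum(contribution.values())
fnp_fraction = {t: contribution[t] / total for t in share}

print(fnp_fraction)
# Under these midpoint assumptions, longitudinal fractures still account
# for roughly 61% of traumatic FNP cases despite the lower per-fracture risk.
```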
There are several factors contributing to the facial nerve's susceptibility to trauma, including its length, course, and vascular supply [5]. The FN emerges from the brainstem and enters the internal auditory canal to follow a long route through the petrous bone, traversing the narrow fallopian canal. Within the fallopian canal the FN is subdivided into three segments: labyrinthine, tympanic, and mastoid. The labyrinthine segment is the narrowest and shortest portion of the fallopian canal, where the FN is most vulnerable to trauma. This portion is also a watershed transition zone between blood supply from the vertebral and carotid artery systems. In addition, the FN occupies more than 80% of the available space in the labyrinthine segment, where edema is most likely to compromise facial nerve function [3].
Traumatic FNP can be immediate or delayed (>48h). It has been widely accepted that immediate FNP results commonly from a direct nerve impingement or transection from temporal bone fracture, and surgical exploration appears to be the treatment of choice in such instance. Delayed FNP is usually due to the pressure effect from edema within the fallopian canal and is best managed medically [6].
A HRCT of the temporal bone, which was done in the index case (Figure 1), is a useful diagnostic tool for traumatic facial nerve palsy, as it can visualize the fracture line and its relationship to the fallopian canal. The role of MRI should no longer be considered merely complementary in the management strategy. MRI is useful for direct visualization of the injured FN, enabling the detection of neural ischemia or edema or an intraneural hematoma [7,8]. MRI may also be useful to detect neoplasms compressing the seventh cranial nerve or cerebello-pontine angle tumours. MRI in this case did not detect any lesion to explain the FNP other than direct trauma from the fractured bone and/or facial nerve edema.
Electrodiagnostic studies may be used in FNP to assess prognosis [9]. However, they were not used in the index case due to unavailability.
Although the management of post-traumatic facial nerve palsy remains controversial, it is widely accepted that surgical intervention is indicated for patients with acute-onset, complete palsy, whereas patients with delayed-onset or incomplete palsy are usually treated with systemic steroids. Steroids, used in the management of the index case, have also been reported to hasten facial nerve recovery in traumatic cases. Many researchers advocate no exploration for non-penetrating trauma in intra-temporal facial nerve palsy, although some authors advocate late surgery in cases of non-recovery within six months after trauma [10]. However, there is still a lack of high-level, evidence-based studies to back up this recommendation.
Important prognostic factors are the severity of FNP and the timing of onset, with the degree of palsy having a greater influence on recovery of function than the time of onset. A pre-intervention House-Brackmann grade greater than IV, though backed by low-level evidence, has been associated with a worse prognosis [6]. The index case had a House-Brackmann grade of IV and was still able to achieve complete recovery of facial nerve function with medical management (Figure 3).
Conclusion
Bilateral FNP can be trauma-induced and may rarely present with disabling dysarthria even in the absence of extensive skull base fractures. Conservative management with steroids can result in return of complete facial nerve function.
Doctor-patient communication during the Corona crisis – web-based interactions and structured feedback from standardized patients at the University of Basel and the LMU Munich
Background: Due to the pandemic-related restrictions in classroom teaching at the medical faculties of the LMU Munich and the University of Basel, teaching methods with standardized patients (SPs), were shifted to a digital, web-based format at short notice as of April 2020. We report on our experiences with the WebEncounter program, which was used for the first time in German-speaking countries. The program enables one-to-one encounters between SPs and students. Students receive an invitational email with brief instructions and background information on the case. SPs use case-specific criteria that are compliant with the learning objectives for digital evaluation during the encounter. A feedback session takes place immediately following the encounter. The SPs address the didactically relevant sections and can illustrate them with the corresponding video sequences. Finally, the students receive the links to the video recordings of the encounter and the feedback unit by email. Project description: The aim of this pilot study was to analyze the practicability of the program and its acceptance by students and SPs. In addition, we examined whether the operationalization of the learning objectives in the form of assessment items has an impact on the content and thematic development of courses in the area of doctor-patient communication. Methods: To implement the program, patient cases previously tested in communication seminars in Munich and Basel were rewritten and case-specific evaluation criteria were developed. SPs were trained to use the program, to present their patient figure online and to give feedback. The experience of those involved (faculty, SPs and SP trainers, students) in implementing the program was documented at various levels. The frequency and causes of technical problems were described. Student results on the patient cases and on the feedback items were collected quantitatively and, where possible, supplemented by free-text statements. 
Results: Data from 218/220 students in Basel and 120/127 students in Munich were collected and evaluated. Students were very satisfied with the patient cases, the encounter with the SPs and their feedback: 3.81±0.42. SPs experienced the training as an increase in their competence and the structured feedback as particularly positive. The training effort per SP was between 2.5 and 4 hours. The results show predominantly normally-distributed, case-specific sum scores of the evaluation criteria. The analysis of the individual assessment items refers to learning objectives that students find difficult to achieve (e.g. explicitly structuring the conversation). Problems in the technical implementation (<10 percent of the encounters) were due mainly to the use of insufficient hardware or internet connection problems. The need to define case-specific evaluation criteria triggered a discussion in the group of study directors about learning objectives and their operationalization. Summary: Web-based encounters can be built into the ongoing communication curriculum with reasonable effort. Training the SPs and heeding the technical requirements are of central importance. Practicing the virtual consultation was evaluated very positively by the students – in particular, the immediate feedback in the protected dialogue was appreciated by all involved.
Introduction
In the course of the "Corona crisis", face-to-face encounters with students in university classroom settings, particularly with standardized patients (SPs), were prohibited. We take this opportunity to report on our experiences with a program, not previously used in the German-speaking countries, which despite such bans enables the use of SPs via a web-based platform: WebEncounter [https://enhancedlearn.azurewebsites.net/]. WebEncounter was developed at Drexel Medical School in Philadelphia, USA, and enables face-to-face encounters between students and SPs via the internet. Several publications from the English-speaking countries demonstrate the benefits of this and similar learning and teaching aids in the training of students and in the further training of residents and nurses [1], [2], [3], [4], [5], [6]. The program belongs to the group of teaching aids usually used as part of blended learning, combining direct interactions among students in small groups or with real patients and internet-based computer-aided learning. The use of SPs and their benefits in medical teaching is well documented [7], [8], [9], [10] and common at most German-speaking universities [11]. One particular feature of using SPs, especially in training communication and social skills, is the possibility of immediate "patient" feedback after a learning unit [12], [13]. The effectiveness of feedback depends on how closely it relates to the behavior the student just demonstrated and whether it refers to a standard of desirable behavior that is familiar to the student. The first point relates to the implementation of the feedback and the second to its content. With the aim of a close temporal succession of behavior and feedback, a procedure was developed under the term "Rapid Cycle Deliberate Practice" that uses the advantages of structured, timely and concrete feedback on learning success (e.g. [14], [15]).
In order for the feedback recipient to know what he or she should do differently next time, it must be clear what the desired behavior is; the student, and the teacher, must be familiar with such a standard. de Ridder et al. [16] emphasize this when they write: "Feedback is specific information about the comparison between a trainees' observed performance and a standard, given with the intent to improve the trainees' performance." Determining desirable behaviors as specifically as possible is not trivial [17]. First, the faculty is required to define and teach learning objectives in the communication curriculum so clearly that students (can) know what is expected of them. These learning objectives must then be applied to the specific case, i.e. formulated in such a way that they are represented in the conversation with the SP and can be assessed if they occur. The persons who are to give the feedback must be trained according to these guidelines. When implementing these requirements, particularly in the face-to-face encounter between students and SPs, a further factor plays a decisive role. If the feedback does not come from a third party (e.g. an expert or other student present) but is provided by the SP, then it is the SP's responsibility not only to act like a credible patient, but also to simultaneously detect whether and when the given learning goals are achieved in the conversation. This double burden often means that the feedback does not refer to the student's concrete behavior in a certain phase of the conversation, but rather summarizes an overall impression. The platform we are using for the first time in the German-speaking countries addresses these difficulties and counteracts them:
• The buttons for assigning the evaluation criteria are arranged on the screen so that the SP can use them easily during the encounter without losing contact with their role.
• It records the conversation along with time stamps marking moments in which learning objectives were achieved more or less well.
• It includes a feedback unit immediately following the encounter, in which SPs can import the video segments they have marked with time stamps as didactically valuable ("teachable moments" [18], [19]).
Project description
The aim of this pilot project is to evaluate the practicability of the new online platform and to obtain an impression of its acceptance by students and SPs. In addition, it seemed interesting to identify the particular challenges involved in implementing such a program and to find out whether the results can be fed back into the learning and teaching process in a feedback loop. A classic implementation study in the strict sense (e.g. [20]) could not be carried out due to the urgency of finding ad hoc alternatives to classroom teaching. At the two institutions involved, the following tasks had to be completed within 4 or 6 weeks: training of persons as administrators (for data management and scheduling of SP training and student-SP encounters), training of SPs, formulation of instructions for all involved, revision of the role scripts for patient cases, and redefinition of the assessment criteria to facilitate structured feedback. Participation in the online encounter with SPs replaced the obligatory encounters with SPs in traditional classes planned in the curriculum. One online consultation hour in WebEncounter was agreed upon for each student. The analysis and evaluation of the pilot test was based on the project directors' observations, the anonymized technical and content-related feedback from students and SPs retrieved from WebEncounter, and the students' performance.
Study participants
The investigation was conducted at both locations between April and June 2020. Data from students in the 3rd year of study at the LMU in Munich (N=122; 73 f, 59 m) and from the 2nd year of study in Basel (N=220; 154 f, 66 m) were considered. Students at both locations were randomly assigned to the individual case situations.
Data collection, data anonymization, consent to study participation
Medical students in Basel are informed at the beginning of their studies that video recordings are a part of training and must be treated as confidentially as patient information. In Bavaria, article 10, paragraph 3, clause 2, clause 1 BayHSchG allows the collection of evaluation data for the purpose of quality control in teaching. In addition, while booking their courses for the summer semester 2020, the students were informed about the collection and use of data and video recordings, and gave their signed consent before booking the class. At the University of Basel, research and publications that serve to improve teaching are permitted; in this particular case, the local ethics committee (EKNZ) decided that the study was not subject to the Human Research Act (Art. 2) and therefore no formal approval/assessment was necessary. The video files are saved under a randomly generated name (example: nejzh3aGquqK_1br4yt7qyr.mp4) to ensure that the real name of the video file, including the user information, cannot be read out. The student feedback evaluated as part of the evaluation and the scoring-related analyses are anonymized by WebEncounter for the respective query period or for each "patient" case, and summarized as descriptive statistics without personal data.
Development of the case descriptions
At both universities, the teams of actors and trainers selected, from existing patient vignettes, those deemed most suitable for an online consultation. The criteria were: predominantly verbal cues to the underlying diagnoses or to the patients' coping styles, clear assignment of the case histories to specific learning goals, and playability within 8 to 15 minutes. In Basel, two cases were selected: suspected flour-dust allergy and suspected sexually transmitted disease, including exclusion of HIV. Three cases were used in Munich: critical adherence to therapy for Hashimoto's thyroiditis, notification of an HIV infection, and diagnosis of colon carcinoma. A short case description ("door instruction") was developed for each case. In traditional classes with SPs this is usually distributed in advance as brief information or attached to the door of the consultation room, informing the students whom they are going to meet and what their tasks are. In addition, the students received technical instructions as well as medical background information on the case scenario in advance, in order to prepare themselves.
Standardized patients
Twelve SPs took part in the course at each location (University of Basel: 8f/4m; age range 21 to 51 years; LMU Munich: 10f/2m; age range 25 to 65 years). 9/12 SPs in Basel and 6/12 SPs in Munich had previously worked with the cases used in WebEncounter in face-to-face lessons; the other SPs were trained in the content of the cases.
In the training of the SPs, special features of case presentation via camera and with a limited field of view were discussed. Due to the limited field of view in the representation of the patient, non-verbal messages to the student which are expressed, for example, in changes in tension of the entire body had to be transferred to other non-verbal channels (e.g. tension of the upper body, covering the face with both hands, etc.), facial expressions or verbal utterances (see figure 1). During the conversation, the SP sees the student and herself in the upper right corner, as well as the assessment criteria. The use of the assessment items was especially practiced in order to be able to give structured feedback. SPs already trained by the study leaders or the SP trainers took on the role of more or less talented students and then discussed their assessments with the SPs. If they had the impression that the feedback correctly reflected the different degrees of achievement of the learning objectives and that the SPs were also able to express themselves constructively to the "students", the SPs were licensed for their respective case. During their initial training and while working with the students, the SPs were continuously supported in the event of problems in handling the program. A debriefing was offered after a day of teaching. In addition to the case-specific information, instructions for logging into WebEncounter were sent out for each location. The SPs also received written explanations for evaluating the listed scoring items with text examples. The module secretariat was responsible for coordinating the appointments. At the University of Basel, the two interlocutors were invited directly via the WebEncounter platform; at the LMU Munich, due to the initially high frequency of exchange, the invitations to the actors and students were sent by the module secretariat.
Implementation
The students received an invitation email from the program 2 to 5 days before their appointment with a link to the platform and background information as well as brief information on their case. The day before their appointment, they were reminded of the appointment either via WebEncounter or by the administrators. At the agreed time, students could use the link to join "their" SP, who greeted them, explained the process, checked the technology, made sure that the students had read the case information and then conducted the consultation. Afterwards, feedback was offered in which the SPs gave specific feedback based on the assessment criteria, which they could back up with the corresponding video sequences. This option was rarely used by the students and the SPs. The discussions and the feedback units each lasted 12-15 minutes in Basel and 8 minutes in Munich. At the end, the students rated the case and the SP, and received a link to the recording of their consultation and their own feedback session. They were asked to make a note of any technical problems encountered during the session.
Description of the results
We report on the type and frequency of technical problems, the distribution of the total scores by the students per case, the distribution of the values in the individual scoring items and the mean values of the assessment of cases and SPs by the students. If possible, the sum scores are supplemented with examples from the students' and SPs' comments.
Results
Observations from the implementation

Results were obtained from 218 out of 220 participating students in Basel. In the two missing cases, it was not possible to record a meaningful video. In about 5% of the consultations (19/338) students experienced technical problems. The SPs reported a total of 9 events with technical problems (approx. 2.5%). In these cases, either a new appointment was made or the recording was repeated immediately, e.g. after students had switched from a poor WiFi connection to a local hotspot via their cell phone. When checking the entries and the login data, the impression was confirmed that problems were mainly due to poor connection quality. This was partly due to the fact that some students and SPs had ignored the instruction to connect to the program via a wired network access (LAN), or had dialed in with insufficient peripheral devices. In Basel, 12 SPs took part with between 15 and 23 consultations with students each and representing one case only.
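The reported problem rates for Basel follow directly from the counts; the "about 5%" and "approx. 2.5%" figures in the text are rounded versions of these ratios. A trivial check:

```python
def problem_rate_percent(problems: int, total: int) -> float:
    """Share of consultations affected by technical problems, in percent."""
    return 100 * problems / total

# Basel: 19 of 338 consultations with student-reported problems,
# 9 of 338 with SP-reported problems.
print(round(problem_rate_percent(19, 338), 1))  # 5.6
print(round(problem_rate_percent(9, 338), 1))   # 2.7
```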
Results are available from 120 of the 122 participating students in Munich. In the 2 missing cases, the recording of the consultation failed. From the student perspective, there were technical problems in 8 consultations, the SPs reported a total of 4 incidents with technical problems. In these cases, either a new appointment was made or the recording was repeated immediately. When checking the entries and the connection data, it was found that the problems in Munich were due mainly to the connection quality and the failure to follow the instructions. As in Basel, connection and operation problems could be explained by the fact that instructions for using a LAN connection were not followed or that unsuitable peripheral devices were used. In Munich, 12 SPs took part in classes with WebEncounter. The individual SPs completed between 3 and 20 WebEncounter consultations (median 7) with students, with between 3 and a maximum of 6 WebEncounter consultations being carried out in the individual appointments. The vast majority (9/12) of the SPs only represented one case; three SPs represented two cases. The SPs' feedback at the beginning of the training made it clear that they had to get used to the new "expert role", i.e. to using the evaluation criteria; questions were clarified in the evening feedback sessions with the program managers and SP trainers. In the end, the feedback from the SPs on the use of the feedback criteria and on their own role and function was markedly positive. They particularly praised the intimacy of the one-on-one encounter, which made it easier for them to give personal feedback. In Basel in particular, it was emphasized that the students were much more open than in previous years, they actively requested detailed feedback and thanked them for it explicitly. 
The change in the role at the beginning of the encounter, from the "organizer", who asks the student whether they had read the information, to 'the patient', was never considered to be a problem.
Sum scores of the individual cases
The individual evaluation items are assigned a score of 1 (not fulfilled), 3 (partially fulfilled) or 5 (fully fulfilled). The sum of all items results in the sum score (overall evaluation of the consultation as a percentage of points achieved), the distribution of which is shown in figure 2 and figure 3, separated by cases. In most cases, the scores are approximately normally distributed, with some markedly poorer students in the case of communicating a sexually transmitted disease (STD). The somewhat right-skewed distribution in the Munich scores of the case on communicating bad news (colon carcinoma) is also striking. As this study is a first attempt to use this program in German-speaking countries, all videos of the encounters in which students achieved below 30 percent of the possible score were viewed in order to rule out technical problems or unhelpful behavior on the part of the SPs. Neither could be verified. The responses from the SPs were correct and, in the perception of those responsible, corresponded to the behavior of the students. In the spirit of "closing the loop", students in Basel whose score was two SDs below the class average were, as has been the case in the last six years, invited to a refresher course.
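To make the scoring arithmetic concrete, a minimal sketch of the sum-score calculation follows; the ten item ratings are hypothetical and do not reproduce any real case checklist:

```python
def sum_score_percent(item_scores):
    """Convert per-item ratings (1 = not, 3 = partially, 5 = fully fulfilled)
    into the sum score as a percentage of the maximum achievable points."""
    if not all(s in (1, 3, 5) for s in item_scores):
        raise ValueError("each item must be rated 1, 3 or 5")
    max_points = 5 * len(item_scores)
    return 100 * sum(item_scores) / max_points

# Hypothetical 10-item consultation: mostly fulfilled, two weak items.
ratings = [5, 5, 3, 5, 1, 5, 3, 5, 5, 1]
print(sum_score_percent(ratings))  # 76.0
```

Note that on this scale a consultation in which every item is merely "partially fulfilled" already yields 60 percent, which is worth keeping in mind when interpreting the distributions in figures 2 and 3.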
Sum scores of the individual items
With regard to the individual items, it became clear that the students had difficulties with very specific items. This concerns, on the one hand, the items in which the explicit addressing of the conversation structure is depicted (clarifying the agenda for the conversation, explicit change from the patient-centered to the doctor-centered communication phase, announced summaries) and on the other hand, items concerning the systematic narrowing down of symptoms (see figure 4).
Student feedback on the SPs and the cases
As figure 5 shows, both the cases and the SPs were rated very positively by the students. There was no negative feedback. However, of 120 students in Munich and 218 students in Basel, only around 35% completed the voluntary questionnaire at the end of the WebEncounter encounter. The qualitative feedback from the students confirms the positive overall impression. Students especially emphasized the constructive feedback (see figure 6).
Discussion
With regard to the technical implementation, the results available so far are predominantly positive. Especially "in times of Corona", in which the internet has often been at the limit of its capacity, a rate of technical problems below 10 percent, mostly connection-related, is a good result. To interpret the left-skewed results in the case of suspicion of sexually transmitted diseases, we can draw on the feedback that the students gave in the voluntary Zoom-meeting refresher course (100 of 220 students took part).
They found it very difficult to have to talk about sexuality with an older man. For us, this aspect was important in terms of didactics and content. It gave rise to a discussion with the students about the fact that the special situation of a doctor allows or even requires different "access rights" to a person's life than the situation in a private contact. With regard to the right-skewed distribution of the sum scores in the "colon carcinoma case", we assume that the e-learning consultation which preceded the online consultation, and in which aspects and concepts of delivering bad news were discussed and refreshed, could have had an impact on the results. The feedback from students on this form of learning is very encouraging. The SPs report that they also had the impression that students had benefited from the intimacy of the one-on-one setting. In doing so, they refer to their experiences in recent years, in which discussions with SPs occurred in a small group setting of five students in Basel. One of the students held the conversation and the others were supposed to try to identify "teachable moments". SPs had criticized this teaching format because students in the second year course in Basel were often not ready or able to give each other concrete and constructive feedback, let alone discuss the SP's feedback.
In the discussion rounds with experts, in three to four groups of five after the SP encounters, the main criticism students expressed was that the exposure in the group during role-play with SPs and the feedback from the experts in the presence of the others was uncomfortable and embarrassing. Since the online consultation with WebEncounter was not tested against a face-to-face event at which students could speak to an SP on their own, it remains unclear whether the positive feedback regarding the intimacy of the situation was due to the relocation to the internet or the change from (small) group lessons to the one-on-one setting. In principle, the different teaching formats should not be played off against one another, but rather used according to their particular strengths and weaknesses and to meet the needs of the students in achieving different learning goals [21]. It was striking for us, those responsible for teaching, to what extent the special features of WebEncounter revealed inconsistencies with regard to the specific learning objectives in the area of medical interviewing within the faculty. This is because feedback items can only be formulated reliably and in a manner that is manageable for SPs if all participants agree that, for example, "explicit structure" is an essential element of a conversation and how explicit structure is addressed or implemented in a patient-centered manner in conversation. When the group responsible for the content within a faculty has agreed on the specific learning objectives, the evaluations of the SPs clearly indicate which learning objectives require further training. This localization of critical items enables "closing the loop" [22], [23], if -as is usual in Basel -a refresher course is offered, in which the critical items are discussed in depth. This year, 100 of the 220 students in Basel took part in this voluntary course offer as a virtual lecture.
Within our group and in discussions with SP trainers and SPs, there was a critical objection in advance that this program could undermine the classic SP presence programs. In our opinion, however, this fear is not substantiated. The introduction of web-based teaching units does not mean less SP participation, but an upgrading of the work of SPs, whose area of competence is expanded to include the ability to give concrete and constructive feedback, and this without experts in the background. An obvious point of criticism concerns the possible loss of depth of a real encounter if it is relocated into virtual space. We need to consider, however, that SP-based teaching units take place in the presence of other students and an expert at most universities. The potential intensity created by the intimacy of a direct encounter can be endangered by the public nature of this contact in small group lessons. Even if certain elements of a real one-on-one encounter are undoubtedly missing in web-based contacts, it remains to be seen whether this shortcoming is not offset by the clearly dyadic nature of the encounter. A further critical question is whether the definition of assessment criteria leads to a loss of the range of possible feedback. Under the best possible conditions, this loss can be avoided if, for example, the SPs are excellently trained in identifying behaviors that correspond particularly to the learning goals or in recognizing the personal strengths and weaknesses of the students in establishing and maintaining an empathic relationship. At the request of the SPs in Munich, we therefore included an open feedback criterion ("key moment") that they could use, if necessary, if they noticed particularly conspicuous behavior. However, this feedback item was rarely used, which indicates that the existing criteria were sufficient.
A fundamental caveat concerns the validity of feedback from the personal perception of experts or SPs: the literature shows that those not directly affected -including SPs or experts -do not perceive the peculiarities of the relationship between patients and professionals in the same way as those actually affected themselves [24]. This has just been substantiated again in a recent study on the perception of empathy [25], which showed that patient perceptions predict a reduction in fear and satisfaction through conversation with a high degree of accuracy, while expert judgments have no predictive quality and, moreover, are not related to patient perceptions.
In summary, previous experience with WebEncounter shows that this program is perceived by students and SPs as an enrichment to previous forms of teaching. In Basel, it will be used in the second year course in the next academic years. In Munich, WebEncounter is to be used as a supplement to face-to-face teaching in the 2020/2021 winter semester. Other possible uses are planned in courses on taking a medical history, when delivering bad news, when dealing with problematic explanatory concepts and in OSCE exams.
In order to be able to further examine and analyze the benefits of this type of teaching and the possible uses of WebEncounter, comparisons of online one-on-one encounters and typical small group formats with the help of experts should follow.
Dedication
We dedicate this article to our program developer and colleague Christof Daetwyler, MD, who passed away unexpectedly last December. He was a wonderful example of student-and user-centered communication. His readiness to respond to our wishes and his patience in dealing with members of the working groups in Munich and Basel, who are not always tech-savvy, impressed us all.
Acknowledgements
Special thanks are due to the actresses and actors who were our standardized patients and the trainers, who quickly adapted to the new situation with great enthusiasm. Furthermore, we wish to express our gratitude to Claudia Steiner and Kuno Steiner for the English translation and the list of references.
Funding
• Basel: The project was supported with CHF 4,000 as part of the "promotion of innovative teaching".
• Munich: The additionally required license costs and some technical devices were financed through the special Corona budget of the central university administration.
"year": 2021,
"sha1": "631b6031c32689670408d3d156df278d13a8019a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "631b6031c32689670408d3d156df278d13a8019a",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Using Human Rights Law to Inform States' Decisions to Deploy AI
States are investing heavily in artificial intelligence (AI) technology, and are actively incorporating AI tools across the full spectrum of their decision-making processes. However, AI tools are currently deployed without a full understanding of their impact on individuals or society, and in the absence of effective domestic or international regulatory frameworks. Although this haste to deploy is understandable given AI's significant potential, it is unsatisfactory. The inappropriate deployment of AI technologies risks litigation, public backlash, and harm to human rights. In turn, this is likely to delay or frustrate beneficial AI deployments. This essay suggests that human rights law offers a solution. It provides an organizing framework that states should draw on to guide their decisions to deploy AI (or not), and can facilitate the clear and transparent justification of those decisions.
Court, and an appeal is currently pending. 5 Public backlash has produced calls for a moratorium or ban on the use of this technology. 6 In the United States, a number of cities have put in place just such a ban.
Although derived from human rights law, this approach should be of practical use to all states, irrespective of their status of treaty ratification or level of human rights engagement. By examining why a deployment is "necessary," and what alternative approaches are available, states can more clearly explain their intentions, act more transparently, and better engage with any subsequent challenges and debates, legal or otherwise.
Obligation to Respect and Non-Arbitrariness
Two core human rights law components are relevant when states consider how to approach the decision to deploy AI. First, the law establishes an "obligation to respect," requiring states to refrain from taking action that will result in a human rights violation. 7 Second, a central objective of the law is to protect individuals against arbitrary rights interferences. 8 This requires clarity and certainty vis-à-vis the scope of state authority. To protect against arbitrariness and determine the legitimacy of any deployment, states should typically conduct a three-part test. The measure in question should: (a) be in accordance with the law, (b) pursue a legitimate aim, and (c) be necessary in a democratic society. 9 These features in turn require states to conduct a pre-deployment impact assessment. This is not an explicit human rights law requirement, but it is implicit: if states must ensure that their activities do not result in human rights violations, they must identify the potential impact of those activities. 10 This essay focuses on the "necessary in a democratic society" test. Case law is derived primarily from the European Court of Human Rights, as these issues have been addressed in detail there, but the conclusions remain broadly relevant both to the International Covenant on Civil and Political Rights and other regional human rights treaties.
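The three-part test reads naturally as a short decision procedure that a pre-deployment impact assessment could record. The sketch below is only an illustration of the test's structure; the field names, and the reduction of legal judgment to booleans, are my own simplification, not a statement of the law:

```python
from dataclasses import dataclass

@dataclass
class DeploymentAssessment:
    """Pre-deployment record of the three-part legitimacy test for an AI measure."""
    in_accordance_with_law: bool        # (a) an adequate legal basis exists
    pursues_legitimate_aim: bool        # (b) e.g. prevention and detection of crime
    necessary_in_democratic_society: bool  # (c) pressing social need, proportionate

    def is_legitimate(self) -> bool:
        # All three limbs must be satisfied; failing any one of them means
        # the interference with rights cannot be justified.
        return (self.in_accordance_with_law
                and self.pursues_legitimate_aim
                and self.necessary_in_democratic_society)

assessment = DeploymentAssessment(True, True, False)
print(assessment.is_legitimate())  # False
```

In practice each limb is of course an evaluative judgment rather than a flag, but encoding the conjunction makes the essay's point explicit: a legitimate aim alone is never sufficient.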
Determining Whether an AI Deployment Is "Necessary in a Democratic Society"

The "necessary in a democratic society" test is intended to ensure the overall rights compliance of any measure. It addresses the "competing interests" arising in particular contexts. For example, a particular measure-such as AI-assisted surveillance-may be useful for the prevention and detection of crime, but pose risks to privacy, including of individual stigmatization. 11 These are the "competing interests" at play. In resolving these interests, the state must identify both the potential utility and the potential harm of any deployment, in light of the constraints of a democratic society. 12 Application of the "necessary in a democratic society" test involves a number of different elements. An interference may meet this test if it remains faithful to democratic principles, 13 "if it answers to a 'pressing social need', if it is proportionate to the legitimate aim pursued and if the reasons adduced by the national authorities to justify it are [']relevant and sufficient.'" 14 Applied to a state's decision to deploy an AI tool, these may be distilled into two central criteria: First, why is a deployment required, and second, what alternative mechanisms are available?
Why Is an AI Deployment Required?
A number of factors are relevant to clarifying why a particular AI deployment is required: (a) identifying the objective underpinning the deployment, (b) demonstrating why achieving that objective is necessary, and (c) specifying how the technology will be deployed. These components are a means of establishing purpose, thereby facilitating identification of utility and harm, and are central to ensuring foreseeability and protecting against arbitrary rights interferences. It is necessary that states undertake this process prior to any potential deployment-and that a record be maintained-so that a "pressing social need" can be demonstrated, and "relevant and sufficient" justifications for deployment presented and "convincingly established." 15 It is also a means of protecting against "mission creep," whereby a tool is deployed for a particular purpose, but is then used to achieve other objectives over time. 16 Adaptation of objectives will require fresh analysis, limiting creep.
Identifying the objective underpinning an intended deployment is a first step. 17 This should be done at a granular level, rather than in the abstract. 18 Using the LFR example, an objective of "preventing crime and protecting public order"-a legitimate aim-is overly broad: it is essentially reflective of all policing activity, and so does not provide any foreseeability as to the specific activities that state actors will undertake. 19 Examples of more focused objectives may include the identification of individuals suspected of belonging to proscribed terrorist organizations at border posts so that they may be stopped or questioned, or the identification of individuals subject to outstanding arrest warrants as they pass through a particular part of a city.
Once the state identifies the objective, it must then demonstrate why achieving that objective is necessary. This is relevant to determining specific utility, and demonstrating "a pressing social need." In the LFR context this may relate to the nature of the crime, or the threshold for initiating surveillance: the social needs associated with preventing murder will be much higher than those associated with detecting petty theft. 20 "Relevant and sufficient reasons" are critical. Building on the arrest warrant example, it may be necessary to specify whether it is difficult to contact individuals subject to an arrest warrant, and whether this applies generally to all warrants or is restricted to specific offences. A next step is to specify the circumstances of deployment. This is essential to evaluating impact-both in terms of utility and harm-and gives effect to the previous two steps. A number of factors are potentially relevant. For instance, will the AI deployment run for a set period/at particular intervals, or on a more continuous long-term basis; will the data produced be subject to further AI-driven analysis; and who has access to the resulting data, and under what circumstances? Clarity around the intended circumstances of use is important both to understand how a particular deployment will run (facilitating foreseeability) and what the potential human rights-related impact of that deployment may be. In the terrorism example, the definition of a proscribed terrorist organisation is likely to be established in law, thereby reducing the scope for arbitrariness. However, specificity may be necessary with respect to the criteria used to enroll individuals on the associated watchlist. For instance, if police intend to stop individuals on the basis of membership-or suspected membership-in such an organization, will this occur following a specified process with a required intelligence or evidentiary threshold, or on the basis of some other arrangement?
Identifying Alternative Mechanisms
The other element to demonstrating the utility of an AI deployment is a consideration of alternative, or preexisting, mechanisms. This element speaks to the "why AI" question, and helps to determine whether the state could use other, less invasive, approaches to achieve the same-or sufficiently similar-objectives. This assessment contributes to the proportionality assessment, which must evaluate "whether it is possible to achieve the aims by less restrictive means." 21 The examples presented previously are helpful in unpacking some of the issues. In the first example, LFR technology was used at border ports in order to identify individuals suspected of belonging to proscribed terrorist organizations. Determining the availability of alternative mechanisms in this context is not straightforward. All individuals passing through a border post undergo an identity check, which may also involve initial questioning. At this point, border officials may check an individual's identity against a database and raise an alert in the event of a match. Equally, border officials may be briefed to monitor for particular behavioral or travel patterns, which may also be used to flag an individual for more detailed questioning. It is possible, however, that a member of a proscribed organization may travel on falsified papers and be trained not to raise suspicions on initial questioning. In these circumstances, LFR technology may be particularly useful, as it has the potential to counteract these two techniques. "Necessity" (and proportionality) will accordingly turn on the specific added value of LFR compared to traditional mechanisms. Relevant considerations may include whether sufficiently high-quality pictures of suspected individuals are available, or whether such persons are typically tracked on the basis of visual identification, known aliases, or patterns of movement.
20 See Cases C-203/15, C-698/15, Tele2 Sverige AB v. Post-och telestyrelsen and Secretary of State for the Home Department v. Watson and Others, ECLI:EU:C:2016:970, para. 102 (Dec. 21, 2016). 21 Zakharov v. Russia, App. No. 47143/06, para. 260 (Eur. Ct. H.R., Dec. 4, 2015).

In the second example, LFR is used to identify individuals subject to outstanding arrest warrants. In this case, frequently used alternative mechanisms also exist. These include, for example, identity checks when individuals come into contact with law enforcement, visits to places typically frequented by the individual, or interviews with
associates and family members. In considering the effectiveness of these alternative mechanisms, a number of factors are likely to be relevant, such as the nature of the underlying offence, existing success rates and time frames regarding apprehension of individuals subject to an arrest warrant, and rates of re-offending during that time period and the gravity of that offense. In determining the added value of LFR in this context, states must also consider the likelihood that a wanted individual will pass through a facial recognition camera system. An evaluation of alternative mechanisms demonstrates whether-in any given deployment-AI technology represents a continuation of preexisting police capability by other means, or whether it represents a step-change in capability. This is relevant to the determination of potential human rights-related harm. For instance, using LFR to confirm an individual's identity at a border crossing arguably represents a continuation of an existing capability, where a single border agent checks an individual against her documentation. On the other hand, deploying LFR across city-wide CCTV networks and integrating data analysis tools may facilitate the tracking of individuals' movements, the identification of patterns of life and personal/professional networks, and the flagging of unusual or suspicious behavior. This arguably constitutes a step-change in capability, as this would not have been possible absent LFR, even with significantly increased resources.
A step-change in capabilities is a useful indicator that more in-depth analysis and impact assessments are required. It is also useful when considering whether a state may cite resource efficiencies to justify an AI deployment. There is a strong argument that where AI represents a continuation of existing capabilities, resource efficiencies should be taken into consideration, given the positive impact this may have on states' ability to fulfil rights in other areas. If, however, AI represents a step-change in capabilities, then resource savings should arguably not play a role in justifying an AI deployment: the powers of the state (and the human rights impacts) are altered significantly and so a like-for-like cost comparison is not possible.
Conclusion
This essay has attempted to identify some of the steps that states should undertake when deciding to deploy an AI tool (or not), in order to facilitate human rights compliance. The focus has been on demonstrating utility. An assessment of potential harm is equally important but constitutes the next step in the analysis. Importantly, the measures outlined above will help to set the parameters of deployment, thereby establishing the framework within which potential harm can be evaluated. 22 Once potential utility and potential harm are identified, efforts may be made to resolve any "competing interests," and it is here that appropriate safeguards, or restrictions on circumstances of use, may be identified. Hopefully, this essay also demonstrates how taking a human rights-based approach to decision-making will advance states' interests.
"year": 2020,
"sha1": "df11c2688d95b6fe1b3e36d825d4a80e8568810e",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/44C4808E7E42F172E0125497CF1096E2/S2398772320000306a.pdf/div-class-title-using-human-rights-law-to-inform-states-decisions-to-deploy-ai-div.pdf",
"oa_status": "GOLD",
"pdf_src": "Cambridge",
"pdf_hash": "333691dea040e5cdb400bfcef0565e5095974622",
"s2fieldsofstudy": [
"Law",
"Political Science"
],
"extfieldsofstudy": [
"Business"
]
} |
Screening of minor psychiatric disorders and burnout among a sample of medical students in St. Petersburg, Russia: a descriptive study
Background Despite the general interest of researchers around the world, there are few studies on the psychological wellbeing and burnout among medical students in Russia. The aim of this study was to perform screening for minor psychiatric disorders, burnout, problematic alcohol use, and quantify the psychological issues and stress among a sample of medical students in St. Petersburg, Russia. Results According to the GHQ-12, screening for minor mental disorders was positive in 140 students (85%). Screening for burnout using the OLBI showed positive results in 121 (73%) students for disengagement and 132 (80%) students for exhaustion. Screening with the CAGE tool identified a risk of problematic alcohol use in 33 students (20%). Most students reported academic studies as the main source of stress in their life (n = 147; 89.1%). Conclusions This study identified very high levels of stress, burnout, risk of minor mental disorders, and problematic alcohol use among medical students in St. Petersburg, Russia. These findings suggest more attention is needed to the poor mental wellbeing and health in medical students in Russia.
Background
The psychological adaptation of medical students and young doctors is of interest in many countries [1][2][3][4][5][6]. Research consistently shows a high prevalence of mental disorders and psychological stress among medical students [3,[7][8][9], significantly higher than in the general population [10,11]. The incidence of depression and anxiety in a sample of Russian medical students was 4 and 6 times higher, respectively, than in students of other disciplines [12]. Also, of particular concern is the increasing prevalence of suicidal ideation in medical students [6,9,13,14].
Medical students report pressure from their professional environment and academic studies are the main source of stress [3,5,15]. These stressful conditions may lead to high burnout rates [16][17][18][19][20][21]. Other sources of stress include psychosocial issues and environmental stressors [15,22], financial problems, housing, and relationships [3]. The consequences of psychological adaptation difficulties in students are also shown in social life, leading to substance misuse [12,23] and reduced learning achievements [24,25].
The medical-training period is recognized as a crucial phase for the onset of mental disorders among doctors [10]. It is therefore relevant to study this phase as well as to develop prevention and psychohygiene strategies. Medical-training currently requires significant emotional and financial investment, so it is critical that faculty and managers provide help and support to these future physicians [9]. Mental health promotion for medical students calls for evidence-based interventions and psychosocial support [4]. Researchers strongly recommend widespread screening for symptoms of burnout and mental disorders in medical students in order to provide timely and appropriate interventions [6]. Despite the general interest of researchers around the world, there are few studies on psychological wellbeing and burnout among medical students in Russia [12,22].
The aim of this study was to perform screening for minor psychiatric disorders, burnout, problematic alcohol use, and quantify the psychological issues and stress among a sample of medical students in St. Petersburg, Russia.
Procedure
An anonymous online survey of medical students of St. Petersburg State University Medical Faculty was conducted in May-June 2020. The research employed the online platform questionpro.com. The invitation to participate in the research was sent through the mailing list of the faculty's student council and was also published in social network groups for students. Reminders were sent in the second and fourth weeks after the launch of the survey. Each student could complete the survey only once, and participation was anonymous and voluntary. Data were password-protected and answers were treated confidentially. Individual respondents could not be identified. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committee on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. The study protocol was approved by the ethics committee of the St. Petersburg State University.
Participants
A total of 174 students completed the survey (response rate 43.2%). We calculated that a sample size of 161 or more respondents was needed to have a confidence level of 95% that the real value is within ±6% of the surveyed value (a margin of error of 0.06). Since we expected that there might be difficulties in recruiting respondents for the online survey, we allowed for a possible minor downward deviation from the target value. Only nine students did not answer all the key questions in the survey, and these students were excluded from the final analysis. The final sample included 165 respondents who answered all the key questions of this survey.
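The stated threshold of 161 respondents can be reproduced with a standard finite-population sample-size formula. The sketch below assumes maximum variance (p = 0.5) and infers a source population of roughly 403 students from the 43.2% response rate; both are assumptions on our part rather than figures stated by the authors:

```python
import math

z = 1.96                  # normal quantile for 95% confidence
p = 0.5                   # assumed proportion (maximum variance)
e = 0.06                  # margin of error stated in the text
N = round(174 / 0.432)    # source population inferred from the response rate: 403

n0 = z**2 * p * (1 - p) / e**2        # infinite-population sample size (~267)
n = n0 / (1 + (n0 - 1) / N)           # finite-population correction
print(math.ceil(n))                    # -> 161, matching the stated target
```

The finite-population correction is what brings the required sample down from about 267 to 161 for a faculty of this size.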
Measures
The survey was conducted in Russian and collected basic demographic information and structured questionnaires with proposed answers and the item "other" in those cases when the questions were not quantitative. Choosing "other" allowed respondents to provide their personalized answer. The first part of the survey consisted of a structured questionnaire that included questions about students' mental health and psychological wellbeing before and during their studies at the university, including any history of mental health problems, use of prescription (and non-prescription) medications, drug and alcohol use, main sources of stress experienced by students, etc. The full text of the questionnaire is available from the first author upon request.
Statistical analysis
Data were entered and analyzed using IBM SPSS Statistics (Version 24). Research data are presented as the arithmetic mean and standard deviation (M ± SD). We used chi-square (χ²) tests for categorical variables and to compare proportions. The correlation between indices was studied by means of linear correlation analysis (the Pearson test). A correlation coefficient (rs) from 0.3 to 0.7 indicates a moderate positive correlation; a negative rs corresponds to an inverse correlation.
Results
Sociodemographic characteristics of respondents are shown in Table 1. No students selected the item "others" in the question about gender. Most participants were in the 5th year of study, which may reflect the fact that these students were trained in "psychiatry, medical psychology". Information regarding the educational achievements of parents was obtained from each student, with 1 (0.6%) indicating high school or below, 8 (4.8%) indicating GCSE (General Certificate of Secondary Education), 13 (7.9%) indicating A-Level or equivalent, 117 (70.9%) indicating undergraduate, and 26 (15.8%) indicating postgraduate education.
When assessing their mental health in the period before entering university, 27 students (16.4%) reported that during that period they had visited a general practitioner, psychologist, psychiatrist, psychotherapist, or other specialist in the field of mental health for reasons related to psychological issues (including reduced mood, anxiety, eating disorders, or obsessions). Ten individuals (6.1%) reported they had been diagnosed with a mental health disorder before entering medical school, of whom four students (2.4%) had received an attention deficit hyperactivity disorder (ADHD) or autism spectrum disorder diagnosis. Seventeen students (10.3%) also indicated that they had been prescribed medications for a mental disorder (including depression, anxiety, psychosis, or ADHD) during that period.
Twenty-five students (15.2%) reported they had been diagnosed with a mental disorder while at university, whereas 18 students (10.9%) indicated that they were receiving care from a general practitioner, psychologist, psychiatrist, psychotherapist, or other mental health professional during their participation in the study. The same number of students (n = 18; 10.9%) reported they were on a maintenance treatment during the survey.
Students reported academic studies as the main source of stress in their life (n = 147; 89.1%). Other sources of stress included social relationships (intimate or family; n = 84; 50.9%), financial wellbeing (n = 63; 38.2%), work (n = 53; 32.1%), and housing problems (n = 34; 20.6%). Ten students (6.1%) also identified other sources of stress such as low self-esteem, social problems, anxiety about their own health or health of their relatives, career after graduation, and existential issues. Most respondents reported having two (n = 60; 36.4%) or three (n = 43; 26.1%) main sources of stress. One source of stress was reported by 30 (18.2%) respondents, while four and five sources were reported by 15 (9.1%) and 10 (6.1%) students, respectively. Only seven people (4.2%) did not report any stress in their lives.
More than one-third of respondents (n = 63; 38.3%) reported that they had taken a psychoactive substance in the last year in order to improve their concentration or academic performance (excluding caffeine or other energy drinks). Forty-five students (27.3%) reported taking non-prescription substances or medications outside their intended use to feel better or to lift their mood.
According to the GHQ-12, 140 students (84.8%) had a total score of 2 or higher, indicating a high risk of minor mental disorders in the sample. The mean total GHQ-12 score in the study group was 5.05 ± 3.04. Screening for burnout using the OLBI showed positive scores in 121 (73.3%) students for disengagement and 132 (80.0%) students for exhaustion. A positive, statistically significant correlation was found between the overall GHQ-12 score and the OLBI disengagement and exhaustion scores (Table 2), as well as between the individual OLBI scores. The correlation between training course and OLBI disengagement was less than moderate. No differences were found in the frequency of positive screening results when dividing respondents by gender (Table 3).
Discussion
This survey collected further relevant evidence regarding the wellbeing and health issues of medical students in St. Petersburg, Russia. Young people's health will affect future population health and global economic development unless timely and effective strategies are adopted [35].
Some differences in the frequency of mental disorders were found in respondents who reported being diagnosed before entering university (6.1%) or during their own training (15.2%), while GHQ-12 screening for general (non-psychotic) mental health problems was positive in 84.8% of respondents. In a previous study conducted in Russia, clinically significant symptoms of social phobia and generalized anxiety were found in 16% of medical students, while symptoms of depression (according to the depression anxiety stress scale-21) were observed in 34% of medical students [12]. The frequency (27.3%) of use of non-prescription substances or of medications outside their prescription over the past year is alarming, indicating a high probability of self-treatment in the study group. Although a screening survey is not sufficient to make a diagnosis, the data obtained clearly indicate a high probability of common mental disorders in the study sample. Academic stress and the pressure of the professional environment have been rated as the leading global sources of stress in medical students [3,5,15]. This was confirmed in our survey, where academic studies were the most frequently cited source of stress (89% of respondents). Further evidence of the importance of academic stress among our respondents was the widespread use of substances aimed at increasing concentration or improving academic performance (38.3%) over the past year. In another study from Russia, 26.0%, 69.1%, and 4.9% of medical students reported low, moderate, and high perceived stress, respectively [22]. According to the literature, perceived stress in medical students was higher among older groups and final year medical students [15], but this was not confirmed in our study.
There is no doubt that study or work places affect our mental health and wellbeing [1]. The burnout of health workers is an important contributory factor in medical errors and reduced quality of medical care [7]. Thus, it is very important to focus on burnout prevention during the training period. Our study found high frequencies of both disengagement (73.3%) and exhaustion (80.0%). The relationship found in the study between the frequency of burnout symptoms and the GHQ-12 score appears to confirm the potential link between burnout and the risk of developing mental disorders, particularly depression [36]. These indicators are discouraging and should be treated as a call for direct action to improve the psychological wellbeing of students. It should be noted, however, that the reported frequency of emotional burnout symptoms among Russian medical students is lower than in many other countries [3]. Further studies are required to assess the possible causes of these cultural differences, as well as the socio-cultural factors potentially associated with them.
One in five students showed signs of alcohol problems on the CAGE questionnaire, significantly higher than in other countries [3]. Our data are consistent with the literature from Russia. In fact, in a previous study on alcohol use, heavy drinking, and problem behavior among Russian Federation university students, heavy alcohol use was revealed in 20.4% of them [37]. Another study found that heavy drinking among university students in Russia was common for 37.1% of men and 39.6% of women [38], the highest rates among 24 countries. Alcohol use was the leading risk factor for death among young people aged 15-19 and 20-24 in both the 1990 and 2013 Global Burden of Disease Study reports [35]. However, a recent study showed a clear trend toward a decline in alcohol consumption among adolescents and young adults under 25 in Russia [39], which may explain the differences in frequencies between our and previous studies. In order to identify potential risk groups for burnout, problematic alcohol use, and risk of general (non-psychotic) mental health problems, the results of the study were compared according to participants' gender. No statistically significant differences were found in the studied items. Moreover, no statistically significant correlation was found between the studied indicators and the students' course of study. The reason for this may be the relatively small sample in the study.
This study was conducted during the period of social restrictions imposed in St. Petersburg to combat the spread of COVID-19. Although there was no official ban on leaving home, movement around the city and students' social contacts were significantly limited, and all university classes were converted to remote learning. Not surprisingly, research in Russia over the past year has confirmed that the COVID-19 pandemic lockdown led to emotional disturbance, depression, irritability, insomnia, anger, and emotional exhaustion, among other things [40,41]. Young people have been particularly exposed to psychological stress during the period of social isolation in Russia [41]. Although we did not assess the direct association between the results of the study and respondents being in self-isolation, at the time of the study all participating students had at least been switched to distance learning and were subject to general instructions from the St. Petersburg and Russian governments recommending self-isolation. These additional external factors may have affected the level of stress and burnout in the study sample, so it may be advisable to compare our findings with further studies after the end of the pandemic.
In summary, this research sheds some light on the problem of psychological wellbeing and health of medical students in Russia. Academic schedules and load of medical students should be balanced to prevent educational stress, anxiety, and depression [12]. Administrative measures should focus on developing preventative strategies for stress management to improve students' psychological wellbeing [22]. We also hope that our study motivated the participating medical students to self-reflect and try to optimize their psychological state.
Strengths and limitations
The main strength of this research was the employment of reliable screening tools that are extensively and internationally used, as in previous studies on the psychological wellbeing of medical students. Despite its originality, this study has some limitations.
It was based on an online survey which guaranteed confidentiality, but respondents were self-selected, and it is possible that those who were experiencing problems may have been more likely to respond. Diagnoses of mental disorders are also self-reported and not clinically confirmed. The study was conducted during the COVID-19 pandemic quarantine measures and lockdown, which may have affected stress levels, burnout, and current mental health problems in the sample. It would therefore be useful to repeat the study after the removal of all social restrictions. It would also have been interesting to compare medical students with students from other disciplines, such as psychology, social work, and dentistry.
Conclusion
This study reported high levels of burnout, stress, problematic alcohol use, and risk of minor mental disorders in medical students in St. Petersburg, Russia. These findings suggest more attention is needed to the mental wellbeing and health of medical students in Russia, and they might inform strategies to improve mental health, counter stigma and discrimination, and prevent mental disorders among medical students.
Methods for Extremely Sparse-Angle Proton Tomography
Proton radiography is a widely-fielded diagnostic used to measure magnetic structures in plasma. The deflection of protons with multi-MeV kinetic energy by the magnetic fields is used to infer their path-integrated field strength. Here, the use of tomographic methods is proposed for the first time to lift the degeneracy inherent in these path-integrated measurements, allowing full reconstruction of spatially resolved magnetic field structures in three dimensions. Two techniques are proposed which improve the performance of tomographic reconstruction algorithms in cases with severely limited numbers of available probe beams, as is the case in laser-plasma interaction experiments where the probes are created by short, high-power laser pulse irradiation of secondary foil targets. The methods are equally applicable to optical probes such as shadowgraphy and interferometry [M. Kasim et al. Phys. Rev. E 95, 023306 (2017)], thereby providing a disruptive new approach to three dimensional imaging across the physical sciences and engineering disciplines.
While the resulting images are difficult to relate directly to the fields in the plasma [19], theoretical work from Kasim et al. [20] and Bott et al. [2] presents algorithms which are able to recover transverse magnetic field components, path-integrated along the directions of proton probing. More recently, Kasim et al. [21] derived a statistical approach to compensate for the lack of information regarding the transverse profile of the proton beam prior to interaction with the plasma. Chen et al. [22] investigated the application of machine learning methods to the problem of proton radiography inversion, noting the degeneracy involved in interpreting path-integrated measurements, and suggested taking proton radiographs from multiple view angles as a method for resolving field structures spatially. While some experiments, for example those of Li et al. [3] and more recently Tubman et al. [8], have probed similar interactions along different axes, the first full exploration of the possibility of recovering spatially resolved magnetic field structures from proton radiographs using standard tomography techniques is presented here.

* benjamin.spiers@physics.ox.ac.uk

In Section II a brief summary of proton radiography is presented, along with the theory of inverting proton radiographs and recent advances in radiographic inversion techniques. The reader is then introduced to the subject of tomography in Section III, along with the filtered back-projection algorithm (FBP), which is one of the most important and widely-used algorithms in tomography applications. In Section IV an analytic approach to tomography using Fourier decomposition in the angular variable is derived, and a method of implementing it using filtered back-projection is presented for the first time. When implemented in this way the new approach is realised by an interpolation in observation angle.
Section V presents another new method, which improves reconstruction quality of functions with much larger extent in one dimension than the others by making them appear 'squashed' into a more uniform aspect ratio before the FBP algorithm is used. In Section VI it is shown how these modifications improve the quality of reconstruction for a function representing the magnetic field of a plasma channel. Section VII summarizes the results, illustrates areas for further research, and concludes the article.
II. PROTON RADIOGRAPHY
Proton radiography, in the limit of paraxiality and small in-plasma deflections, measures line-integrated magnetic fields, and is sensitive to field components transverse to the direction of probing (this is easily seen from the Lorentz force, which does not depend on the magnetic field component parallel to the particle velocity). Protons launched from a distance $z_s$ behind the target plasma are deflected by electromagnetic fields in the plasma and then travel a further distance $z_i$ before being recorded on a detector screen. Momentum deflections $\Delta\mathbf{p}$ experienced by protons launched in the $z$-direction, under the often-employed assumption that deflections are caused solely by a magnetic field $\mathbf{B}$ and employing the paraxial and thin-target approximations within the plasma, are given by

$$\Delta\mathbf{p}(\mathbf{x}_\perp) = e \int \hat{\mathbf{z}} \times \mathbf{B} \, \mathrm{d}z,$$

where $\mathbf{x}_\perp = (x, y)$ represents transverse coordinates at the plasma and $e$ represents the fundamental unit of charge. The result is simplified by introducing the magnetic vector potential defined by $\mathbf{B} = \nabla \times \mathbf{A}$:

$$\Delta\mathbf{p}(\mathbf{x}_\perp) = e \nabla_\perp \int A_z \, \mathrm{d}z.$$

As a result of these deflections the protons reach positions $\mathbf{X}$ on the detection screen given by

$$\mathbf{X} = M \left( \mathbf{x}_\perp + \frac{f_g}{p_0} \Delta\mathbf{p}(\mathbf{x}_\perp) \right),$$

with $p_0$ the initial proton momentum. Introduced here are the magnification $M = 1 + z_i/z_s$ and the "geometric focal length" parameter $f_g = \left(z_s^{-1} + z_i^{-1}\right)^{-1} = z_i/M$, which conveniently separate the effects of source divergence from those of deflections caused by the plasma. Image-plane structures depend on $f_g$ as a geometric parameter, with $M$ only affecting the overall scale of the image. The effect of the position deflections is to increase the fluence of protons in some regions and reduce it in others. The image-plane proton fluence is given by

$$\Psi(\mathbf{X}) = \frac{\Psi_0(\mathbf{x}_\perp)}{|\det J|},$$

where $J$ is the Jacobian matrix for the transformation from plasma- to image-plane coordinates:

$$J_{ij} = \frac{\partial X_i}{\partial x_j} = M \left( \delta_{ij} + \frac{f_g}{p_0} \frac{\partial \Delta p_i}{\partial x_j} \right),$$

whose determinant is given by

$$\det J = M^2 \left[ 1 + \frac{f_g}{p_0} \nabla_\perp \cdot \Delta\mathbf{p} + \left( \frac{f_g}{p_0} \right)^2 \det\left( \frac{\partial \Delta p_i}{\partial x_j} \right) \right].$$

This result can be expressed in terms of the longitudinal component of the MHD current $\mathbf{j}$ and the Hessian determinant of the vector potential $\mathbf{A}$:

$$\det J = M^2 \left[ 1 - \frac{e \mu_0 f_g}{p_0} \int j_z \, \mathrm{d}z + \left( \frac{e f_g}{p_0} \right)^2 \det\left( \partial_i \partial_j \int A_z \, \mathrm{d}z \right) \right].$$

As argued by Bott et al.
[2], in the limit of small deflections Equation 6 may be taken to first order in $f_g$ and used to recover the integrated longitudinal MHD current. In regimes of stronger deflection, quantified by Bott et al. [2] using a contrast parameter equivalent to $\mu = f_g \Delta p / (p_0 \ell)$ for plasmas with typical transverse spatial scale of variation $\ell$, this is not feasible and the resulting images are not a simple mathematical function of the measured fields, though iterative numerical algorithms based on solving the Monge-Ampère problem are available [23] which may be used to recover these fields from a proton fluence distribution.
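A minimal numerical sketch of the small-deflection mapping above, assuming an illustrative Gaussian path-integrated vector potential and arbitrary units with $e = 1$ (all parameter values here are invented for demonstration):

```python
import numpy as np

# Geometry: source-to-plasma and plasma-to-detector distances.
z_s, z_i = 1.0, 10.0
M = 1.0 + z_i / z_s                       # magnification
f_g = 1.0 / (1.0 / z_s + 1.0 / z_i)       # geometric focal length, = z_i / M
p0 = 100.0                                # initial proton momentum

# psi(x, y) = integral of A_z along z, on a transverse grid (Gaussian blob).
n = 256
x = np.linspace(-1.0, 1.0, n)
X0, Y0 = np.meshgrid(x, x, indexing="ij")
psi = np.exp(-(X0**2 + Y0**2) / 0.1)

# Deflection: Delta p = grad_perp psi (with e = 1).
dx = x[1] - x[0]
dpx, dpy = np.gradient(psi, dx, dx)

# Image-plane positions X = M (x_perp + f_g * Delta p / p0).
Xi = M * (X0 + f_g * dpx / p0)
Yi = M * (Y0 + f_g * dpy / p0)

# Fluence: histogram the mapped proton positions on the detector.
lim = 1.1 * M
fluence, _, _ = np.histogram2d(Xi.ravel(), Yi.ravel(), bins=64,
                               range=[[-lim, lim], [-lim, lim]])
```

Where $\det J$ becomes small the mapped positions bunch together, producing the caustic-like fluence pile-up characteristic of strongly deflected radiographs.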
Even an ideal reconstruction can only produce information about line-integrated fields. Symmetry assumptions may be used to make conclusions about the three-dimensional distribution of fields (for example, using Abel transform inversion), and Chen et al. [22] proposed that the additional information available when taking proton radiographs from multiple different probe directions could enable full reconstruction of threedimensional fields. This has the form of a transverse vector tomography problem.
A. Basic Theory of Tomography
In this work we consider parallel-probe tomography (i.e. tomography in which each observation is made using a collimated beam). While this is not strictly true due to the nature of proton beams and the processes by which they are produced in laser-plasma experiments (target normal sheath acceleration and capsule implosion both produce divergent proton beams with small virtual source located close to the point from which protons are accelerated), the distance between proton source and plasma is usually significantly larger than the transverse size of the plasma. The variation in incident proton angle over the transverse extent of the plasma is therefore sufficiently small that they may be treated as collimated for the purposes of tomographic reconstruction.
A three-dimensional scalar function $f(x, y, z)$ is defined in Cartesian coordinates. A tomographic projection of this function is parametrised by the probe angle $\theta$. For a given $\theta$ we define a new Cartesian coordinate system $(q, s, t)$ related to $(x, y, z)$ by rotation about the $z$ axis:

$$q = x\cos\theta + y\sin\theta, \qquad s = -x\sin\theta + y\cos\theta, \qquad t = z.$$

Taking projections along the local $q$ direction produces for each $\theta$ a function of $s$ and $t$:

$$F_\theta(s, t) = \int_{-\infty}^{\infty} f(q\cos\theta - s\sin\theta,\; q\sin\theta + s\cos\theta,\; t) \, \mathrm{d}q \equiv (\mathcal{R}_\theta f)(s, t).$$

This equation defines the Radon transform $\mathcal{R}_\theta$, the integral transform that is the theoretical basis of tomographic analysis. It is important to note that, due to our assumption of parallel probing, this is effectively a 'stack' of two-dimensional tomographs, one for each value of $t$. The values of $F_\theta(s, t)$ at fixed $t$ are only influenced by the two-dimensional slice of the original function for which $z = t$: $f(x, y, t)$, for all $s$ and $\theta$. This allows for application of two-dimensional inversion algorithms to the three-dimensional tomography problem.
The function $F_\theta(s)$ (whose dependence on $t$ has been dropped following the previous argument) is often visualised as a 'sinogram': the parameter $\theta$ is promoted to a variable and the resulting two-variable function $F(s, \theta)$ is plotted as an image. An example of a sinogram is shown in Figure 1. This sinogram was computed from the Shepp-Logan phantom, a function often used to test tomographic techniques [24]. A modified version of the Shepp-Logan phantom with both positive and negative values is employed in the following sections, as this better represents the nature of magnetic field structures.
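A sketch of computing such a sinogram numerically, assuming parallel probing and using an off-centre disc in place of the Shepp-Logan phantom (image rotation stands in for the coordinate rotation in the Radon transform):

```python
import numpy as np
from scipy.ndimage import rotate

def sinogram(image, angles_deg):
    """Parallel-probe Radon transform: for each probe angle, rotate the
    image so the probe direction lies along array axis 0, then sum."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

# Simple test object: an off-centre disc (illustrative).
n = 101
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
disc = ((x - 0.3)**2 + y**2 < 0.2**2).astype(float)

F = sinogram(disc, np.linspace(0.0, 180.0, 12, endpoint=False))
```

Because the object's total mass is preserved by each line integral, every row of the sinogram sums to (approximately) the same value, which is a useful sanity check on any forward projector.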
B. The Filtered Back-Projection Algorithm
A canonical algorithm for the recovery of tomographic data sets is the filtered back-projection (FBP) method. In short, this method filters each projection with a kernel proportional to |k| in Fourier space, then 'smears' the filtered projections across their probe directions and sums the resulting 'back-projected' functions to recover an approximation of the original function. FBP converges to an analytically correct result in the limit of many projections (and can be derived as a discretisation of the Fourier projection-slice theorem discussed in Section IV A), but where samples are few and sparse in the angular dimension it suffers from severe 'streaking' artefacts. This behaviour is demonstrated in Figure 2.
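A minimal sketch of the FBP pipeline described above: forward projection by rotate-and-sum, an unwindowed $|k|$ ramp filter, and back-projection by smearing each filtered projection across its probe direction. The phantom and angle set are illustrative:

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles_deg):
    # Forward projection: rotate so the probe direction lies along axis 0,
    # then integrate along it.
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def fbp(sino, angles_deg):
    """Minimal filtered back-projection: ramp-filter each projection in
    Fourier space, smear it back across its probe direction, and sum."""
    n_ang, n_s = sino.shape
    ramp = np.abs(np.fft.fftfreq(n_s))                   # |k| filter
    filt = np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1).real
    recon = np.zeros((n_s, n_s))
    for proj, a in zip(filt, angles_deg):
        # Back-project: constant along the probe axis, rotated to lab frame.
        recon += rotate(np.tile(proj, (n_s, 1)), -a, reshape=False, order=1)
    return recon * np.pi / n_ang

n = 101
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phantom = ((x - 0.2)**2 + y**2 < 0.25**2).astype(float)
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
recon = fbp(radon(phantom, angles), angles)
```

Rerunning with only a handful of angles (say 4 instead of 60) makes the streaking artefacts discussed above immediately visible.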
C. Tomography of Vector Functions
In this section, consider a vector function g(x, y, z). Tomographic projections of this function are taken using the (q, s, t), rather than the (x, y, z), components-i.e. the components measured rotate with the angle of probing rather than being fixed in the background coordinate system.
Longitudinal vector tomography, realised for example by Doppler tomography of fluid velocity fields, measures the $q$ component of a vector field. The value measured is

$$G_\theta(s, t) = \int g_q(q, s, t) \, \mathrm{d}q.$$

It has been shown that these techniques may only be used to recover the (two-dimensionally) solenoidal part of a vector field [25], i.e. that which satisfies

$$\frac{\partial g_x}{\partial x} + \frac{\partial g_y}{\partial y} = 0.$$

Proton radiography represents a measurement of magnetic field components transverse to the probe direction, so may be used to implement a transverse vector tomography. By this scheme the $s$ and $t$ components of a vector field are measured. These are given by

$$G^{(s)}_\theta(s, t) = \int g_s(q, s, t) \, \mathrm{d}q, \qquad G^{(t)}_\theta(s, t) = \int g_t(q, s, t) \, \mathrm{d}q.$$

Only the irrotational part of the in-plane field is recoverable, which can be seen by following similar reasoning to the recoverability of the solenoidal part of the $q$ component. This is unhelpful when considering magnetic fields: the part of the field sourced by out-of-plane currents is undetectable, which can be seen from the relevant component of Ampère's Law:

$$\mu_0 j_z = \frac{\partial B_y}{\partial x} - \frac{\partial B_x}{\partial y}.$$

The component $g_z$, in contrast, transforms as a scalar under the rotation which defines the probe geometry and is therefore accessible in full to traditional, scalar tomography techniques.
The following analysis will therefore focus on this out of plane component of the magnetic field-by rotating around a chosen axis, the component parallel to that axis is recoverable in full. Rotations about three orthogonal axes are therefore sufficient to recover the full three-dimensional vector field. This protocol reduces the problem of three-dimensional transverse vector tomography to a series of two-dimensional scalar tomography problems, allowing the use of well-developed algorithms and numerical techniques from this field.
Figure 1: The Shepp-Logan phantom $f(x, y)$ and its sinogram $F(\phi, s)$ (pane d). Panes b) and c) are projections of $f$ along the $y$ and $x$ axes respectively, corresponding to $F(\pi/2, s)$ and $F(0, s)$. Note that $F$ is $2\pi$-periodic and has the parity property $F(\phi + \pi, s) = F(\phi, -s)$.
A. The Fourier Projection-Slice Theorem
The Fourier Projection-Slice theorem states that the one-dimensional Fourier transform $\tilde{F}_\theta(k_s)$ of the projection $F_\theta(s)$ is equal to the two-dimensional Fourier transform $\tilde{f}(k_x, k_y)$ of the original function $f(x, y)$, evaluated on a one-dimensional slice through the origin of frequency space normal to the probe direction:

$$\tilde{F}_\theta(k_s) = \tilde{f}(-k_s \sin\theta,\; k_s \cos\theta).$$

Promoting the parameter $\theta$ to a variable, it is clear that $\tilde{F}(k, \theta)$ is in fact nothing more than a representation of $\tilde{f}$ in plane polar coordinates, albeit with $\theta$ differing from the conventional polar angle variable by a quarter-cycle.
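The theorem is easy to verify numerically: for a probe along one grid axis, the 1D DFT of the summed projection equals the corresponding zero-frequency row of the 2D DFT, exactly for discrete data. A sketch with an illustrative Gaussian blob:

```python
import numpy as np

N = 128
yy, xx = np.mgrid[0:N, 0:N]
f = np.exp(-((xx - 70.0)**2 + (yy - 50.0)**2) / 200.0)   # illustrative blob

projection = f.sum(axis=0)           # probe along the y axis
slice_kx = np.fft.fft2(f)[0, :]      # k_y = 0 row of the 2D DFT
# np.fft.fft(projection) equals slice_kx to machine precision
```

For the DFT this identity holds exactly, because summing over one axis is the same as evaluating that axis's transform at zero frequency.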
All that is in principle required for reconstruction of tomographic data is therefore a two-dimensional inverse Fourier transform of the one dimensional Fourier transform of projected data. This procedure is analytically exact, but complications arise due to the discrete sampling of real data. How should the inverse Fourier transform of data sampled discretely on a polar grid be computed?
Firstly, it is possible in general to represent a function discretely sampled in polar coordinates as a Fourier series in the angular variable (we shall now use $\theta$ to denote angles in real space and $\phi$ for those in the Fourier domain, to avoid confusion further into the derivation):

$$\tilde{F}(k, \phi) = \sum_{m} \tilde{F}_m(k)\, e^{im\phi}, \qquad \tilde{F}_m(k) = \frac{1}{M} \sum_{j=0}^{M-1} \tilde{F}(k, \phi_j)\, e^{-im\phi_j},$$

for a function sampled at $M$ equally-spaced angles $\phi_j = 2\pi j / M$.
It is now necessary to evaluate the inverse Fourier transform integral in two-dimensional polar coordinates. The integral is:

$$f(r, \theta) = \frac{1}{(2\pi)^2} \int_0^\infty \int_0^{2\pi} \tilde{F}(k, \phi)\, e^{ikr\cos(\phi - \pi/2 - \theta)}\, k \, \mathrm{d}\phi \, \mathrm{d}k.$$

The function $f$ is also expanded as a Fourier series, analogously to Equation 17:

$$f(r, \theta) = \sum_m f_m(r)\, e^{im\theta}.$$

Expanding both $f$ and $\tilde{F}$ into Fourier modes, defining $\psi = \phi - \pi/2 - \theta$ and matching like terms in $\exp(-im\theta)$ we obtain

$$f_m(r) = \frac{e^{im\pi/2}}{(2\pi)^2} \int_0^\infty \tilde{F}_m(k) \left[ \int_0^{2\pi} e^{i(kr\cos\psi + m\psi)} \, \mathrm{d}\psi \right] k \, \mathrm{d}k.$$

The Bessel function integral identity

$$\frac{1}{2\pi} \int_0^{2\pi} e^{i(kr\cos\psi + m\psi)} \, \mathrm{d}\psi = i^m J_m(kr)$$

allows the simplification:

$$f_m(r) = \frac{(-1)^m}{2\pi} \int_0^\infty \tilde{F}_m(k)\, J_m(kr)\, k \, \mathrm{d}k.$$

The individual azimuthal Fourier modes of a two-dimensional function and its Fourier transform are related by a Hankel transform whose order is given by the Fourier mode number. This is a generalisation of the "FHA cycle", which states that the Fourier transform of the Abel transform of a one-dimensional function is equivalent to the function's Hankel transform of order 0. As the Abel transform corresponds to a Radon transform of a circularly symmetric function, the FHA cycle is the $m = 0$ case of the above relation.
This suggests a procedure for recovery of functions from their projections F(s, θ). First, a Fourier transform is performed along the s-axis and the result is decomposed into a Fourier series in θ. Then, a Hankel transform of the appropriate order is applied to each angular mode, yielding the Fourier series components of the original function. This Fourier series is resolved with arbitrary angular resolution (as the angular dependence is given in terms of the known functions cos(mθ) and sin(mθ)), allowing the reconstruction of the original function to be displayed in a smooth, visually appealing manner even when only a few sampling points are used.
B. Discretisation and The Hankel Transform
The procedure of the previous section involves the computation and resolution of Fourier series as well as the computation of Fourier and Hankel transforms. For discretely sampled data, the computation of both Fourier series and Fourier transforms is often efficiently carried out using the well-known Fast Fourier Transform (FFT) algorithm. The algorithm is therefore only missing one detail, computation of the Hankel transform, before it may be implemented numerically. Numerous algorithms have been proposed to carry out this computation, including direct numerical quadrature; conversion of the Hankel transform to a convolution using a logarithmic change of variables; and methods using series expansions of the Bessel functions or of the function to be transformed [26,27].
Methods relying on series expansion of the transformed function, for example into a sum of Bessel functions, require unevenly sampled data and so are inapplicable to the case at hand. Others involve adaptive Gauss quadrature, which requires knowledge of the analytic form of the function to be transformed; for discretely sampled data these methods are therefore also inapplicable. The remaining algorithms are direct (e.g., trapezium-rule) integration of the Hankel transform integral (Eq. 22), and projection/back-projection methods.
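The direct trapezium-rule route can be sketched in a few lines. The function name and interface below are my own, and the validation relies on the standard identity that the order-0 Hankel transform of exp(−r²/2) is exp(−k²/2):

```python
import numpy as np
from scipy.special import jv

def hankel_transform(f_samples, r, k, order):
    """Order-`order` Hankel transform F(k) = integral of f(r) J_order(kr) r dr,
    computed by trapezium-rule quadrature on a uniform r-grid (a direct,
    unoptimised sketch; name and interface are illustrative)."""
    dr = r[1] - r[0]
    weights = np.full_like(r, dr)
    weights[0] = weights[-1] = dr / 2          # trapezium end-point weights
    integrand = f_samples * jv(order, np.outer(k, r)) * r
    return integrand @ weights

# Validation: the order-0 Hankel transform of exp(-r^2/2) is exp(-k^2/2).
r = np.linspace(0, 12, 4000)
k = np.linspace(0, 3, 50)
F = hankel_transform(np.exp(-r**2 / 2), r, k, order=0)
assert np.allclose(F, np.exp(-k**2 / 2), atol=1e-5)
```

The cost of this direct quadrature is O(N_k N_r) per mode, which is what motivates the back-projection shortcut discussed next.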
The method for calculating integer-order Hankel transforms proposed by Higgins and Munson [28] can be seen to be closely related to the standard filtered back-projection (FBP) algorithm, in the limit where the discrete sum in FBP becomes a continuous angular integral (high sampling rate) and the angular dependence of the integrand/summand is given by exp(imθ) (an angular Fourier mode). Therefore, the method of Higgins and Munson is approximated for any of the modes in Eq. 22 by passing a filtered back-projection algorithm a set of 'virtual projections' with the desired angular dependence, sampled with arbitrary angular density. The sum over all modes of these virtual projections matches the true projections exactly at the original sampling points, and its values will be C∞-smooth between sample points. The filtered back-projection of the virtual projections of each mode then returns the corresponding Fourier series component of the original, real-space function, f_m, including the exp(imθ) angular dependence.
Resolving the Fourier series of the real-space function is therefore simple:

f(r, θ) = Σ_m FBP[F_m(s) exp(imφ)].

By linearity of the Radon transform (which is inherited by filtered back-projection), the summation in Equation 23 may be moved inside the FBP computation:

f(r, θ) = FBP[Σ_m F_m(s) exp(imφ)] = FBP[F̃].

This equation provides a useful interpretation of F̃: it is a version of F 'enhanced' by interpolation in φ using its Fourier series. As noted above, the sum over modes m of the Fourier series agrees with the original samples F at the sampled angles and is 'optimal' in the sense that it is C∞-smooth and explicitly possesses the same 2π-periodicity that the true function must exhibit.
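This trigonometric-interpolation step can be sketched with a standard FFT zero-padding construction. The implementation below is illustrative, not the authors' code; it assumes view angles equally spaced over 2π and an output angular count at least as large as the input's:

```python
import numpy as np

def fourier_interpolate_angles(sinogram, n_out):
    """Interpolate a sinogram (shape: n_s x M views, equally spaced over
    2*pi) to n_out >= M view angles by resolving its Fourier series in the
    angular variable (zero-padding in angular frequency).  A sketch of the
    trigonometric-interpolation step only."""
    n_s, m = sinogram.shape
    modes = np.fft.fft(sinogram, axis=1)
    padded = np.zeros((n_s, n_out), dtype=complex)
    half = m // 2
    padded[:, :half] = modes[:, :half]           # non-negative frequencies
    padded[:, -half:] = modes[:, -half:]         # negative frequencies
    # Rescale so sample amplitudes are preserved, then invert.
    return np.real(np.fft.ifft(padded, axis=1)) * (n_out / m)

# A band-limited test: mode m = 2 sampled at 8 angles is recovered
# exactly at 32 angles.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
samples = np.cos(2 * theta)[None, :]             # a single s-row
fine = fourier_interpolate_angles(samples, 32)
theta_fine = np.linspace(0, 2 * np.pi, 32, endpoint=False)
assert np.allclose(fine[0], np.cos(2 * theta_fine), atol=1e-9)
```

The interpolant reproduces the original samples at the sampled angles by construction and is 2π-periodic, matching the smoothness properties noted above.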
The observation that the Fourier series is resolved immediately by 'pushing' the final summation inside the filtered back-projection has several advantages.
Firstly, it allows a simplification of the algorithm presented in the previous section by facilitating the use of existing implementations of filtered back-projection. As FBP acts directly on tomographic observations rather than their Fourier transforms, the FFT calculation along s is no longer needed. This FFT may happen inside the FBP algorithm to achieve its filtering of samples, but it could also be implemented using a direct convolution; the FBP algorithm is free to choose dynamically how to perform it, using speed heuristics as seen, for example, in the scipy function scipy.signal.convolve [29].
Secondly, the interpretation of the resulting algorithm as the application of a standard tomographic reconstruction algorithm to data enhanced using trigonometric interpolation implies that algorithms other than FBP could be used to implement that stage of the process.
C. Demonstration of Fourier-Interpolated Tomographic Reconstruction

Figure 2 was recreated using the Fourier interpolation technique to pre-enhance the set of all projections. The result is shown in Figure 3.

FIG. 3. Demonstration of the improvement attainable by using Fourier interpolation techniques. Images were produced identically to Figure 2, except that the set of projections was enhanced by interpolation to an angular frequency of 1024 view angles prior to reconstruction. Noise and streaking are significantly reduced, though performance is still quite poor for very small numbers of projections. Linear streaking has been replaced by circular streaking, reflecting this method's polar-coordinates formulation as opposed to the Cartesian formulation of standard FBP.

It can be seen that in all cases the properties of the interpolated reconstruction are improved versus the non-interpolated. For example, while the naïve application of filtered back-projection in Figure 2 results in functions featuring streaks that reach the edges of the image (and would continue arbitrarily far if the reconstruction were carried out on a larger domain), the reconstruction of Fourier-interpolated images is compactly supported on the smallest disk that completely contains the support of the original function. Effectively, the linear streaks seen in the FBP reconstructions correspond to circular streaks of constant radius in the reconstructed image, and these circular streaks will, for many functions, represent less of a deviation from the original function than the linear streaks characteristic of FBP in the sparse sampling regime. This correspondence reflects the relationship between standard FBP and the polar-coordinates approach taken in the derivation of this method: artefacts in FBP arise from 'smearing' the observations along the direction of observation; artefacts in Fourier-interpolated FBP represent a 'smearing' in the polar angle.
Further, the most noticeable artefacts in, for example, the reconstruction of Figure 3, panel c) appear in the region where the elliptical support of the original function does not fill the circular support of the reconstruction. It is therefore likely that the reconstruction would be improved further by employing a virtual transformation to the computational domain of the algorithm to ensure the best possible overlap between the original function's region of support and the disk containing that region. In the next section, this conjecture is investigated and a method is derived to achieve the necessary transformation.
V. ASPECT RATIO COMPENSATION: ELLIPTICAL TOMOGRAPHY
Many realistic functions are not best described as being supported on a disk, but have some aspect ratio not equal to unity. These functions, represented by their projections as a sinogram, have a width that oscillates with angle, an effect seen for example in the right panel of Figure 1. Altering the procedure used such that the object appears to have aspect ratio closer to unity may be expected to improve the reconstruction quality. This may be achieved by applying a single-axis scaling between the physical and computational domains of the problem. There are three things which must be considered when implementing such a scaling: first, the angular separation of observations in physical space becomes nonuniform in order to maintain uniform angular sampling in computational space; second, an individual scaling of the s-axis must be applied to projections, accounting for the stretching or shrinking of the axis perpendicular to the projection; and third, this must be compensated for using an inverse scaling of the function values to maintain equality of all projections' integrals along s. The modified Shepp-Logan phantom used in Figures 2 and 3 is defined on the two-dimensional domain [−1, 1] × [−1, 1] and composed of several ellipses of differing parameters. The support of this function is defined by the largest ellipse, which entirely contains all other ellipses and has minor and major semi-axis lengths in the ratio A = 3/4. We now detail the procedure for tomography of this phantom using aspect ratio compensation between the physical domain with coordinates (X, Y; R, Θ) and the computational domain with coordinates (x, y; r, θ).
The relation between Cartesian coordinates is such as to equalise the aspect ratio of the function under observation. Keeping x = X, this implies y = Y/A. Angles of projection are uniformly spaced in the computational domain:

θ_n = nπ/N,  n = 0, …, N − 1,

for N view angles. Using the relationship between Cartesian coordinates it is easy to derive the corresponding relationship for the angular variables Θ and θ:

tan Θ = A tan θ.

This has the effect of reducing angular spacing when the probe direction is close to the major axis and increasing angular spacing when close to the minor axis.
The transverse extent of the physical-space object varies with viewing angle, and this must also be compensated for. The physical transverse width of the ellipse is

w(Θ) = √(w²_max cos²Θ + w²_min sin²Θ) = w_max cos Θ √(1 + A⁻² tan²Θ) = w_max cos Θ √(1 + tan²θ) = w_max cos Θ / cos θ.
The s-axis of each projection is re-scaled by a factor w(Θ)/w_min = A cos Θ / cos θ to account for the transverse stretching caused by aspect-ratio correction, and the magnitudes of each projection's values are scaled by the inverse of this factor (the form of Equation 27 is used preferentially, as the limit Θ = θ = π/2 is not problematic in this form). This has the effect of eliminating the oscillation of the sinogram's width as a function of Θ. The techniques detailed above in Section III may then be applied to this modified sinogram, and the end result of the reconstruction is stretched to apply the physically correct aspect ratio. The result of this procedure is shown in Figure 5.
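The coordinate maps used above can be written down compactly. The following NumPy sketch uses my own function names, derives tan Θ = A tan θ from the stated scaling x = X, y = Y/A, and evaluates cos Θ/cos θ in a form that stays regular at θ = π/2:

```python
import numpy as np

def physical_angle(theta, a):
    """Physical-domain angle Theta for computational angle theta, from the
    single-axis scaling x = X, y = Y/a (so tan Theta = a * tan theta);
    arctan2 keeps the mapping quadrant-safe.  Names are illustrative."""
    return np.arctan2(a * np.sin(theta), np.cos(theta))

def s_axis_scale(theta, a):
    """cos(Theta)/cos(theta) evaluated in a form regular at theta = pi/2:
    with tan Theta = a tan theta this equals 1/sqrt(cos^2 + a^2 sin^2)."""
    return 1.0 / np.sqrt(np.cos(theta) ** 2 + (a * np.sin(theta)) ** 2)

# With no aspect correction (a = 1) both maps reduce to the identity.
theta = np.linspace(0, np.pi, 7)
assert np.allclose(s_axis_scale(theta, 1.0), 1.0)

# The regularised form agrees with the direct ratio away from pi/2.
t, a = 0.7, 0.75
assert np.isclose(np.cos(physical_angle(t, a)) / np.cos(t), s_axis_scale(t, a))
```

Writing the scale factor as 1/√(cos²θ + A² sin²θ) avoids the 0/0 evaluation at θ = π/2, which is the same reason the text prefers the Equation 27 form there.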
VI. APPLICATIONS
We now turn our attention to the important example of imaging laser-plasma interactions with very large aspect ratios, such as channelling processes [30], jets in laboratory plasma astrophysics experiments [31] and z-pinches [32]. To demonstrate its utility for the first of these applications, magnetic fields have been extracted from a particle-in-cell simulation of a high-intensity laser pulse propagating into a plasma with a pre-formed density gradient. A representation of these fields is shown in Figure 6.
The results of reconstructing this field with and without both Fourier interpolation and 10:1 aspect ratio compensation are shown in Figure 6, using a range of angular sampling rates to show the deterioration in reconstruction quality for each method as the number of observations is reduced. The highly elongated nature of the field displayed in Figure 6 causes severe problems in the absence of aspect ratio compensation, though Fourier-series interpolation improves the appearance of the final result. Even with aspect-ratio compensation applied, without Fourier interpolation the result still suffers from streaking artefacts which can obscure the true field. At all sampling rates tested, applying both techniques together performs better than either individually, demonstrating that the noise-and artefact-reduction properties of Fourier-series interpolation complement the more efficient sampling of Fourier space allowed by aspect ratio compensation.
VII. SUMMARY AND CONCLUSIONS
FIG. 6. Above: Z-component of magnetic field extracted from a particle-in-cell simulation of a laser-plasma channelling interaction. The field is shown in two dimensions, averaged over the z-axis of the simulation to improve legibility. All of the methods presented here are however applicable to three-dimensional data sets as well as two-dimensional ones. Below: Demonstration of this paper's proposed techniques at a range of sampling rates. N denotes the number of angular observations included in the reconstruction. Column a) presents the results of applying naïve filtered back-projection, b) includes aspect-ratio compensation, c) includes Fourier-series interpolation and d) includes both. Aspect ratio compensation uses a ratio of 10:1. While each technique struggles with this field when used individually, the composition of both techniques produces results of good quality even for very sparse sampling. All reconstructions are plotted using the same colour scaling as the original field.

Proton radiography has found many applications for probing magnetic field structures in plasma. However, its extension to three-dimensional reconstruction remains a significant challenge. To this end, two novel pre-processing techniques for improving the performance of a standard tomographic reconstruction algorithm, filtered back-projection, have been explored in this article.
First, Fourier decomposition of observations in the angular parameter was proposed as a method for exact inversion of the Radon transform, based on the generalised Fourier-Hankel-Abel cycle of integral transforms which derives from the Fourier projection-slice theorem. By approximating the calculation of the general integer-order Hankel transform using back-projection, one observes that a single filtered back-projection of interpolated data is able to replace the calculation of a different integer-order Hankel transform per angular mode, greatly reducing the computational complexity of the method.
Second, based on the properties of Fourier-interpolated reconstructions, this method of tomography has been shown to achieve better accuracy for small numbers of observations when the aspect ratio of the function being observed is close to unity. To benefit from this observation, relations linking physical space and a computational space which differ by a non-uniform scaling have been derived, and these relations allow aspect ratios far from unity to be compensated for. The effectiveness of this compensation technique has been demonstrated using a modified Shepp-Logan phantom, which is supported on an ellipse of aspect ratio 3:4.
The effectiveness of these new proposed pre-processing enhancement steps, both individually and in combination, has been compared to 'pure' filtered back-projection. It has been shown that in the case of the magnetic field of a simulated laser channel in dense plasma, each new pre-processing method improves the quality of reconstruction, and that combining them produces the best results of all. This significantly improves the prospects of a tomographic approach to proton radiography being implemented.
Finally, one notes that the methods presented here are also applicable to other path-integrated plasma probe diagnostics [20] that have applications across the natural sciences and engineering.
a. Acknowledgements This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2019-2020 under grant agreement No 633053, with support of STFC grant ST/P002048/1 and EPSRC grants EP/R029148/1 and EP/L000237/1. The views and opinions expressed herein do not necessarily reflect those of the European Commission. BTS acknowledges support from UKRI-EPSRC and AWE plc. PAN acknowledges support from OxCHEDS for his William Penney Fellowship. The simulations presented herein were carried out using the ARCHER2 UK National Supercomputing Service. The authors gratefully acknowledge the support of all the staff at the Central Laser Facility, UKRI-STFC Rutherford Appleton Laboratory and the ORION Laser Facility at AWE Aldermaston (particularly discussions with Gavin Crow) while undertaking this research.
Redetermination of β-Ba(PO3)2
In comparison with the previous structure determination of the β-modification of barium catena-polyphosphate that was based on Weissenberg film data [Grenier et al. (1967). Bull. Soc. Fr. Minéral. Cristallogr. 90, 24–31], the current CCD-data-based redetermination reveals all atoms with anisotropic displacement parameters, standard uncertainties for the atomic coordinates, and the determination of the absolute structure. Moreover, a much higher accuracy in terms of the bond-length distribution for the polyphosphate chain, with two shorter and two longer P—O distances, was achieved. The structure consists of polyphosphate chains extending parallel to [100] with a periodicity of two PO4 tetrahedra. The Ba2+ cations are located between the chains and are surrounded by ten O atoms in the form of a distorted coordination polyhedron, with Ba—O distances ranging from 2.765 (3) to 3.143 (3) Å, also reflecting the higher precision of the current redetermination.
Comment
Polymorphism of Ba(PO3)2 with three modifications has been reported by Grenier & Martin (1975): the stable β-form transforms to the high-temperature α-form at 1058 K, and the γ-form transforms at 978 K to the β-form. Structure determinations were carried out for the γ-form (Coing-Boyat et al., 1978) and for the β-form (Grenier et al., 1967). The crystal structure of α-Ba(PO3)2 is as yet unknown. Comparative discussions of the structural set-up of the β- and γ-forms of Ba(PO3)2 and of other divalent long-chain polyphosphates were given by Durif (1995).
During experiments intended to isolate crystals of α-Ba(PO3)2 by quenching the reaction product from the recrystallized melt at temperatures above the indicated transition point, high-quality crystals of β-Ba(PO3)2 were obtained instead. Since the first structure refinement of this modification was based on Weissenberg film data and converged with a relatively high residual R = 0.1, with atoms refined only with isotropic displacement factors and without indication of standard uncertainties for the fractional atomic coordinates, a re-refinement of the structure with modern CCD-based data seemed appropriate. The results of this re-refinement are reported here, confirming in principle the results of Grenier et al. (1967) while achieving bond lengths and angles with much higher accuracy and precision, as exemplified by a comparison of the P—O bond lengths (Table 1).
The catena-polyphosphate chain has a periodicity of two PO4 tetrahedra and extends parallel to [100] (Fig. 1). In comparison with the previous structure refinement (Grenier et al., 1967), the bond lengths determined in the present refinement are in much better agreement with the usually observed bond-length distribution in such long-chain polyphosphates (Durif, 1995), with two shorter and two longer P—O distances, each with similar values (Table 1).
The Ba2+ cation is located between the chains and is surrounded by ten oxygen atoms in an irregular coordination sphere with Ba—O distances in the range from 2.765 (3) to 3.143 (3) Å (Fig. 2).
Experimental
Stoichiometric amounts of BaCO3 and (NH4)2HPO4 (molar ratio 1:2), with a 3% excess of the phosphate precursor, were finely ground, heated in a platinum crucible to 1173 K, and slowly cooled to 1073 K at a rate of 2 K h⁻¹. Then the crucible was quenched in a cold water bath. Colourless fragments of the title compound were cut from the clear, transparent reaction product.
Refinement
In contrast to the previous structure refinement with a = 4.510 (2), b = 13.44 (2), c = 8.36 (5) Å, the reduced cell setting was chosen for the current refinement. Structure data were finally standardized with STRUCTURE-TIDY (Gelato & Parthé, 1987).
Anesthetic considerations in supravalvular aortic stenosis: A case series
Supravalvular aortic stenosis (SVAS) presents either as a localized narrowing at the sinotubular junction or as a diffuse form with additional involvement of the ascending aorta, aortic arch, and its branches. Associated lesions of the aortic valve, coronary artery narrowing, and pulmonary artery stenosis can further complicate the disease process. These patients are inherently at risk of developing myocardial ischemia, particularly in the setting of anesthesia or sedation. The left ventricular hypertrophy secondary to the obstruction results in increased left ventricular wall tension and myocardial oxygen consumption. Associated anatomic factors in the coronary arteries can further impair coronary blood flow. Any anesthetic drug that further increases oxygen consumption or decreases coronary blood flow will result in an imbalance and increase the risk of cardiac arrest. We present a series of three patients with SVAS who were operated on at our institute and subsequently discharged with good outcomes. The hemodynamic goal during the perioperative period should be to balance the myocardial oxygen supply-demand ratio. Extreme vigilance and aggressive resuscitative measures are needed to prevent any adverse myocardial event, which can happen immediately after anesthetic induction or during periods of intense sympathetic stimulation such as laryngoscopy, sternotomy, aortic cannulation, or emergence from anesthesia.
INTRODUCTION
Supravalvular aortic stenosis (SVAS) is a rare congenital cardiac abnormality due to elastin gene degeneration, characterized by an exaggerated narrowing of the aorta at the sino-tubular junction (STJ) (Figure 1). Sometimes, it can present as a diffuse form with additional involvement of the ascending aorta, aortic arch, and its branches.1 Associated lesions such as valvular aortic stenosis, bicuspid aortic valve, coronary artery narrowing, pulmonary artery stenosis, and coarctation of the aorta can further complicate the disease process. Surgical repair is usually performed at the earliest to prevent the progression of the disease. Management poses a significant challenge to anesthesiologists, as sudden cardiac deaths owing to myocardial ischemia have been reported frequently after anesthesia in these patients. We present here an exemplary case series of three children with SVAS who were operated on in our institute and subsequently discharged with a good outcome.
CASE 1
A 5-year-old second-born male child presented with a history of breathlessness and palpitations associated with a slight limitation of ordinary activity for 2 months. There was no history of syncope, cyanotic spells, or chest pain. Birth history revealed an uncomplicated term pregnancy and spontaneous vaginal delivery with a good Apgar score and no post-natal intensive care unit stay. Immunization and milestones were at par with age. In addition, he had a history of removal of esophageal foreign bodies under general anesthesia at 4 years of age, which was uneventful. Family history revealed similar symptoms in the elder brother and paternal aunt.
On clinical examination, the patient was conscious and oriented, with intact higher mental function, typical facies with a bulging forehead, broad nose, broad lips, and increased interdental distance, and no speech problems. Baseline vitals were normal, with no evidence of coarctation. Systemic examination revealed an ejection systolic murmur, grade 3/6, predominantly in the aortic area, radiating to the left sternal border, right and left supraclavicular regions, and back. Electrocardiogram (ECG) showed sinus rhythm and left ventricular hypertrophy (LVH). Chest X-ray (CXR) was suggestive of an increased cardiothoracic ratio of 0.7, with normal bilateral lung fields and clear costophrenic angles. An echocardiogram (ECHO) revealed severe SVAS (peak gradient of 90 mmHg), normal valves, and normal biventricular function (Figure 2). Cardiac computed tomography and angiography confirmed the diagnosis, showing brachiocephalic trunk ostial narrowing of 20% and left common carotid artery (CCA) occlusion of 15%. Repair was planned under cardiopulmonary bypass (CPB).
CASE 2
The 7-year-old first-born male child presented with a history of palpitations and chest pain associated with slight limitation during ordinary activity for 3 months. He did not have any other complaints. Birth and developmental histories were normal. Family history revealed similar symptoms in the younger sibling and paternal aunt.
He had intact higher mental functions, normal facies, and a normal airway. All the baseline vitals were within the acceptable range. Systemic examination revealed an ejection systolic murmur, grade 3/6, in the right upper sternal border with radiation to the right cervical region. ECG showed sinus rhythm with LVH and a strain pattern in the chest leads, and CXR showed an increased cardiothoracic ratio of 0.65. ECHO revealed severe SVAS (gradient of 120 mmHg), severe concentric LVH, asymmetrical septal hypertrophy, left ventricular outflow tract obstruction with a gradient of 60 mmHg, and mild mitral regurgitation (MR) (Figure 3). Additional findings of ostial stenosis involving the brachiocephalic trunk and left CCA, with a small left superior vena cava, were evident on contrast-enhanced CT (CECT).
CASE 3
A 7-year-old child presented with a history of exertional dyspnea for 4 years. The only other relevant history was of maternal death soon after delivery due to a brain hemorrhage. No similar complaints were present in the family. Systemic examination revealed an ejection systolic murmur, grade 3/6, in the right upper sternal border with radiation to the neck region. ECG showed LVH in the chest leads, and CXR showed a cardiothoracic ratio of 0.6. ECHO revealed moderate SVAS (peak gradient of 77 mmHg, mean gradient of 30 mmHg) and concentric LVH (Figure 4). Cardiac CT revealed additional findings of left CCA origin narrowing and bilateral segmental pulmonary artery narrowing.
MONITORING AND ANESTHESIA
All three patients were planned for aortoplasty (Brom's procedure) with aortic branchplasty. Monitoring and anesthetic management were performed in a similar manner in all three cases. In the operating room, ASA standard monitors were attached, intravenous access was secured, and maintenance fluid was initiated. The patient was preoxygenated with 100% oxygen; intravenous induction was performed in a titrated manner with 3 μg/kg of fentanyl, 2 mg/kg of etomidate, and 0.1 mg/kg of vecuronium; and the airway was secured with an appropriately sized endotracheal tube (ETT). Drug-induced hypotension was immediately treated with fluids and phenylephrine boluses. A right femoral arterial line and a central venous line were secured post-induction. Additional monitoring included core temperature with nasopharyngeal and rectal probes, urine output, near-infrared spectroscopy (NIRS), transesophageal echocardiography (TEE), activated clotting time, and arterial blood gas analysis.
SURGERY
Brom's aortoplasty procedure was performed on CPB under moderate hypothermia (28°C), and myocardial protection was achieved with antegrade cold blood cardioplegia (4°C). In case 2, a myomectomy was also performed for the septal hypertrophy. During the repair of the innominate artery and CCA, cerebral protection was provided using deep hypothermic circulatory arrest with standard techniques. Patients were cooled to a temperature of 18-20°C, and topical cooling of the head was done with ice packs. Injection thiopentone 15 mg/kg and methylprednisolone 30 mg/kg were added on the pump, and NIRS monitoring was performed. After the completion of the repair, rewarming was initiated, de-airing was done, and the aorta was unclamped. Weaning from CPB was done after checking all vital and metabolic parameters and with TEE guidance; inotropic infusion was started before coming off CPB. Decannulation, protamine administration, and hemostasis were done as per the standard approach. Post-repair transesophageal echocardiographic assessment was performed to confirm surgical adequacy and cardiac function.
The post-operative echo in all three cases showed a mean gradient of <10 mmHg across the LVOT and good mobility of the valves without any reflux. At the 1-month follow-up, all of them were found to be in functional class I, without any cardiovascular symptoms.
DISCUSSION
SVAS was first described in 1930 and has an incidence of 1:20,000 live births.1,2 It is characterized by a systemic elastin (ELN) arteriopathy due to a spontaneous or inherited microdeletion in the elastin gene located on chromosome 7.3 This leads to an irregular, pathologic deposition of elastin fibers in the aortic wall combined with reduced elastin content, resulting in abnormal, excessive collagen deposition in the aortic media and hypertrophy of smooth muscle cells, causing an obstructive arteriopathy. Most cases show a characteristic hourglass narrowing of the aorta that develops at the STJ, while the remaining cases have a diffuse tubular narrowing of the ascending aorta, which may extend into the aortic arch and the origin of the brachiocephalic vessels. The aortic valve may also be pathologically involved, which can become an additional source of obstruction. Partial adhesion of the valve leaflet hinge-points to the hypertrophied STJ can restrict coronary blood flow into the sinus of Valsalva, affecting myocardial perfusion.4,5 SVAS can be non-syndromic or syndromic, as in Williams-Beuren syndrome (WS). WS is a complex developmental genetic disorder presenting with neurobehavioral (low intelligence), craniofacial (dysmorphic facies), and cardiovascular and metabolic (hypercalcemia) abnormalities. Non-syndromic patients have normal intelligence and lack dysmorphic features.
Patients usually present with a systolic murmur and become symptomatic before the age of 20 years. Symptoms such as dyspnea, angina, and syncope, similar to those of valvular aortic stenosis, are seen. If left untreated, patients can develop cardiac failure, eventually leading to death. The usual workup consists of 2D/3D/Doppler echocardiography, ECG (signs of LVH with a strain pattern, ST-T changes), magnetic resonance imaging, or CT aortography. Angiography gives information on associated vascular anomalies in the coronaries, aortic arch, arch vessels, or other distal branches and pulmonary arteries. The identification of the genetic defect is essential for a definitive diagnosis and is done by fluorescence in situ hybridization, direct sequencing, multiplex ligation probe amplification, and real-time quantitative polymerase chain reaction.6 Surgical correction should ideally be performed in infancy to prevent early aortic valve degeneration, coronary artery pathology, and LVH. The overall perioperative mortality risk is about 3-7%.8,9 No technique is considered the gold standard for SVAS repair, with each having its pros and cons.
The management presents a significant challenge for anesthesiologists due to the different grades of severity of obstruction, the pediatric age group, possible multi-system involvement, and the lack of standard anesthetic management guidelines. A thorough pre-anesthetic assessment, preferably 1-2 weeks before the planned procedure, is recommended, which should focus on the pathophysiological effects of SVAS as well as other clinical manifestations of WS. 10 Screening should be done for active myocardial ischemia, patients at risk for ischemia, and other systemic involvement. Children with WS can exhibit neurocognitive developmental delays and significant procedural anxiety, which can make even painless procedures difficult without sedation. An airway assessment should screen for mandibular hypoplasia and dental anomalies, which might cause difficulty in airway management.
Patients with significant SVAS (gradient >40 mm Hg), biventricular outflow tract disease, documented coronary anomalies, or a combination of any of the three; patients with WS with QT prolongation; and recently operated cases are categorized as high risk with an increased propensity for myocardial events. [13][14][15] The risk of the surgical procedure should also be taken into account. High-risk patients should be anesthetized only in a setting with the availability of extracorporeal membrane oxygenation (ECMO).
Rapid hemodynamic deterioration, unresponsiveness to resuscitation, and sudden deaths have been reported to occur at a high rate while undergoing procedures under sedation or anesthesia. This is of concern, especially because these patients often have to undergo several such procedures during their lifetimes. 16,17 Myocardial ischemia is implicated as the cause of the majority of these reported sudden deaths. Sudden death happened mainly with associated coronary arteriopathy/ostial stenosis or when there was biventricular outflow tract obstruction. Burch et al. described a series of nineteen pediatric patients who suffered cardiac arrest during procedures, and the suspected cause was myocardial ischemia caused by reduced coronary blood flow. 16 Significant left ventricular outflow tract obstruction in SVAS can result in compensatory LVH, increasing the wall tension and thus the propensity for subendocardial ischemia. 18 An associated coronary arteriopathy further impairs the coronary blood flow, which aggravates this insult.
Hemodynamic goals aim at maintaining the myocardial oxygen supply-demand balance and are as follows: (1) maintain adequate preload; overloading a noncompliant LV can result in pulmonary venous congestion, whereas underfilling will reduce the LV stroke volume; (2) maintain sinus rhythm and a heart rate of around 60-80/min, avoiding tachycardia, which can increase oxygen consumption and reduce diastolic time; (3) maintain contractility; (4) maintain afterload, as any fall in blood pressure can affect coronary perfusion; and (5) avoid increases in pulmonary vascular resistance. In addition, any factor that will affect oxygen content and delivery, such as anemia, hypoxemia, and hypothermia, should be avoided.
Hemodynamic management is usually done with fluids and alpha agonists, but if LV function is poor, inotropes may be needed. Anesthetic agents used routinely have varied effects on the heart; some are known to cause myocardial suppression, reduce afterload, or increase myocardial oxygen consumption, any of which can cause ischemia in these vulnerable groups. Apart from the drug effects, patients become vulnerable to ischemia during periods of intense sympathetic activity such as laryngoscopy, sternotomy, aortic cannulation, or emergence from anesthesia.
An opioid-based induction technique obviates the vasodilatation and negative inotropy that can occur with thiopentone, propofol, or inhalational agents. Even in patients at risk for ischemia, high-dose opioids have been used safely. 11 There are reports of cardiac arrests with even low to incremental usage of sevoflurane, so its use has been limited to low- and moderate-risk cases where intravascular access is not possible before induction. 16 For obtaining venous access before induction, intramuscular ketamine is a good choice as it maintains contractility and SVR but may produce some amount of tachycardia, increasing myocardial oxygen consumption. Etomidate, due to its cardio-stable nature, can be an alternative to opioids in high-risk cases. Drugs that prolong the QT interval, like the 5HT3 inhibitor ondansetron, are best avoided in patients with WS. 18,19 Regardless of the technique, there is always a risk of myocardial ischemia and arrest in high-risk patients; quick resuscitative measures, including early institution of ECMO, may be lifesaving in such cases.
Post-operative monitoring is recommended in all patients. Moderate- to high-risk patients should be admitted for prolonged observation to a location with continuous monitoring. For high-risk cases, ECMO backup should be available. [21][22][23][24] Early post-operative complications of surgical correction are bleeding, tamponade, arrhythmias, and heart block. Resuscitation after sternotomy should follow cardiac intensive care guidelines and, in addition to excluding airway and breathing problems, should focus on early defibrillation or pacing and early reopening.
CONCLUSION
Patients with SVAS continue to challenge anesthesiologists, as the risks associated with anesthesia and sedation are high in this population. Optimal management involves a good understanding of the pathophysiology, careful planning and patient preparation, and a multidisciplinary team approach among anesthesiologists, cardiologists, and surgeons.
Figure 2: Deep transgastric long-axis view on transesophageal echocardiography showing a peak gradient of 91 mm Hg and a mean gradient of 48 mm Hg (case 1)
Figure 4: Deep transgastric long-axis view showing a peak gradient of 77 mm Hg and a mean gradient of 31 mm Hg (case 3)
Generic irreducibility of Laplace eigenspaces on certain compact Lie groups
If $G$ is a compact Lie group endowed with a left invariant metric $g$, then $G$ acts via pullback by isometries on each eigenspace of the associated Laplace operator $\Delta_g$. We establish algebraic criteria for the existence of left invariant metrics $g$ on $G$ such that each eigenspace of $\Delta_g$, regarded as the real vector space of the corresponding real eigenfunctions, is irreducible under the action of $G$. We prove that generic left invariant metrics on the Lie groups $G=\operatorname{SU}(2)\times\ldots\times\operatorname{SU}(2)\times T$, where $T$ is a (possibly trivial) torus, have the property just described. The same holds for quotients of such groups $G$ by discrete central subgroups. In particular, it also holds for $\operatorname{SO}(3)$, $\operatorname{U}(2)$, $\operatorname{SO}(4)$.
A classical result by K. Uhlenbeck [4] says that for a generic Riemannian metric g on M , all eigenvalues of ∆ g are simple (i.e., have multiplicity one). At the other extreme, if g is a homogeneous metric, i.e., the group of isometries acts transitively on M , then every nonzero eigenvalue is necessarily multiple; this is a consequence of the fact that each eigenspace is invariant under pullback by isometries. An interesting question in this context, raised by V. Guillemin, is whether on a compact Lie group G there always exists a left invariant metric g such that G acts irreducibly on each eigenspace of ∆ g . In other words, the question is whether for metrics g which are "generic" within the set of left invariant Riemannian metrics on G, the eigenvalues of ∆ g have no higher multiplicities than necessitated by the prescribed symmetries.
For left invariant metrics on G, the associated Laplacian can be expressed via the right regular representation of G on C ∞ (G, C). Note that the case of biinvariant metrics on simple compact Lie groups represents the most "nongeneric" case here: For such metrics the Laplacian corresponds to a scalar multiple of the Casimir operator, and thus has only one eigenvalue on each isotypical component in C ∞ (G, C); since the isotypical components are not irreducible (by the Peter-Weyl theorem), the eigenspaces are certainly not irreducible for a biinvariant metric.
Using the explicit description of the isotypical components of the right regular representation from the Peter-Weyl theorem, one quickly arrives at a tentative reformulation for irreducibility of the eigenspaces of ∆ g for a given left invariant metric g: Roughly speaking, for each irreducible representation ρ V : G → GL(V ) the eigenvalues of the operator ∆ V g := − ∑ n k=1 ((ρ V ) * (Y k )) 2 (where {Y 1 , . . . , Y n } is an orthonormal basis of g = T e G) should be simple, and two nonisomorphic representations should not share a common eigenvalue (see Remark 2.4(ii)). However, these properties can never be satisfied if G admits irreducible representations of so-called quaternionic type (on which all eigenvalues will have even multiplicity) or of complex type (on which the eigenvalues will be the same as on the nonisomorphic dual representation); see Remark 2.5.
Fortunately, it turns out that when one considers real-valued eigenfunctions, these complications no longer form an obstacle to irreducibility of the eigenspaces. Rather, the latter then becomes equivalent to the following three conditions being jointly satisfied: simple eigenvalues of ∆ V g on each irreducible representation V of real or complex type; eigenvalues of multiplicity precisely two on each irreducible representation of quaternionic type; and no common eigenvalues of ∆ V g , ∆ W g whenever neither V nor V * is isomorphic to W (Corollary 3.3). Expressing these conditions in terms of certain resultants or discriminants of the characteristic polynomials of the operators ∆ V g (or of their derivatives) being nonzero leads to the description of the set of left invariant metrics with the desired property as the intersection of the complements of the zero sets of countably many polynomials on Sym 2 (g). Since the set of left invariant metrics corresponds to an open subset of Sym 2 (g) ⊂ g ⊗ g, it simplifies the discussion to regard these polynomials as defined on all of Sym 2 (g). Summarizing, the existence of a left invariant metric with irreducible real eigenspaces is equivalent to the condition that none of certain countably many polynomials on Sym 2 (g) is the zero polynomial; see Proposition 3.7. In that case, the intersection of the complements of the zero sets will not only be nonempty, but even residual.
We apply this general description to prove that the Lie group SU(2) and also products of the form SU(2) × . . . × SU(2) × T , where T is a torus, do have the property that generic left invariant metrics on these groups have irreducible real eigenspaces; see Theorems 4.1 and 4.7. For SU(2), the key of the proof consists in showing that for those of its irreducible representations V which are of real type, the eigenvalues of ∆ V g are generically simple; the other conditions of Proposition 3.7 are almost obvious here. For products SU(2) × SU(2), the main difficulty is showing generic simplicity of eigenvalues on irreducible representations of real type of the form V ⊗ W , where V and W are irreducible representations of SU(2) of quaternionic type; see Remark 4.6 and Lemma 4.8(ii).
Finally, we observe that if a compact Lie group G satisfies the conditions of Proposition 3.7, then so do its quotients by discrete central subgroups; see Lemma 4.9. Therefore the result extends, for example, to SO(3), U(2), and SO(4).
Note that all of the operators ∆ V g are hermitian with respect to a G-invariant hermitian inner product on V . It is well-known that for analytic 1-parameter families (although not for analytic multiparameter families) of such operators, the eigenvalues are analytic functions of the parameter. This fact and methods from perturbation theory as in [3] might be useful when examining the problem for other groups. However, the fact that operators of the form ∆ V g lie in a quite small subset of all hermitian operators on V constitutes a major difficulty. Our proofs for SU(2) and SU(2) × . . . × SU(2) × T n actually do not use any general perturbation theoretic arguments. This paper is organized as follows: In Section 2, we state some basic facts about complex irreducible representations of compact Lie groups G and describe how the Laplace operator ∆ g associated with a left invariant metric g on G acts on the isotypical components of the right regular representation of G on C ∞ (G, C).
In Section 3, we establish representation theoretic criteria for the existence of a left invariant metric g on G such that each real eigenspace of ∆ g is irreducible (Proposition 3.7). We observe that in case of existence, generic left invariant metrics on G have the same property. As an illustration, we discuss the case G = T n (where the said property of generic left invariant metrics is well-known).
In Section 4, we first prove that the Laplace operators ∆ g associated with generic left invariant metrics on SU(2) have irreducible real eigenspaces (Theorem 4.1). After examining which of the criteria of Proposition 3.7 are, resp. are not, easily seen to be inherited by products of two Lie groups from their factors, we extend the above result to products of the form SU(2)×. . . ×SU(2)× T n (Theorem 4.7) and, as a corollary, to quotients of these groups by discrete central subgroups.
The author would like to thank Carolyn S. Gordon, David L. Webb, and Victor Guillemin for inspiring discussions, and the latter especially for first drawing our attention to the topic. Moreover, she would like to thank Dartmouth College for its hospitality during a stay where this research was initiated.
(i) Throughout the paper, we let G be an n-dimensional compact Lie group with Lie algebra g.
By ℓ x : G → G (resp. r x : G → G) we denote left (resp. right) multiplication by x ∈ G. By L (resp. R) we denote the left regular (resp. right regular) unitary representation of G on L 2 (G, C), given by for f ∈ L 2 (G, C). Of course, the two regular representations are isomorphic to each other via f → f • inv. (ii) If ρ is a representation of G on some real or complex vector space V , then V together with the action of G by ρ is called a G-module. We choose sets Irr(G, C) and Irr(G, R) of representatives of isomorphism classes of irreducible real, resp. complex, G-modules.
Since G is compact, all irreducible G-modules are finite dimensional. (iii) For an irreducible complex G-module V , denote by I(V ) ⊂ L 2 (G, C) the V -isotypical component with respect to the right regular representation R on L 2 (G, C). (iv) A complex irreducible G-module V is called of real type (resp. of quaternionic type) if there exists a conjugate linear G-map J : V → V such that J 2 = Id (resp. J 2 = −Id); V is called of complex type if it is of neither real nor quaternionic type.
Lemma 2.2 (see, e.g., [1], section II.6). Irr(G, C) is the disjoint union of Irr(G, C) R , Irr(G, C) C , and Irr(G, C) H , where these denote the subsets consisting of those elements which are of real, resp. complex, resp. quaternionic type. For V ∈ Irr(G, C) these properties can be characterized as follows: Thus, on V * ⊗ V one has In particular, I(V ) is not only the V -isotypical component with respect to R, but also the V * -isotypical component with respect to L.
Remark 2.4. Let g be a left invariant Riemannian metric on G.
(i) The Laplace operator ∆ g associated with g acts on C ∞ (G, C) by ∆ g f (y) = − ∑ n k=1 (d 2 /dt 2 )| t=0 f (y e tY k ), where {Y 1 , . . . , Y n } is a g-orthonormal basis of g. This well-known formula follows from unimodularity of G and the fact that for each y ∈ G, the initial velocity vectors of the curves t → ye tY k constitute a g-orthonormal basis at y.
(ii) For each V ∈ Irr(G, C), the isotypical component I(V ) is invariant under ∆ g by (i) and Remark 2.3. More precisely, by (2): In particular, each eigenvalue of the restriction of ∆ g to the complex vector space I(V ) has multiplicity at least dim V * = dim V , and irreducibility of the eigenspaces of ∆ g| I(V ) w.r.t. the left regular representation L is equivalent to these multiplicities being precisely dim V and the eigenvalues of ∆ V g being simple. (iii) In the context of (ii), note that ( Thus, the dual basis of an eigenbasis of ∆ V g is always an eigenbasis of ∆ V * g with the same eigenvalues. is of complex type, i.e., V ∼ = V * , then the two isotypical components I(V ) and I(V * ) do not coincide. However, by Remark 2.4(ii) and (iii), the eigenvalues of ∆ g on I(V ) are the same as those on I(V * ), for any left invariant metric g on G. In particular, the corresponding eigenspaces are not irreducible w.r.t. L.
Since the eigenvalues of ∆ V g are real, this invariance together with J 2 = −Id implies that each eigenspace is of even dimension. In particular, it follows by 2.4(ii) that the eigenspaces of ∆ g| I(V ) itself are never irreducible w.r.t. L if V is of quaternionic type.
The situation just described changes if one shifts attention to irreducibility of real eigenspaces, as we will see in the following section.
Irreducibility conditions for real eigenspaces
V of real or quaternionic type, Obviously, E V is invariant under L and R; moreover: (ii) For any left invariant metric g on G, the following conditions are equivalent: , and I(V ) = I(V * ) sinceV and V * are isomorphic. In particular, I(V )+I(V * ) is invariant under the projections to the real and imaginary parts of functions. The statement now follows by recalling that (2) and (3), C µ V is invariant under L and, as a complex G-module, satisfies (Recall from Remark 2.5(ii) that m is necessarily even if V is of quaternionic type.) On the other hand, we clearly have C µ V = E µ V ⊗ C by (i) and since E V is invariant under ∆ g . Regarded as a real G-module, C µ V is thus isomorphic to E µ V ⊕E µ V on the one hand, and to U ⊕m ⊕U ⊕m , resp. U ⊕m/2 ⊕ U ⊕m/2 , on the other hand. Since U is irreducible, we conclude In particular, E µ V is irreducible if and only if m = 1 for V of real or complex type, resp. m = 2 for V of quaternionic type. Notation and Remarks 3.4.
In fact, if we identify Sym 2 (g) with the space of real symmetric n × n-matrices by fixing a basis of g and the corresponding canonical basis of Sym 2 (g), then Sym 2 + (g) corresponds to the subset of positive definite matrices.) for Y, Z ∈ g, and by linear extension. Note that any endomorphism in the image of D V is diagonalizable and has only real eigenvalues because it is a hermitian map with respect to any G-invariant hermitian inner product on V .
(i) Let p V be the map sending s ∈ Sym 2 (g) to the characteristic polynomial of D V (s). (ii) By res we denote the resultant; see, e.g., [2]. For two polynomials p, q ∈ C[X] the number res(p, q) is given by a certain polynomial in the coefficients of p and q, and res(p, q) ≠ 0 ⇐⇒ p and q have no common zeros.
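To make the resultant criterion concrete, here is a small illustrative sketch (ours, not part of the paper; all function names are our own): the resultant can be computed as the determinant of the Sylvester matrix of the two polynomials, and it is nonzero exactly when the polynomials share no root, so res(p, p′) detects whether p has only simple roots.

```python
from fractions import Fraction

def sylvester(p, q):
    """Sylvester matrix of p and q, given as coefficient lists (highest degree first)."""
    m, n = len(p) - 1, len(q) - 1
    rows = [[0] * i + p + [0] * (n - 1 - i) for i in range(n)]   # n shifted copies of p
    rows += [[0] * i + q + [0] * (m - 1 - i) for i in range(m)]  # m shifted copies of q
    return [[Fraction(x) for x in row] for row in rows]

def det(mat):
    """Exact determinant via Gaussian elimination over the rationals."""
    a = [row[:] for row in mat]
    n, sign = len(a), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
    d = Fraction(sign)
    for i in range(n):
        d *= a[i][i]
    return d

def resultant(p, q):
    return det(sylvester(p, q))

# x^2 - 1 has two simple roots, so res(p, p') is nonzero,
# while x^2 has a double root at 0, so the resultant with its derivative vanishes.
```

Here res(x² − 1, 2x) = −4 ≠ 0, matching the criterion, whereas res(x², 2x) = 0, reflecting the double root at 0.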
For a polynomial p and its formal derivative p ′ , res(p, p ′ ) is the discriminant of p (up to some nonzero scalar factor) and vanishes if and only if p has a zero of multiplicity at least two. (iii) For V, W ∈ Irr(G, C) we define the following C-valued polynomials on Sym 2 (g): denote the formal derivatives of p V ∈ C[X] with respect to the variable X. (iii) If V is of quaternionic type then all eigenvalues of ∆ g have at least multiplicity two by Remark 2.5(ii). In this case, c V ≠ 0 is equivalent to the existence of a left invariant metric g on G such that all eigenvalues of ∆ V g are of multiplicity exactly two. Proposition 3.7. Existence of a left invariant metric g on G such that ∆ g has irreducible real eigenspaces is equivalent to the following conditions being jointly satisfied: (a) a V,W ≠ 0 for any pair V, W ∈ Irr(G, C) with V ≇ W and V * ≇ W , (b) b V ≠ 0 for each V ∈ Irr(G, C) of real or complex type, (c) c V ≠ 0 for each V ∈ Irr(G, C) of quaternionic type. In this case, the orthonormal bases for left invariant metrics g with the property that ∆ g has irreducible real eigenspaces constitute a residual set in g ⊕n = g ⊕ dim g .
Proof. That the conditions are necessary is obvious from Corollary 3.3 and Remark 3.6. Conversely, assume that (a), (b), (c) are satisfied. Write Then N is the union of the zero sets of countably many nonzero polynomials. Thus, g ⊕n \ N is a residual set (i.e., an intersection of countably many sets with dense interiors). Now let Then B is still residual in g ⊕n , and for any b ∈ B the Laplace operator ∆ g associated with the left invariant metric g on G with orthonormal basis b has irreducible real eigenspaces by Corollary 3.3 and Remark 3.6. Example 3.8. Let G := T n = R n /Z n . It is well-known that for generic left invariant metrics g on T n , the Laplace operator ∆ g has irreducible real eigenspaces. In fact, let ⟨· , ·⟩ be a euclidean inner product on R n and g be the corresponding left invariant metric induced on T n . Let Λ := (Z n ) * ⊂ (R n ) * . For λ ∈ Λ, we denote the induced function on T n again by λ. The character χ λ : T n ∋ x → exp(2πiλ(x)) ∈ C is a complex eigenfunction of ∆ g with eigenvalue µ λ := 4π 2 ‖λ‖ 2 , where ‖·‖ denotes the norm induced on (R n ) * by ⟨· , ·⟩. For generic ⟨· , ·⟩, one has µ λ = µ λ ′ if and only if λ ′ = ±λ. In this case, the corresponding real eigenspace E µ λ is two-dimensional if λ ≠ 0 (otherwise, one-dimensional) and is spanned by Re(χ λ ) = cos 2πλ( · ) and Im(χ λ ) = sin 2πλ( · ). Obviously, E µ λ is then irreducible under the action of T n .
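As a quick numeric illustration of Example 3.8 (our own sketch, not from the paper), one can check on a finite window of the dual lattice of T² that, for an arbitrarily chosen "generic" inner product, eigenvalue collisions µ_λ = µ_λ′ occur only for λ′ = −λ. The constants √2/10 and √3 below are illustrative choices of a generic positive definite form.

```python
import math
from collections import defaultdict

# A "generic" positive definite form: mu(a, b) = a^2 + 2c*a*b + d*b^2,
# with c, d chosen so the only collisions are lambda' = -lambda.
c, d = math.sqrt(2) / 10, math.sqrt(3)

def mu(lam):
    a, b = lam
    return a * a + 2 * c * a * b + d * b * b  # squared dual norm, up to the factor 4*pi^2

groups = defaultdict(list)
N = 5
for a in range(-N, N + 1):
    for b in range(-N, N + 1):
        groups[round(mu((a, b)), 9)].append((a, b))

# Each eigenvalue on this window is shared only by lambda and -lambda:
collisions_ok = all(
    set(g) == {g[0], (-g[0][0], -g[0][1])} for g in groups.values()
)
```

Note that a diagonal form (c = 0) would not be generic in this sense, since (a, b) and (a, −b) would then share the same eigenvalue; the off-diagonal term is what forces λ′ = ±λ.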
In the following, let G, g be as in the previous section, let G ′ be another compact Lie group, and let g ′ be its Lie algebra.
with the sign of its square depending on the signs of J 2 = ±Id and (J ′ ) 2 = ±Id. So one has: is of real and V ′ is of quaternionic type, or vice versa.
Proof. (i) If a V,W ≠ 0, choose s ∈ Sym 2 (g) with a V,W (s) ≠ 0; that is, D V (s) and D W (s) have no common eigenvalues. By Remark 4.3 it follows that D V ⊗V ′ (ι(s)) and D W ⊗W ′ (ι(s)) have no common eigenvalues either. Thus, a V ⊗V ′ ,W ⊗W ′ (ι(s)) ≠ 0. The case a V ′ ,W ′ ≠ 0 is analogous.
(iii) Here V ⊗ V ′ is of quaternionic type by Remark 4.2. Similarly as above, we choose s ∈ Sym 2 (g) and s ′ ∈ Sym 2 (g ′ ) such that all eigenvalues of D V (s) are simple and all eigenvalues of D V ′ (s ′ ) are of multiplicity exactly two. Then again, choosing ε > 0 small enough, all eigenvalues of D V ⊗V ′ (ι(s) + ει ′ (s ′ )) = D V (s) ⊗ Id + Id ⊗ εD V ′ (s ′ ) will have multiplicity exactly two; hence c V ⊗V ′ (ι(s) + ει ′ (s ′ )) ≠ 0. Proof. In the following, let V, W ∈ Irr(G, C), V ′ , W ′ ∈ Irr(G ′ , C). Condition (c) for V ⊗ V ′ of quaternionic type follows from Remark 4.2 and Lemma 4.4(iii). The condition of (i) is necessary for condition (a) of Proposition 3.7 because for V, For the converse direction, assume that the condition of (i) is satisfied; we have to show that this already implies condition (a) of Proposition 3.7 for G × G ′ . Note that if V ⊗ V ′ ≇ W ⊗ W ′ and (V ⊗ V ′ ) * ≇ W ⊗ W ′ then one of the following three conditions holds: Since G and G ′ satisfy the conditions of Proposition 3.7 by assumption, case 1. implies that a V,W ≠ 0 or a V ′ ,W ′ ≠ 0. By Lemma 4.4(i) we then have a V ⊗V ′ ,W ⊗W ′ ≠ 0. In case 2., V and V ′ are both nonisomorphic to their duals and hence are of complex type. Moreover, W ⊗ W ′ ∼ = V ⊗ (V ′ ) * , so a V ⊗V ′ ,W ⊗W ′ = a V ⊗V ′ ,V ⊗(V ′ ) * ≠ 0 by assumption. Case 3. is analogous. Thus, condition (a) of Proposition 3.7 is satisfied for G × G ′ .
The condition of (ii) is necessary for condition (b) of Proposition 3.7 by Remark 4.2(i), (ii). For the converse direction, assume that the condition of (ii) is satisfied; that is, b V ⊗V ′ ≠ 0 whenever both V and V ′ are of quaternionic type, or one is of quaternionic and one of complex type. By Lemma 4.4(ii) we know b V ⊗V ′ ≠ 0 if neither V nor V ′ is of quaternionic type. By Remark 4.2, these were all possible cases for V ⊗ V ′ of real or complex type. So condition (b) of Proposition 3.7 holds for G × G ′ . Remark 4.6. Note that the conditions in (i), (ii) of Proposition 4.5 are far from trivial, in spite of the assumption that G and G ′ individually satisfy the conditions of Proposition 3.7. For example, if V and V ′ are both of quaternionic type, then for generic s, s ′ , all eigenvalues of D V (s) and D V ′ (s ′ ) will be of multiplicity exactly two, which results in D V ⊗V ′ (ι(s) + ι ′ (s ′ )) having all of its eigenvalues of multiplicity exactly four. But V ⊗ V ′ is of real type, so condition (b) of Proposition 3.7 requires generic D V ⊗V ′ (s) to have all eigenvalues simple. Thus, in order to establish this condition, it will not suffice to work with elements of the form ι(s) + ι ′ (s ′ ) ∈ Sym 2 (g ⊕ g ′ ). We will succeed in solving this problem in the case G = G ′ = SU(2); see Lemma 4.8(ii) below.
We now state our main result: The following Lemma will be the key to the proof of Theorem 4.7. We continue to use the notation from Example 3.8 and from the proof of Theorem 4.1 concerning the irreducible representations of T n and SU(2), respectively. Recall that all nontrivial irreducible representations V λ of T n are 1-dimensional and of complex type, and that the (m + 1)-dimensional representation V m of SU(2) is of quaternionic type if m is odd, and of real type otherwise.
Each of the operators
has the eigenvalues ±(ik ± εik ′ ), k ∈ {1, 3, . . . , m}, k ′ ∈ {1, 3, . . . , m ′ }. Due to the choice of ε, all of these eigenvalues are simple (and nonzero). So each of the operators has the eigenvalues (k ± εk ′ ) 2 , all positive and of multiplicity exactly two. Although multiplicity two is already better than multiplicity four (recall the considerations in Remark 4.6), showing that the multiplicities become simple for generic s requires a little more work. For α ∈ R, let We are going to show, specifically, that for all α in some dense open set O ⊂ R, D α has only simple eigenvalues. Let x := exp( π 2 B) = 0 −1 1 0 . Note that this is the same matrix as B, but this time regarded as an element of SU(2), not its Lie algebra. Let Recalling the definition of the basis elements v m,ℓ of V = V m and the definition of the action ρ V , note the following facts: 1.) T is an involution; i.e., T 2 = Id. In fact, note that x 2 = −1 0 0 −1 ∈ SU(2) acts on both V and V ′ as −Id. Thus, This follows from the fact that the matrix x has real entries, and from the definition of ρ V . 3.) T anticommutes with ϕ. In fact, Ad , and similarly for V ′ . 4.) T commutes with ψ. This is obvious from the definitions, since x = exp( π 2 B). Let W + , W − ⊂ V ⊗V ′ denote the 1-, resp. (−1)-eigenspace of the involution T . Both are invariant under ϕ 2 and ψ 2 by 3.) and 4.), hence under each of the maps D α .
Since ϕ anticommutes with T , it interchanges W + and W − . Moreover, ϕ is invertible and preserves eigenspaces of ϕ 2 . Since all eigenvalues of ϕ 2 were of multiplicity two, this implies that D 0| W + = −ϕ 2 | W + has only simple eigenvalues, and so does D 0| W −. It follows that there is a dense open set O 0 ⊂ R such that D α| W + and D α| W − both have only simple eigenvalues.
On the other hand, ψ commutes with T , so ψ preserves both W + and W − . By 2.) above, W + and W − are spanned by their intersections with the real vector space R. Also note that ψ leaves R invariant, being the initial derivative of the family of operators ρ V ⊗V ′ (t(B, εB)) which clearly preserve R. Since the eigenvalues of ψ are purely imaginary and nonzero, it follows that the eigenvalues of ψ on W + ∩ R come in conjugate pairs; therefore, all eigenvalues of ψ 2 | W + have multiplicity at least two. Analogously, the same holds for ψ 2 | W − . However, we already saw above that all eigenvalues of ψ 2 are of multiplicity exactly two. Therefore, D 1| W + = −ψ 2 | W + and D 1| W − = −ψ 2 | W − can have no eigenvalues in common. It follows that there is a dense open set O 1 ⊂ R such that D α| W + and D α| W − have no eigenvalues in common.
Consequently, for all α ∈ O := O 0 ∩ O 1 the operator D α has both of the above properties, and can therefore have only simple eigenvalues. So b V ⊗V ′ (αs H + (1 − α)s B ) ≠ 0 for these α, which shows b V ⊗V ′ ≠ 0, as desired.
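As a sanity check on the multiplicity count used in this proof (our own sketch, not the paper's): for m, m′ odd, ρ Vm (B) has eigenvalues ik with k ∈ {±1, ±3, . . . , ±m}, so the squared tensor-sum operator has eigenvalues (k ± εk′)², and for small rational ε each of these occurs with multiplicity exactly two, since (a, b) and (−a, −b) yield the same square and no other coincidences arise.

```python
from fractions import Fraction
from collections import Counter

def squared_psi_eigenvalues(m, mp, eps):
    """Multiset of eigenvalues of -psi^2 on V_m (x) V_mp (m, mp odd):
    psi has eigenvalues i*(a + eps*b) with a in {±1,±3,...,±m}, b in {±1,...,±mp}."""
    counts = Counter()
    for k in range(1, m + 1, 2):
        for kp in range(1, mp + 1, 2):
            for s in (1, -1):
                for t in (1, -1):
                    counts[(s * k + t * eps * kp) ** 2] += 1
    return counts

# For V_3 (x) V_3 and eps = 1/100: 8 distinct positive eigenvalues,
# each of multiplicity exactly two, accounting for all 16 = 4*4 dimensions.
mults = squared_psi_eigenvalues(3, 3, Fraction(1, 100))
all_double = all(c == 2 for c in mults.values())
```

Exact rational arithmetic (Fraction) is used so that equal eigenvalues compare equal without floating-point tolerance.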
Proof of Theorem 4.7. We can treat both types of products simultaneously by admitting n = 0, in which case the torus is the trivial group {e} (possessing only the trivial irreducible representation V 0 ).
Since SU(2) has no irreducible representations of complex type, SU(2) × . . . × SU(2) has no such representations either, by Remark 4.2. Using Theorem 4.1 and Example 3.8, and applying Proposition 4.5(i) repeatedly, we conclude that G := SU(2) × . . . × SU(2) × T n satisfies condition (a) of Proposition 3.7. It remains to show conditions (b) and (c). Let k be the number of factors equal to SU(2) in G. Using the same notation as before for the irreducible representations of SU(2) and T n , let V := V m 1 ⊗ . . . ⊗ V m k ⊗ V λ be an arbitrary irreducible representation of G. Let ℓ denote the number of factors V m j of quaternionic type. Whether or not V satisfies b V ≠ 0, resp. c V ≠ 0, does obviously not depend on the ordering of the first k factors. We order them such that the product starts with ℓ ′ := 2⌊ℓ/2⌋ factors of quaternionic type, continues with factors of real type, and V m k is of either real or quaternionic type, depending on whether ℓ is even or odd. By Lemma 4.8(ii), b Vm 1 ⊗Vm 2 ≠ 0, . . . , b Vm ℓ ′ −1 ⊗Vm ℓ ′ ≠ 0. We also have b Vm j ≠ 0 for the V m j of real type (see Theorem 4.1), and b V λ ≠ 0 (see Example 3.8). In the case that ℓ is even, one immediately concludes b V ≠ 0 by using Lemma 4.4(ii) repeatedly. Now let ℓ be odd. We still have b Vm 1 ⊗...⊗Vm k−1 ≠ 0, and V m k is of quaternionic type. If λ ≠ 0 then b Vm k ⊗V λ ≠ 0 by Lemma 4.8(i). By Lemma 4.4(ii) we again obtain b V ≠ 0. If λ = 0, then V λ is the trivial representation (of real type), hence V is of quaternionic type. Using c Vm k ≠ 0 (see Theorem 4.1) and applying Lemma 4.4(iii) twice, we obtain c V ≠ 0. Proof. Note that Irr(Ḡ, C) can be considered as a subset of Irr(G, C), consisting of precisely those irreducible representations of G which restrict to the trivial representation on Γ. It is easy to see that this inclusion respects the different types of irreducible representations (real, complex, quaternionic) from 2.1. Moreover, the Lie algebras of G and Ḡ coincide.
Therefore, the conditions of Proposition 3.7 for Ḡ amount to just a certain subset of the conditions for G, which are satisfied by assumption. Proof. This follows immediately from Theorem 4.7 and Lemma 4.9; also note that SO(3) ∼ = SU(2)/{±Id}, SO(4) ∼ = (SU(2) × SU(2))/{±(Id, Id)}, and U(2) ∼ = (SU(2) × S 1 )/{±(Id, 1)}, where S 1 = T 1 is considered as S 1 ⊂ C.
What Patients Value About Reading Visit Notes: A Qualitative Inquiry of Patient Experiences With Their Health Information
Background: Patients are increasingly asking for their health data. Yet, little is known about what motivates patients to engage with the electronic health record (EHR). Furthermore, quality-focused mechanisms for patients to comment about their records are lacking. Objective: We aimed to learn more about patient experiences with reading and providing feedback on their visit notes. Methods: We developed a patient feedback tool linked to OpenNotes as part of a pilot quality improvement initiative focused on patient engagement. Patients who had appointments with members of 2 primary care teams piloting the program between August 2014-2015 were eligible to participate. We asked patients what they liked about reading notes and about using a feedback tool, and analyzed all patient reports submitted during the pilot period. Two researchers coded the qualitative responses (κ=.74). Results: Patients and care partners submitted 260 reports. Among these, 98.5% (256/260) of reports indicated that the reporting tool was valuable, and 68.8% (179/260) highlighted what patients liked about reading notes and the OpenNotes patient reporting tool process. We identified 4 themes describing what patients value about note content: confirm and remember next steps, quicker access and results, positive emotions, and sharing information with care partners; and 4 themes about both patients' use of notes and the feedback tool: accuracy and correcting mistakes, partnership and engagement, bidirectional communication and enhanced education, and importance of feedback. Conclusions: Patients and care partners who read notes and submitted feedback reported greater engagement and the desire to help clinicians improve note accuracy. Aspects of what patients like about using both notes as well as a feedback tool highlight personal, relational, and safety benefits.
Future efforts to engage patients through the EHR may be guided by what patients value, offering opportunities to strengthen care partnerships between patients and clinicians. (J Med Internet Res 2017;19(7):e237) doi: 10.2196/jmir.7212
Introduction
As the trend toward greater transparency accelerates in health care, clinicians with electronic health records (EHRs) and patient portals are inviting patients to view online laboratory results, medication lists, and more recently, visit notes [1][2]. Health care consumers are seeking more data [3], but little is known about their experiences reading and using this information. A better understanding of what motivates patients to interact with their health data may inform efforts that promote patient engagement through patient portals. Thoughtful EHR and patient portal design may be leveraged to strengthen patient and family-centered care and patient-clinician relationships [4][5][6][7][8].
Although clinicians often report negative experiences with the EHR, patient attitudes about the EHR may be more neutral or even positive [9][10][11]. Greater health information transparency, more rapid communication, patient-friendly educational resources, and easier access to the medical record can send a powerful message of inclusivity to patients and families. What was once the purview of clinicians alone is increasingly shared with patients and families and can lead to better informed shared decision making [12]. Today, over 15 million patients in 40 states have easy access to their visit notes (OpenNotes) through their patient portal [13]. As OpenNotes spreads, sharing health information shows promise not only for patient engagement and adherence [14][15][16], but also for relational benefits such as enhanced patient trust and satisfaction [17,18].
Even though millions of patients can log on to patient portals to read notes, we understand little about what they value in doing so, perhaps because information sharing has been largely one-way and passive. Opportunities to more effectively connect with various patient populations and family care partners through shared notes are vast, but relatively under-explored [19][20][21][22], and patients, families, and communities remain a largely untapped resource as health partners [8]. As patients increasingly gain access to visit notes, they may uncover errors or discrepancies in their records, and they generally lack a systematic way to report this feedback [23]. Tools to guide patients on their health data and systems to efficiently and effectively hear their feedback are needed.
To learn more about the patient experiences with their notes, we piloted an online OpenNotes patient reporting tool as part of a quality improvement initiative [23]. In a 12-month test, we asked patients to report possible inaccuracies in notes. In addition to characterizing patient-identified errors [23], we aimed to understand whether patients thought reading notes and providing feedback was valuable, and if so, why. We envisioned that what patients and care partners value about interacting with their notes could inform organizational patient engagement strategies and further drive patient and family-centered care. This paper focuses on their qualitative responses.
The OpenNotes Patient Reporting Tool
The patient reporting tool was designed together with patients and family members, as well as with Patient Relations and Health Information Management personnel, Patient Safety leadership, clinicians, and other stakeholders. This multidisciplinary team of stakeholders met every other week for nine months to plan the reporting tool and supporting patient education materials, including a patient FAQ specifically designed for the project [23]. These materials underwent several iterations after review by our team, a plain language specialist, and several additional Patient and Family Advisory Council (PFAC) members who tested the tool and education links. The final patient reporting tool was a 9-item form accessible through a "My Feedback" link located at the end of each visit note. Participants had to first read the note in order to use the reporting tool. Either patients or their care partners (CPs) could complete the form. Questions included whether patients (or CPs) understood the note and care plan, identified possible inaccuracies in the note, had positive feedback for their providers, and found the reporting tool valuable.
Respondents who found the opportunity to read and provide feedback on notes to be "very valuable" or "somewhat valuable" were asked: "What do you like about reading or providing feedback on your note?" We chose this broad exploratory question intentionally because there is little existing data on why patients engage with their health data, how they feel about reading notes, or what benefits they may perceive from a feedback tool linked to their notes. We used this expansive approach because we did not have a preference regarding whether patients responded to their attitudes about reading notes or about using the reporting tool, given that both could inform patient engagement strategies. We anticipated there would be some overlap in responses since patients had to read notes in order to use the tool, but we also hypothesized that some patients may value reading notes alone, and simply use the tool to share this information. Finally, although we considered asking two separate questions, we prioritized streamlining open-ended questions to prevent losing patient interest in completing the form. We anticipated that results from a single exploratory question could then inform more specific future queries as well as targeted interventions to further engage patients and care partners, based on what matters to them the most.
Participants
All patients with portal access and a visit note by a participating provider during August 2014-August 2015 were invited to participate in the feedback project. Patients received an email notification when a note became available including a link to frequently asked questions (FAQ) [23] and a dedicated email address for any project-related concerns. Patients were told that "The goal (of the project) is to help patients and their providers work together to make sure the information in each patient's medical record is accurate and care is the best it can be. We also hope to learn what patients like about reading their notes." Patients were also told that at the end of the QI project, all comments would be de-identified and used to promote organizational learning and quality improvements.
We launched the pilot quality improvement (QI) project with clinicians from 2 of 10 teams in our hospital-based primary care practice. OpenNotes was already implemented at our organization and providers were offered the opportunity to opt-out of participation. As part of the OpenNotes policies at our medical center, clinicians can also "hide" individual notes, such that they do not appear on the portal, although <1% do so (personal communication, Lawrence Markson, MD, Vice President, Clinical Information Systems, BIDMC). All other notes generated by the participating providers included the "My Feedback" link and an invitation for patients to use it.
Analysis
Two researchers (SKB and MG) independently reviewed and coded a subset of responses to identify common themes.
Through discussion, the two researchers merged the themes to develop a codebook, and then coded another subset of responses. Each subset comprised an independent (ie, not previously coded) 10-20% of the data. They repeated this process until no new themes were found. All disagreements were resolved through discussion. Next, the researchers used the codebook to separately code another set of responses and tested reliability between the two researchers (κ=.74). Finally, one researcher (MG) coded the remaining responses using the same codebook.
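The reliability figure quoted above (κ=.74) is Cohen's kappa, which discounts the agreement two coders would reach by chance alone. A minimal sketch of the computation, using invented labels for illustration rather than the study's actual coded responses:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning one label per item."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items where the two coders match.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical theme labels for 10 responses, two independent coders:
a = ["accuracy", "partnership", "accuracy", "feedback", "feedback",
     "accuracy", "partnership", "feedback", "accuracy", "partnership"]
b = ["accuracy", "partnership", "accuracy", "feedback", "accuracy",
     "accuracy", "partnership", "feedback", "accuracy", "feedback"]
print(round(cohens_kappa(a, b), 2))  # → 0.69
```

Values above roughly 0.6 are conventionally read as substantial agreement, so the study's κ=.74 indicates the codebook was applied consistently by both researchers.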
Ethics
The proposal for implementation and evaluation of the OpenNotes patient reporting tool was reviewed by our institutional review board and determined to be a quality improvement program. Data collected were integrated into existing QI workflows and used in real time to improve care. Patient participation was voluntary. Patients were told that they, and their provider, might be contacted by Patient Relations personnel if their report pointed to a safety concern. Otherwise, the data populated an aggregate database from which we generated de-identified comments for this analysis. We informed patients that de-identified comments would be used to promote organizational learning and quality improvements. Further details of the methods and patient communications have been published elsewhere [23].
Results
We analyzed consecutive reports submitted by patients and care partners over the 12 months of the pilot period. In total, 260 reports were submitted, of which 256 (98.5%) indicated that the tool was valuable and 179 (68.8%) included a qualitative response to what patients liked about the OpenNotes reporting tool process. Compared with patients who submitted a report but did not respond to the voluntary qualitative question, patients who provided a response were slightly older; otherwise patient characteristics were not significantly different between the two groups (data not shown). Responses highlighted a total of 8 key themes, presented below. Four themes pertained to what patients value about the content of notes, and the other four described what patients liked about using the reporting tool (for which reading notes was implicit).
Confirm and Remember Next Steps
For many participants, notes served as an extension of the visit. One patient noted:
I sometimes have white coat syndrome where I am a little nervous in the doctor's office and then cannot remember all that was said. Reading the notes after my visits confirms what I have heard.
By far the most common theme was that reading visit notes helped patients to better remember next steps. Many commented on turning to notes as a reminder of tests or other recommended follow-up.
Several participants alluded to the stressful nature of the visit:
I think it is a great way to double check I didn't miss anything if I was not feeling well or was too overwhelmed.
Patients liked reviewing what happened at the visit in the comfort (and pace) of their own homes: "Reading the note takes the burden off of me to remember the details of what we discussed and becomes a useful reference for me."
They also liked the ability to confirm or double-check the doctor's recommendations independently: "If I forget something, I can go back and read the plan without having to bother the doc[tor]."
Quicker Access and Results
Patients and CPs valued the opportunity to have access to records and results, stressing the importance of being able to view this information quickly and at any time. Participants found the notes particularly valuable because they provided context. One patient commented:
I like knowing what the results of my tests mean. The records [laboratory results] show the numbers but the notes provide the interpretation in regards to my personal health status.
Participants also liked having longitudinal access to notes, and the benefits of a consolidated reference, "all in one place." Like an "encyclopedia on a shelf," OpenNotes provided patients with a cohesive roadmap over the arc of their health journey: "It is now all on record for me to review…and not just after the consult. Allows for history." Patients noted a heightened sense of ownership of their records and their health when they could review and interact with their notes collectively and comprehensively over time.
Positive Emotions
Reading notes helped patients gain confidence in their providers, "confirm[ing] that…care is being handled well." It also generated additional positive emotions like hope and encouragement. One patient wrote, "I like reading my notes because they keep me uplifted." Another added, "I feel less helpless and perhaps more hopeful." Participants highlighted the relational benefits of "being heard." Their comments described a powerful "validation" from reading notes, and feeling listened to and cared for: "We have had a funeral and a hectic week. I felt like someone cared. May seem quite simple but it was a nice human touch. I am a nurse and I am impressed."
Sharing Information With Care Partners
The invitation to read notes and provide feedback was particularly appreciated by care partners who support vulnerable patients. In particular, they found notes essential to the coordination of care for their loved ones: "We are grateful to receive 'notes' to be able to review the visit and procedures (if any) performed. Especially helpful for older patients who may have hearing and/or some cognitive [or] memory loss."
Patients liked the option to give their note to care partners too: "I like that I'm able to share how my visit was," and "I can reference info[rmation] to inform my family [and/or] wife [of] what is going on." Another patient added, "I don't have to take tons of notes myself…to make sure I understood." OpenNotes connected care partners with information that they may not have otherwise had access to, and provided a way for them to stay updated on medication or treatment plan changes.
Accuracy and Correcting Mistakes
Patients and care partners commonly noted that what they like about reading notes and providing feedback is the new ability to confirm the accuracy of the note and catch potential errors. While some clinicians worry that patient-found mistakes may lead to casting blame or trust erosion, several participants explicitly commented on understanding human fallibility and wanting to play a role, alongside their provider, in contributing to note precision: "It is easy to make a mistake when writing a note. I like that they can be reviewed for accuracy." Another added: "We can work together to make notes accurate, understood, and…a good resource for future medical care."
Partnership and Engagement
Patients frequently noted that they liked reading notes to "[Make] sure that we are on the same page," and that the feedback tool enhanced a sense of partnership with their clinicians. Participants described notes as a window into how their provider thinks. Moving away from the traditional paternalistic view of medicine, the reporting tool encouraged shared agency for health: "It puts me in an active rather than passive position and cuts out red tape." Several responses addressed the value of inviting patients to provide input. One participant noted, "Health care should be a two-way conversation; this forum provides another opportunity for that." Another commented, "[The note] helps me feel that my [doctor] and I are partners in promoting my health." Finally, several patients and CPs commented on the level of detail, articulation, and precision in the notes. The comprehensive nature of notes helped patients feel that their provider "knows" and cares about them, strengthening a therapeutic alliance through shared values and goals.
Bidirectional Communication and Enhanced Education
Patients and CPs often described reading notes as playing a significant role in improving communication between patients and providers, while also increasing learning. As one patient stated, "It is an opportunity to become more knowledgeable about my condition and how I can manage it better." Patients and care partners emphasized the power of print, indicating that some learning styles favor written information, and the importance of an enduring reference: "I very much appreciate the opportunity to see again in writing what was discussed." Patients also reported feeling more informed and gaining a better understanding of their health condition as a result of reading notes, and that the reporting tool extended "teach-back" opportunities from providers to patients, with an opportunity for bidirectional communication: I like the educational and improvement potential of the process. I learn. My provider learns. All good.
Several reports also emphasized that reading notes and providing feedback affords patients a way to share information without bothering their providers: "It allows more frequent non-intrusive communication with doctor." Patients liked the chance for "no embarrassing face to face asking of questions if [they] want to understand or know more."
Importance of Feedback
Patients embraced the opportunity for feedback on many levels: receiving feedback about their health and how they are doing in various aspects of their care, and giving feedback to their providers. Many patients liked the tool because it offered a new way to share positive feedback: "I appreciate the opportunity to praise my healthcare providers." Others saw the tool as a safe haven for feedback: "This is a way to [confidentially] reflect a patient's reaction to a provider without 'causing trouble.' I will use it a lot."
Another noted:
This new project, [OpenNotes] Feedback, is terrific. Finally. Because it is [confidential] I will use it with a mental comfort I have not had till now-over 10 years.
Some patients read notes as a self-feedback mechanism-a way to check how well they were communicating and understood by their providers.
Patients also valued feedback as a way to contribute to the note, for example adding missing information patients found important. Several comments reflected an understanding of quality improvement and a desire to participate in making care better: "Having the opportunity to provide feedback is important to moving the program forward and helps stimulate innovation." Patients appreciated being asked for their input, irrespective of whether they identified a potential safety concern in their note: "I am happy that you asked for feedback-if only so that I can say how helpful it is and how pleased I am to have this site available to me."
As above, patient comments drew a link between the invitation for feedback and the effect of inclusivity on strengthening patient-clinician relationships: "Being able to provide feedback is very important to me as well. I feel it keeps me connected to my health care providers."
Principal Findings and Implications
With little knowledge on what motivates patients to engage with their health data, we sought to characterize what patients value about reading visit notes as part of a quality improvement initiative. Our findings highlight several insights. Patients and care partners described priorities that can be leveraged to design patient portals that better support patients and families while improving quality of care. For example, participants liked reading notes to remember and confirm next steps. They felt less overwhelmed and more proactive in their care as a result of reading notes. Patients valued the ability to go back to their health information at their own pace and leisure as an enduring, longitudinal resource; open bidirectional dialogue with clinicians and the ability to ask questions with "non-embarrassing" face to face dynamics; and quicker access to notes and results, an established ambulatory care safety priority [20]. Additionally, patients reported developing a greater understanding of their condition from reading notes and liked learning about "the doctor's thought process." Taken together, the specific features that patients valued have direct implications for strengthened shared decision making and informed consent [12,24,25].
Participants also particularly valued the ability to check note accuracy and to share notes with family care partners. A feedback mechanism that encourages commentary from patients and care partners, who may catch possible documentation errors or clinically important oversights in the notes, may also improve portals and care. Poor electronic health record interoperability is a recognized problem [26], medication errors are frequent, and missing information poses a safety threat, particularly for vulnerable patients with complex care needs. As supporting family care partners of older or vulnerable patients becomes a health care priority [21], OpenNotes and the reporting tool may empower care partners with health information and provide a space for their feedback. Though some studies question whether patients would be willing to identify errors [27], our findings resonate with recent reports showing that patients and families can recognize quality problems [28,29], and suggest that at least some patients and care partners particularly value working alongside their providers to ensure their records are accurate.
Shifting the nexus of control away from clinicians alone to one that is shared with patients and families and reflects their values has been described in the literature as patient-centered care, person-centered care, and relationship-centered care, among other terms [8,30]. Here, we refer to "patient and family-centered care" although several of the other terms also apply. In our findings, patients suggested that an invitation to read notes and use the reporting tool sends a message of inclusivity and empowerment, validating patients as capable change agents. Such comments resonate with experts' support for "democratization of health care," shifting traditional power relationships in medicine, and bringing patient and family voices more consistently to health decisions, system design and patient activation tools so that they can engage in ways that "matter most to them" [8,12]. Inviting patients and families to read notes and give feedback helps to level the playing field, providing more information needed for participation in care. While some patients want to be included in decision-making and treated as experts or safety partners regarding their own experience [31], not all patients desire this degree of engagement [32]; hence, the evolution of patient portals should work toward closing the digital divide while respecting individuals' choices. As information transparency spreads, our findings can help inform patient and family-centered strategies that further engage those patients who seek their health data (Table 1).
Portals and electronic information are never a substitute for meaningful face-to-face time with clinicians. But although doctors worry that computer use during shorter visits can make clinical interactions feel impersonal [33], patients who read notes liked "feeling heard," describing a deeper sense of caring and respect, and improved patient-clinician relationships. OpenNotes is not a solution for the shortcomings of the EHR, but it may help make the computer feel like less of an obstacle and more of a shared resource, particularly if clinicians turn it toward patients' view and actively invite them to read notes and even provide feedback after the visit. Although some health care providers worry that doing so may increase liability or erode trust, our findings suggest that this innovation may strengthen partnerships with clinicians, consistent with prior studies and data in other fields suggesting that transparent communication enhances trust [18,34].
The availability of notes may also make face-to-face time more effective. Some patients felt more attentive or present during visits because they didn't need to take copious notes, knowing they could access the documentation later. Because patients can go back to notes repeatedly at their own leisure and pace, reading notes may extend the visit, and clinicians may find opportunities to take advantage of this extra "time with patients." With patients as a consistent audience to notes, clinicians may even begin to adapt note-writing in the future to be more personalized, trust-building, or even therapeutic [35]. Finally, we were struck by patients' interest in praising their providers and their description of positive emotions stemming from reading notes. At a time when clinician burnout is in the spotlight [36,37], it is intriguing to consider the potential positive relational effects of OpenNotes on both providers and patients. Creating a space for patients to provide positive feedback for clinicians may bolster morale and even influence positive culture change if amplified across practice settings. Like clinicians, patients and care partners may be alienated, emotionally distanced and exhausted from interactions with a fragmented and depersonalized health care system [38]. Mechanical, templated notes with abundant copy-and-paste material may exacerbate the problem, and OpenNotes may make this problem more "visible." On the other hand, restoring some patient narrative to notes may help patients feel heard. Assimilation of multiple visits through integrated note access on a single portal may help unify the patient's perception of care, particularly if clinicians refer to each other's notes, as patients learn about how the team works together.
Additionally, similar to approaches to decrease burnout for clinicians, enhancing meaningful connections between patients and providers through supportive language in notes and a sense of belonging to the team may be a valuable strategy.
Although these reports reflect the perspectives of patients and care partners who are already engaged by reading notes, organizational exploration of what patients value about note transparency can have a large impact, considering that over 15 million patients have access to their notes across the country today [13]. Building a system in which people want to engage requires knowing what matters most to them. We were struck that half of the themes described by patients reflected what patients valued about reading notes alone, suggesting that simply sharing notes (even without a patient reporting tool) can help patients better remember the care plan, feel less overwhelmed, gain quicker access to results, generate positive emotions, and enable information sharing with care partners. The other themes-ensuring note accuracy, enhanced engagement and partnership, bidirectional communication and education, and the opportunity for feedback and inclusivity-are also valued by patients who read notes, and further strengthened by a patient reporting tool. These can serve as important first steps to inform patient engagement strategies through the patient portal (Table 1). Additional research and health literacy supports are needed to learn what matters most to patients and families who are not yet registered on patient portals and to make that information accessible to them in meaningful ways.
Limitations
Our findings are limited by the small size of a pilot initiative at a single institution. Respondents likely represent a self-selected population, biased toward activated patients who are registered on the patient portal, use OpenNotes, and are from one geographic area. Patients at our medical center are largely white and more likely to have a 4-year college degree or higher. This quality improvement initiative was designed specifically for one health care organization, limiting generalizability to other patient populations. Although a formal analysis of additional sites is beyond the scope of this report, as the OpenNotes reporting tool has expanded to other clinical settings and organizations, we are seeing similar themes surface, reflecting our findings.
Conclusion
In summary, as EHR transparency spreads, new ways for patients to engage with their data in ways that matter to them most and to comment on their records are needed. Many aspects of what patients and care partners like about reading notes and providing feedback have important implications for improving patient and family-centered quality of care, safety, and patient-clinician relationships, and can also inform future patient engagement strategies and patient portal design.
The Effects of the Special K Challenge on Body Composition and Biomarkers of Metabolic Health in Healthy Adults
Citation: Shaw P, Walton J, Jakeman P (2015) The Effects of the Special K Challenge on Body Composition and Biomarkers of Metabolic Health in Healthy Adults. J Nutr Health Sci 2(4): 403. doi: 10.15744/23939060.2.403 Volume 2 | Issue 4 Journal of Nutrition and Health Sciences
The Special K Challenge is a short term (14 day) partial meal replacement diet designed to reduce body mass and motivate long term reduction in body mass. Our study evaluated the effects of the Special K Challenge on reported energy intake, body mass, body composition and biomarkers of metabolic health in healthy overweight and obese men and women. We found a reduction in total reported energy intake, body mass, fat mass and waist circumference, but no changes in plasma total cholesterol or triglycerides in response to the Special K Challenge. The reduction in total reported energy intake facilitated a positive, health-related decrease in body mass, regional fat mass and waist circumference in this small sample. Our study suggests that the Special K Challenge may act as an effective motivator for long term reduction in body mass.

Keywords: Energy intake; Body composition; Special K Challenge; Biomarkers; Meal replacement

Introduction

Obesity is caused by an energy imbalance between calories consumed and calories expended [1] that poses an increased risk of hyperglycaemia, hypercholesterolemia and insulin resistance. Results from the 2013 Health Survey for England (HSE) estimate that approximately 62.1% of the adult population of the UK and Ireland are overweight or obese [2]. As a recommended strategy in the treatment of overweight and obesity and related disorders, 20-30% of adults use dieting in an effort to reduce body mass [3]. In accordance with the World Health Organization and the National Obesity Observatory, an energy deficit of 600 kcal per day is recommended to reduce body mass by approximately 0.5-1.0 kg per week [4,5]. Whilst diets focus on body mass, body compositional change is of significant clinical importance, as a reduction in total and regional fat mass is associated with a reduction in metabolic risk factors and mortality [1,6]. Meal replacement diets have been shown to be effective in achieving a reduction in body mass [7][8][9][10] compared to traditional reduced energy diets [11][12][13][14][15]. Furthermore, maintained over a 3 month period, meal replacement diets may also lead to a reduction in biomarkers of metabolic risk (i.e. plasma cholesterol, triglycerides, blood pressure, glucose and insulin) [13]. Promoted as a motivational tool to encourage long term dietary change, and conducted in accordance with the NICE recommendations for body mass reduction, short term (~14 day) proprietary meal replacement diet plans, such as the Special K Challenge, have been reported to achieve an energy deficit approximating 600 kcal per day and a commensurate 2 kg reduction in body mass in healthy overweight and obese individuals [7,14]. However, the regional fat loss distribution and the effect of the Special K Challenge on biomarkers of metabolic risk have not previously been evaluated.
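As a rough consistency check on the guideline figures above, the expected mass loss from a constant energy deficit can be computed using the common approximation of ~7,700 kcal of stored energy per kg of body fat. This conversion factor is an illustrative rule of thumb, not a figure taken from this study:

```python
# Translate a sustained daily energy deficit into expected fat-mass loss,
# assuming the widely used (simplified) ~7,700 kcal-per-kg-of-fat approximation.
KCAL_PER_KG_FAT = 7700  # rule-of-thumb value, not from the study itself

def expected_mass_loss_kg(daily_deficit_kcal: float, days: int) -> float:
    """Expected fat-mass loss (kg) for a constant daily energy deficit."""
    return daily_deficit_kcal * days / KCAL_PER_KG_FAT

# The 600 kcal/day deficit recommended by the WHO / National Obesity Observatory:
weekly = expected_mass_loss_kg(600, 7)       # ≈ 0.55 kg per week
challenge = expected_mass_loss_kg(600, 14)   # ≈ 1.09 kg over the 14-day Challenge
print(f"{weekly:.2f} kg/week, {challenge:.2f} kg over 14 days")
```

The result sits at the lower end of the 0.5-1.0 kg per week range cited above, and falls short of the ~2 kg reduction reported for the Challenge, consistent with real-world losses also including water and lean mass.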
Given the association between the compositional change in body mass and metabolic risk, the aim of this study was to investigate whether the Special K Challenge, could achieve a favourable compositional change in body mass and associated changes in biomarkers of metabolic health.
Study design
The study was approved by the Faculty of Education and Health Sciences Research Ethics Committee (EHSREC 10-50) and all participants provided informed consent. All screening, data collection and analysis took place at the University of Limerick. An illustration of the study design is provided in Figure 1. The study comprised 2 consecutive 14 day phases. During the first phase (basal phase) participants followed their usual diet. During the second phase (diet phase) participants engaged in the Special K Challenge. A factorial design with repeated measures was used, by which the participants acted as their own control. All statistical analysis was performed using PASW Statistics 18.0 (SPSS, Inc., Chicago, IL). Normality of data was confirmed using a Shapiro-Wilk test, and Analysis of Variance (with repeated measures) was applied to determine statistically significant (p < 0.05) effects. All participants were 20-60 years of age, with a mean body mass index (BMI) of 24-35 kg/m². Participants were excluded if they were pregnant, lactose intolerant, diabetic or coeliac, or taking medication affecting cholesterol, blood pressure or glucose regulation.
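A side note on the statistics described above: with only two time points per participant (basal vs diet phase), a repeated-measures ANOVA is equivalent to a paired t-test (F = t²). A stdlib-only sketch of the paired t statistic, on invented body-mass values rather than the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom; with two time points this
    matches the study's repeated-measures ANOVA (F = t**2)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # sample stdev (n - 1)
    return t, n - 1

# Hypothetical body-mass data (kg) for six participants, NOT the study's measurements:
basal = [82.1, 90.4, 77.8, 95.0, 68.3, 88.7]
diet  = [80.6, 88.9, 76.9, 93.2, 67.5, 87.1]
t, df = paired_t(basal, diet)
print(f"t = {t:.2f}, df = {df}")
```

The t statistic would then be compared against a t-distribution with n − 1 degrees of freedom at the study's p < 0.05 threshold; in practice this is what a statistics package such as PASW/SPSS reports.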
Participants
The Special K Challenge is a partial meal replacement diet whereby two main meals of the day are replaced with Special K cereal (Special K, Kellogg's Marketing and Sales Co., UK) for 14 days.Therefore this study evaluated the effects of the Special K Challenge over 14 days only.Each meal substitution consisted of 30g of cereal and 125ml of semi-skimmed milk.Between meals Special K snacks (23g cereal bars) were recommended in addition to fruit or vegetables.
Intervention
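The repeated-measures comparison described under Study design can be sketched in outline. With only two time points (basal vs. diet phase), the repeated-measures ANOVA F statistic equals the square of a paired t statistic, so a minimal stdlib-only version looks like this; the body-mass readings below are made-up for illustration, not data from the study:

```python
import math

def paired_t(before, after):
    """Paired t statistic for two repeated measurements on the same subjects.
    With two time points, repeated-measures ANOVA gives F = t**2."""
    d = [b - a for b, a in zip(before, after)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of differences
    se = math.sqrt(var_d / n)                            # standard error of the mean difference
    return mean_d / se, n - 1                            # t statistic, degrees of freedom

# hypothetical body-mass readings (kg) before and after a diet phase
t_stat, df = paired_t([80.0, 92.5, 75.0, 88.0], [78.5, 90.0, 74.5, 86.0])
# t ≈ 3.8 with 3 degrees of freedom
```

The p-value would then come from the t distribution with n − 1 degrees of freedom (e.g. via a statistics package); SPSS reports this automatically.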
Dietary analysis and compliance

Prior to starting, all participants attended a nutritional analysis interview consisting of a 24-hour recall, a 52-item food frequency questionnaire (FFQ), and detailed instructions on and familiarisation with the food diaries and the Special K Challenge. FFQs have demonstrated acceptable validity and reproducibility in the estimation of energy and nutrient intake [16]. A portion size booklet was provided to assist the participants in the estimation of their portion sizes [17]. During the basal and diet phases participants were required to keep 7- and 14-day food diaries respectively. The 7-day food diary was used to record the first 7 days of the basal phase, consisting of five weekdays and two weekend days [18]. To assist the participants, the 14-day food diary used to record the diet phase included the meal substitutions at breakfast and lunch, and participants selected 'yes' or 'no' as appropriate.

In addition, there was space to record any additions or alterations to the meal substitution, for example tea, coffee, fruit, etc. The use of a 7-day food diary during the basal phase aimed to minimise misreporting as a result of the fatigue which may occur over 28 consecutive days of dietary records. The calculation of nutrient intake was performed using the WISP Dietary Analysis Software Package (Tinuviel Software, Warrington, UK).
To facilitate dietary compliance, pre-packaged 30 g portion-sized boxes of cereal and a selection of Special K snacks were provided gratis. Participants were considered non-compliant if on any one occasion they failed to replace two of their main meals (breakfast, lunch or dinner) with the proprietary dietary substitution as per instructions. This was determined by analysis of the food diaries and a post-diet-phase interview.
Anthropometry

Participants arrived at the test centre following an overnight fast and with an empty bladder, and were required to abstain from exercise for 12 hours prior to the test. Height was measured to the nearest 0.1 cm using a stadiometer (Seca, Birmingham, UK). Waist circumference was measured at the midway point between the ribs and iliac crest.

Blood sampling and analysis

Two 4.5 ml blood samples were obtained by venipuncture of an antecubital vein of the left arm. Serum and plasma were separated by centrifugation and frozen at -78 °C until analysis. The Biochemistry Department of the University Hospital, Limerick, undertook the analysis of plasma cholesterol, triglycerides (TAG), calcium, albumin and urea (UniCel DxC 800 Synchron Clinical Systems, Beckman Coulter, UK), and glucose and insulin (Elecsys 2010, Roche Diagnostics, Germany). Leptin (pg/mL) and adiponectin (µg/mL) were analysed by immunoassay (Meso Scale Discovery, Meso Scale Diagnostics, LLC, Gaithersburg, MD, USA) at the University of Limerick.

Thirty participants were recruited to the study. Two participants withdrew during the intervention and four were excluded from analysis due to poor compliance. A total of twenty-four participants completed both the basal and diet phases and were included in the analysis. Participant characteristics are shown in Table 1.

Bio-electrical impedance analysis (BIA)

A bioelectrical impedance analyser (Tanita MC-180MA Body Composition Analyzer, Tanita UK Ltd) was used to determine total body water content (kg). A 0.4% coefficient of variation for the measurement of total body water by BIA had been established previously [19].

Dual-energy x-ray absorptiometry (DXA)

Whole body compositional analysis was measured by dual-energy x-ray absorptiometry (Lunar iDXA scanner; GE Healthcare, Chalfont St Giles, Bucks., UK) with enCORE v.14.1 software. The precision for repeated measurement was 0.6% [20]. Total body mass was reported as reconstituted body mass (kg) (LTM + BFM + bone tissue mass). The enCORE software provided the segmental analysis into arm, leg and trunk segments. The trunk segment is defined as all tissue distal to the lowest point of the skull, excluding that contained in the arm and leg segments. Body fat mass was partitioned into visceral adipose tissue (VAT, kg), abdominal fat mass (kg), trunk fat mass (kg) and android fat mass (kg). Android fat mass, produced automatically by the enCORE software, is measured approximately from the top of the pelvis to the midpoint of the lumbar spine. Abdominal fat mass was manually defined by the region between the upper edge of the first lumbar vertebra and the lower edge of the fourth lumbar vertebra (L1-L4) and was measured using the custom region of interest (ROI) analysis procedures [21,22]. Repeated measurements of this procedure have a coefficient of variation of 1.5% [22]. Visceral adipose tissue (VAT, kg) was measured using CoreScan (GE Healthcare, Madison, WI) software.

Reported energy and nutrient intake

The mean (SD) reported energy and nutrient intakes during the basal and diet phases are shown in Table 2. Overall there was a significant reduction in mean reported energy intake and a 50% reduction in total and saturated fat intake. There were also significant reductions in protein, carbohydrate, fibre, sodium and calcium intakes. When expressed as a percentage of baseline, men and women had a similar reduction in reported energy intake (34 (10) vs.
28 (9)%; p = 0.147). Mean reported energy intake was significantly reduced by 198 kcal (95% CI -268 to -127; p < 0.001) at breakfast, 287 kcal (95% CI -399 to -175; p < 0.001) at lunch and 135 kcal (95% CI -236 to -34; p = 0.011) during snacking (Figure 2). Mean fat intake was significantly reduced by 13 g (95% CI 9 to -7.4; p < 0.001) at breakfast, 19 (18) g (95% CI -26.4 to -11.0; p < 0.001) at lunch and 5 g (95% CI -10.7 to 0.1; p < 0.045) during snacking (Figure 3).

Body composition

Men and women had similar total fat mass at baseline (p = 0.167) and therefore women had a significantly higher body fat percentage than men (40.7 (4.3) vs. 28.8 (7.2)%; p < 0.001). Table 3 shows the changes in body mass and body composition during the basal and diet phases of the intervention. There were no statistically significant changes in any of the variables during the basal phase. In response to the diet phase there was a significant reduction in reconstituted body mass, BMI, total and regional fat mass, waist circumference and lean tissue mass. The changes in body mass between the basal and diet phases are shown in Figures 4 and 5. Men had a greater reduction in total fat mass when compared to women (-1.1 (0.8) vs. 0.3 (0.5) kg; p < 0.005); however, there were no sex-specific changes in lean tissue mass (p = 0.265) or total body water (p = 0.458).

Effect of the Basal and Diet Phases on Biomarkers of Metabolic Health

The response of the biomarkers of metabolic health to the diet phase is shown in Table 4. During the basal phase there were no statistically significant changes in any of the variables. We did not find significant reductions in plasma total, HDL or LDL cholesterol, TAG or HOMA during the basal or diet phases; however, during the diet phase there were significant reductions in leptin and adiponectin.
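As a worked check on the interval arithmetic in these results: assuming the reported 95% CIs are t-based and computed on the 24 completers (both assumptions of ours, not stated in the text), the standard error and the implied standard deviation behind the breakfast figure can be recovered from the interval width:

```python
import math

# Reported: mean breakfast energy reduction 198 kcal, 95% CI -268 to -127 kcal.
n = 24                           # assumed analysis n (number of completers)
t_crit = 2.069                   # two-sided 95% critical value of t with 23 df
half_width = (268 - 127) / 2     # 70.5 kcal
se = half_width / t_crit         # standard error ≈ 34.1 kcal
sd = se * math.sqrt(n)           # implied SD of individual reductions ≈ 167 kcal
```

The recovered SD (~167 kcal) is of the same order as the reported between-subject variability in total energy reduction (673 (360) kcal/day), which is a useful internal-consistency check on reported intervals.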
Discussion

The present study aimed to evaluate the effect of the Special K Challenge on reported energy intake, body composition and biomarkers of metabolic risk. We found that the Special K Challenge reduced the total reported energy intake by a mean of 673 (360) kcal/day and therefore achieved the target of 600 kcal/d set by the NICE guidelines. This reduction came predominantly from a reduction in fat intake. The low fat content of the cereal and the semi-skimmed milk (0.5 g and ~2 g per serving respectively) led to a 50% reduction (40 g) in the mean total fat intake during the Special K Challenge. To a lesser extent, the reduction in reported energy intake came from 20% and 23% reductions in carbohydrate and protein intake respectively. The role and significance of a high protein intake during dietary interventions for the maintenance of lean tissue has previously been demonstrated [23], and therefore the reduction in protein intake in the present study may have been a contributing factor to the reduction in lean mass.
During the basal phase of the intervention the mean intake of sodium exceeded the recommended daily allowance (RDA) [24] of 1600 mg by approximately 1300 mg. Meal replacements have been criticised for their high salt content [5]; however, the present study found that the Special K Challenge resulted in a 30% reduction in mean sodium intake. There was a less favourable reduction in mean calcium intake, which was reduced from 932 mg to 687 mg, marginally below the UK RNI of 700 mg/d for adults [25]. This result was not anticipated, as the Special K Challenge requires the daily addition of milk at two meals, and is therefore most likely due to the small portion size. Whilst under-reporting of energy intake is a limitation in dietary interventions, the findings of the present study support conclusions drawn in previous research using the same diet in a similar population group and are representative of the mean reported energy intake of the Irish population as reported by the Irish Universities Nutrition Alliance [7,14,26]. A novel feature of the present study was the analysis of the meal compositions of individuals. The Special K Challenge led to significant reductions in reported energy and fat intake at breakfast, lunch and during snacking, but no change during dinner. It might be assumed that a lower reported energy intake at breakfast and lunch would result in overcompensation during dinner; however, the present study found that the Special K Challenge was effective as a partial meal replacement diet, reducing energy intake at two main meals while appearing to have no influence on the third main meal. The recommended inclusion of Special K snacks led to a further significant reduction in reported energy intake from snacking (135 (239) kcal), equating to approximately 20% of the total energy reduction. Body mass was reduced by a mean of 1.6 (1.4) kg (range -6.0 to 0.0 kg) in response to the Special K Challenge. Our findings agree with those reported in previous interventions [7,14], which report mean reductions in body mass of 2.0 kg (range 0.2 to 4.6 kg) and 1.9 (0.19) kg (range -4.2 to -0.1 kg) respectively, where the meal replacements were the same. Our results were also consistent with the UK's NHS recommendation of a reduction of 600 kcal for a reduction of 0.5-1.0 kg body mass, for most participants [4].
In the present study the Special K Challenge led to a statistically significant -0.7 (0.8) kg (range -2.8 to 0.6 kg) reduction in body fat mass. This reduction represented up to 10% of total body fat mass in some participants. The inter-individual variability in body mass reduction in response to the Special K Challenge may be explained by differences in baseline body composition and energy expenditure. Those with a higher baseline fat mass will direct a greater proportion of the net energy deficit towards loss of body fat mass versus lean tissue mass than those with a lower baseline body fat mass [27]. In a review of several body mass reduction programmes in the UK, a comparable meal replacement diet (SlimFast) led to a 2.3 kg reduction in body fat mass after 8 weeks [10]. Therefore, the 2-week duration of the Special K Challenge may be too short to cause clinically significant reductions in body fat mass, as the reduction in fat mass during the first few days of energy restriction is minimal and increases as the reduction in lean mass begins to cease [28].
Irrespective of total body fat mass, excess abdominal fat presents a higher risk for the development of the metabolic syndrome [29]. The Special K Challenge was found to be effective in significantly reducing abdominal fat when measured by waist circumference, waist-to-height ratio and DXA. Approximately 20% of the reduction in fat tissue mass occurred within the abdomen.
A secondary aim of the present study was to evaluate the effects of the Special K Challenge on blood lipid biomarkers. It was hypothesised that a reduction in fat intake would result in reductions in plasma total, LDL and HDL cholesterol and TAG; however, we found no significant effect on biomarkers of metabolic risk. The efficacy of meal replacements was reviewed in a meta-analysis, which found them effective in improving blood lipid biomarkers and reducing the risk of the metabolic syndrome after 12 weeks [13]. The 14-day duration of the Special K Challenge may not, therefore, be sufficient to effect a significant change in biomarkers of metabolic risk. Furthermore, the small sample size is a limitation of this study.
Conclusion

The Special K Challenge was found to be effective in reducing body mass, total and regional fat mass and waist circumference through a reduction in total energy and fat intake, in accordance with international guidelines for body mass reduction, but did not confer a significant reduction in biomarkers of metabolic risk in these subjects.

The authors declare that there are no conflicts of interest. The study was funded by Kellogg's Co. Ltd. (Manchester). The authors acknowledge the financial support of an Educational Scholarship award to PS.
Figure 2: Reported energy intake (kcal) at breakfast, lunch, dinner and snacks during the basal and diet phases. Data are mean (SE). Significantly different from the basal phase: *p < 0.05; **p < 0.001
Figure 3: Reported fat intake (g) at breakfast, lunch, dinner and snacks during the basal and diet phases. Data are mean (SE). Significantly different from the basal phase: *p < 0.05; **p < 0.001
Figure 4: Individual and mean (SE) reconstituted body mass changes (kg) in men in response to the basal and diet phases (n = 12). Significantly different from the basal phase: *p < 0.001

Figure 5:
Table 4: Effects of the basal and diet phases on biomarkers of metabolic health. Data are mean (SD). Significantly different from the basal phase: *p < 0.001
Table 1: Demographic and anthropometric characteristics at baseline. Data are mean (SD)
Table 2: Effects of the diet phase on reported energy and nutrient intake. Data are mean (SD). *Median (IQR); 1 Saturated; 2 Monounsaturated; 3 Polyunsaturated
Table 3: Effects of the basal and diet phases on body mass and body composition. Data are mean (SD). BFM: Body Fat Mass; LTM: Lean Tissue Mass; TBW: Total Body Water; FM: Fat Mass
Comparison of whether the addition of multilevel vertebral augmentation to conventional therapy will improve the outcome of patients with multiple myeloma
Background: This was a prospective study to evaluate the effect of multilevel vertebral augmentation in addition to conventional therapy in multiple myeloma patients. Methods: We treated 27 patients who were recently diagnosed with multiple myeloma using two treatment approaches. Thirteen patients (group I) were treated with conventional therapy and 14 patients (group II) with the addition of vertebroplasty and kyphoplasty. Patients were evaluated pre-treatment and at 6 months, one, two and 3 years post-treatment using the Oswestry Disability Index (ODI), the Stanford Score (SS) and the Spinal Instability Neoplastic Score (SINS). Results: Mean values of ODI, SS and SINS were 31.9 (63.8%), 4.3 and 13.8 for group I and 33.2 (66.4%), 4.6 and 12.8 for group II before starting treatment. Group II showed better improvement than group I at all follow-up intervals, with the best results in the first 6 months. P-values at the end of the study were ODI = 0.047, SS = 0.180 and SINS = 0.002. Mortality rates were equal in both groups (four patients in each group). Conclusion: Adding vertebral augmentation to conventional therapy improves multiple myeloma patients' quality of life, but did not affect the mortality rate.
Background
Multiple myeloma is an accumulation of malignant plasma cells in the bone marrow, leading to impaired blood cell formation and multiple lytic lesions in the skeleton. The incidence of bone involvement is about 70-100%, while that of the vertebral column is about 60% [1][2][3]. Bone becomes weak and easy to fracture, which may cause bone pain and inability to use the limb. In the spine, a fractured vertebra causes pain, kyphotic or kyphoscoliotic deformity, and compression of the spinal cord or cauda equina, in addition to the general symptoms of multiple myeloma [1,3].
General treatment of the disease includes radiotherapy, chemotherapy and bisphosphonates to decrease bone resorption, in addition to analgesia, bed rest and bracing to treat pathological fractures [1]. Minimally invasive vertebroplasty and balloon kyphoplasty are used as local treatment of the vertebral lesions to decrease pain and prevent or treat deformities [3][4][5]. Vertebroplasty is the insertion of bone cement (polymethylmethacrylate) inside the vertebral body using a pedicle cannula unilaterally or bilaterally, while balloon kyphoplasty is the insertion of balloon tamps through pedicle cannulae to restore the height of the vertebra, realign the sagittal plane and create a cavity for bone cement [6][7][8]. Many studies have examined multilevel vertebroplasty and kyphoplasty to treat multiple myeloma, but with no more than 6 and 8 levels, respectively [9,10].
At our hospital, we used vertebral augmentation in the management of multiple myeloma in a different way. We perform multilevel vertebral augmentation for all vulnerable vertebrae: thoracic, lumbar and sometimes the first sacral vertebra. Our practice is not to wait until a vertebra has collapsed because of the tumor, which may lead to neurological sequelae. As such, the following prospective study evaluated the outcomes of our multiple myeloma patients who underwent multilevel vertebral augmentation in addition to conventional therapy.
Methods
This is a prospective study of the effectiveness of the addition of vertebral augmentation to conventional chemotherapy and radiotherapy in treating multiple myeloma patients. Our main aims were to prevent spinal column collapse, back deformity and neurological deficits, minimize pain and decrease general morbidities.
We treated 27 patients newly diagnosed with multiple myeloma at our institution, with more than 3 years of follow-up. All patients had back pain without neurological deficits. All patients' demographic data were extracted from the medical charts, consisting of age, gender, presenting symptoms and follow-up period. Imaging studies included plain x-ray and magnetic resonance imaging (MRI) at the time of diagnosis (Figs. 1, 2, 3 and 4). The patients had histological diagnosis with bone marrow biopsy. The involved vertebrae included lesions in the thoracic, lumbar and sacral vertebrae, and cervical in one patient (C6 and C7). Mild kyphosis was seen in half of the patients. Consent forms were signed by the patients and Institutional Review Board (IRB) approval was obtained.
All patients received conventional chemotherapy and radiotherapy according to the standard protocols of hematology oncology. Patients were then randomly categorized into two groups. Group I: 13 patients were treated by conventional treatment (i.e. chemotherapy and radiotherapy) (Table 1). Group II: 14 patients (206 vertebrae; number of vertebrae per patient ranged between 10 and 16, mean 14.7) were treated by vertebral augmentation in addition to conventional therapy (five patients with chemotherapy and radiotherapy and nine patients with chemotherapy) (Table 1). One patient needed radiotherapy after augmentation.
Vertebral bodies were augmented from the third thoracic (T3) to the first sacral vertebra (S1); all vertebrae were augmented if they were fractured or vulnerable to fracture, whatever the size of the lesion. Vertebral augmentation was done under general anesthesia in the operating room under fluoroscopic control. All levels for a single patient were done in the same session. Balloon kyphoplasty with cement injection was used to restore the height of collapsed vertebrae, using two balloons for each level. Vertebroplasty was used for non-collapsed vertebrae by inserting a working cannula and injecting bone cement. We used a transpedicular technique for vertebrae below T8 and an extrapedicular technique for T8 and above, with a unilateral working cannula for T9 and above and bilateral cannulae for T10 and below (Figs. 5 and 6). Patients were observed on the surgical floor for one postoperative day, then transferred to the hematology ward or discharged, and followed up at the outpatient clinic. Patients who had spinal canal extension or spinal cord compromise, cauda equina compression or late-stage disease, and patients who had previously undergone spinal surgery, were excluded. International scoring and questionnaire systems consisting of the Oswestry Disability Index (ODI), the Stanford Score (SS) and the Spinal Instability Neoplastic Score (SINS) were used to evaluate the clinical and radiological results. The patients were evaluated clinically and radiographically on the day of discharge and at 6, 12, 24 and 36 months.
Statistically, we used SPSS version 20 (Chicago, IL, USA) to evaluate the results. Levene's test for equality of variances was used to compare the groups at each follow-up interval; this test gives mean values, standard deviations and a p-value. A test of between-subjects effects (transformed variables: average, using the ANOVA method) was used to evaluate the end results.
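Levene's test as named above can be sketched without statistical packages. This is the standard mean-centred form of the statistic (SPSS also offers median-centred variants), demonstrated here on toy data rather than the study's measurements:

```python
def levene_W(*groups):
    """Levene's test statistic for equality of variances (mean-centred version)."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    # absolute deviations of each observation from its own group mean
    Z = [[abs(x - sum(g) / len(g)) for x in g] for g in groups]
    zbar_i = [sum(z) / len(z) for z in Z]          # group means of the deviations
    zbar = sum(sum(z) for z in Z) / N              # grand mean of the deviations
    num = sum(len(z) * (zi - zbar) ** 2 for z, zi in zip(Z, zbar_i))
    den = sum((zij - zi) ** 2 for z, zi in zip(Z, zbar_i) for zij in z)
    return (N - k) / (k - 1) * num / den

W = levene_W([0, 1, 5], [0, 2, 10])   # 2.4 for this toy data
```

The p-value is then obtained from the F distribution with (k − 1, N − k) degrees of freedom.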
Results
We treated 27 patients who were recently diagnosed with multiple myeloma. There were 13 patients in group I (conventional treatment) and 14 patients in group II (n = 206 vertebrae; vertebral augmentation group). The mean age for group I was 58.2 years, the mean follow-up was 36 months and the male to female ratio was 9:4. For group II, the mean age was 58.9 years, the mean follow-up was 36 months and the male to female ratio was 6:8. There was no significant statistical difference in age between the two groups, as shown in Table 1.
Four patients (30.8%) of group I died between 7 and 11 months after diagnosis: three patients due to advanced disease and one from acute pneumonia. Four patients (28.6%) of group II died: one died on the day of surgery from acute lung embolism and three died 18-24 months after surgery due to advanced disease and severe pneumonia (Table 2). There were few intraoperative complications in group II. A cement leak inside the spinal canal with no significant neurological compromise or deficits occurred in one patient, and an intravascular leak into a small vessel was seen in two patients (Figs. 5 and 6). Bone cement didn't affect chemotherapy or radiotherapy.
Oswestry Disability Index, Stanford score and the Spinal Instability Neoplastic Score values were nearly equal in both groups before treatment. ODI of group I was 31.9 (63.8%) with SD = 8.34 and of group II was 33.2 (66.4%) with SD = 5.98 (p = 0.418). SS of group I was 4.3 (SD = 2.6) and of group II was 4.6 (SD = 2.9) (p = 0.309). SINS of group I was 13.8 (SD = 2.9) and of group II was 12.8 (SD = 2.9) (p = 0.482).
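The paired raw-score/percentage figures reported throughout this section are consistent with the usual ODI convention of expressing the raw score against a 50-point maximum; a one-line helper makes the conversion explicit (a sketch of how we read the reported pairs, not a calculation given in the paper):

```python
def odi_percent(raw_score):
    """Oswestry Disability Index raw score (maximum 50) as a percentage disability."""
    return raw_score / 50 * 100

# matches the reported pairs, e.g. 31.9 -> 63.8% and 33.2 -> 66.4%
```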
At 1 year follow-up, group II score values showed more improvement. The ODI value for group I was 28.4 (56.8%) with SD = 8.79 and for group II was 21.4 (42.8%) with SD = 9.24 (p = 0.874). The SS value for group I was 5.28 with SD = 2.88 and for group II was 7.52 with SD = 1.48 (p = 0.012). The SINS value for group I was 12.85 with SD = 2.88 and for group II was 7.23 with SD = 3.37 (p = 0.526).
At 2 years follow-up, the ODI for group I was 28.42 (56.85%) with SD = 8.79 and for group II was 21.43 (42.86%) with SD = 9.24 (p = 0.874). The SS for group I was 5.40 with SD = 2.83 and for group II was 7.68 with SD = 1.56 (p = 0.047). The SINS value for group I was 12.75 with SD = 2.67 and for group II was 7.31 with SD = 3.43 (p = 0.278).
At 3 years follow-up, the ODI for group I was 29.17 (58.34%) with SD = 9.37 and for group II was 21.43 (42.86%) with SD = 9.93 (p = 0.840). The SS mean value for group I was 5.27 with SD = 2.94 and for group II was 7.83 with SD = 1.64 (p = 0.040). The SINS mean value for group I was 12.58 with SD = 2.75 and for group II was 7.36 with SD = 3.72 (p = 0.121).
At the end of the study (3 years), we used the test of between-subjects effects (transformed variables: average, using the ANOVA method) to compare the end results of each group. ODI and SINS showed a significant difference between the two groups (p = 0.047 and p = 0.002), with a less significant difference using the SS (p = 0.180). All group II patients were freely mobile except one, who used a cane when walking. All patients were free of back pain except three, who had a number of exacerbations of pain that may be attributed to disc disease or fracture of the vertebral end plate over the bone cement. All patients had preserved vertebral height and sagittal balance except one, who had a history of inter-scapular pain 4 years after surgery; x-rays showed mild loss of height of T4 around the bone cement, which was insignificant compared with three patients of group I who became bedridden due to vertebral fractures with involvement of the spinal canal.
Discussion
This was a prospective study comparing two groups of multiple myeloma patients who were treated at our institute. Group I was treated with conventional therapy and group II with multilevel vertebral augmentation in addition to conventional therapy. We could not find any similar studies in the literature making the same comparison. Most previous studies involving multiple myeloma examined mixed populations of malignancies and were not focused on this disease. The few studies that focused on multiple myeloma treated only the fractured vertebrae and showed results similar to ours [9,[11][12][13][14][15][16][17]. Our results showed that the addition of vertebral augmentation gave better improvement in outcome, both subjective and objective. The best rates of score improvement were seen during the first 6 months. After that, the rate of improvement decreased with time, while group II continued to show better results, as demonstrated in Figs. 7, 8 and 9.
The p-values (Levene's test) of the scores at each follow-up interval are shown in Table 3. The p-value of the ODI was 0.316 at 6 months and increased to 0.87 at subsequent follow-up intervals, which was considered insignificant. The p-values of the SS were ≤0.05 at all follow-up periods, which were considered significant. The p-value of the SINS was 0.45 and decreased after 2 years to reach 0.12, which is more significant than the ODI. At the end of the study, the p-values (using the ANOVA test) of the ODI, SS and SINS were 0.047, 0.180 and 0.002, respectively, of which the ODI and SINS values were statistically significant. This means that back pain, mobility, kyphotic deformity due to vertebral collapse and sagittal balance were improved. Most patients of group II became ambulant and totally pain free, whereas none of group I were pain free and half of them were ambulant with aid.
In the literature review addressing previous studies of multilevel vertebral augmentation, we found that most dealt with fewer than eight levels and were performed in more than one surgical session [9,18]. Two case reports of multilevel vertebral augmentation were found. The first was used to treat new adjacent-level fractures in a patient who had been treated for an osteoporotic fracture [19]. The second case was treated for multiple osteoporotic fractures that occurred at different times after vertebroplasty in a patient with chronic liver disease [20].
In our study, all the patients had the same disease, were treated by the same hematologist, and all procedures were done by the same spine surgeon and evaluated by independent physicians. In group II (14 patients, 206 vertebrae) who underwent vertebral augmentation, the procedures were done in the same session for all involved vertebrae of a single patient. This decreased the need for and risk of repeated anesthesia, although it increased the operative time and radiological exposure. We took several measures to decrease surgical time and radiological exposure: inserting a working cannula directly (eliminating the need for an introducing cannula and K-wire), inserting multiple cannulae at the same time, and using a unilateral cannula at T9 and above. In addition, there was no significant statistical difference in mortality rate between the two groups.

Fig. 9: The Spinal Instability Neoplastic Score pre-treatment and at intervals of follow-up
Conclusion
Multilevel vertebral augmentation in addition to conventional therapy showed superior results compared with conventional therapy alone. It relieves pain, preserves vertebral height and sagittal balance, and improves the mobility of the patients. There was no significant difference in mortality rates between the two groups, but there was significant improvement in morbidity. The limitations of this study were the small sample size and variable follow-up periods. Larger prospective studies are needed to further assess the outcome of such treatment modalities in multiple myeloma patients.
Green synthesis and characterization of ZnO nanoparticles for photocatalytic degradation of anthracene
Zinc oxide nanoparticles were prepared using Coriandrum sativum leaf extract and zinc acetate dihydrate, and utilized as a photocatalyst for the degradation of anthracene. The catalyst was characterized by x-ray diffraction, high-resolution transmission electron microscopy, scanning electron microscopy, dynamic light scattering, Raman spectrometry and UV-vis spectrophotometry. The catalyst was used in a bench-scale design for degradation of anthracene. The factors affecting the photocatalytic degradation efficiency, including irradiation time, catalyst loading and initial concentration of anthracene, were investigated. The results obtained showed that the photocatalytic degradation efficiency increased with both the decrease of the initial anthracene concentration and the increase of the photocatalyst dose. The optimum photocatalytic degradation was obtained at pH 7, an irradiation time of 240 min and a catalyst loading of 1000 μg L−1. Under these conditions, the photocatalytic degradation percentage of anthracene was 96%. The byproduct was the much less toxic 9,10-anthraquinone, together with a small amount of phthalic acid, as confirmed by gas chromatography-mass spectrometry and high-pressure liquid chromatography. The kinetic studies revealed that the photocatalytic degradation process obeyed the Langmuir–Hinshelwood model and followed a pseudo-first-order rate expression.
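The pseudo-first-order kinetics mentioned in the abstract imply ln(C0/C) = k·t. The sketch below fits k by least squares through the origin; the rate constant and concentration series are synthetic, chosen only to reproduce ~96% degradation at 240 min, not measured values from the paper:

```python
import math

def fit_pseudo_first_order(times, concentrations, c0):
    """Fit ln(C0/C) = k*t through the origin by least squares; returns k."""
    y = [math.log(c0 / c) for c in concentrations]
    return sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)

# synthetic data consistent with ~96% degradation at 240 min
k_true = -math.log(0.04) / 240               # ≈ 0.0134 min^-1
times = [30, 60, 120, 180, 240]
conc = [100 * math.exp(-k_true * t) for t in times]   # concentrations, % of C0
k_fit = fit_pseudo_first_order(times, conc, 100.0)
```

With real data one would also inspect the linearity of the ln(C0/C) vs. t plot, since the Langmuir–Hinshelwood model only reduces to pseudo-first-order behaviour at low substrate concentration.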
Introduction
Anthracene is commonly used in the production of artificial dyes, insecticides and coating materials. It is also known for its high toxicity to biological tissues. When anthracene enters the body, it directly harms the skin, stomach, intestines and the lymphatic system, and can probably induce some tumors [1]. Because of its low molecular weight compared with most other polyaromatic hydrocarbons (PAHs) and other organic pollutants [2], anthracene has a higher solubility in aqueous solutions and can be found at more significant levels in water, representing a high concern for the environment [3]. It is listed as a priority hazardous substance in the European Union and on the United States Environmental Protection Agency pollutants list [3,4]. Due to its structural similarity to the high molecular weight carcinogenic PAHs [5], it is usually used as an important model compound for degradation studies of these compounds [4,6].
The literature survey shows that most recent studies of anthracene degradation have been conducted using biodegradation methods, employing Aspergillus fumigatus [7], the white rot fungus Phanerochaete chrysosporium immobilized on sugarcane bagasse [8], Fusarium solani strains isolated from mangrove sediments, immobilized peroxidase from Momordica charantia [9], and other immobilized enzymes [10]. Most of these methods, however, have poor efficiency, require several days for degradation, and involve time-consuming manipulation steps and biological catalysts that are not readily available. A comparison between many of those previously published methods and the present study is summarized in table 1. Heterogeneous photocatalysis using photocatalytic materials with high efficiency and low cost is considered one of the most promising future water purification methods; once such materials have been developed, heterogeneous photocatalysis can be applied in high-throughput systems for environmental protection. To date, TiO2 and ZnO are the most widely studied photocatalysts; both semiconductors (SCs) have similar band gaps (3.2 and 3.3 eV). ZnO, however, has shown higher quantum efficiency [21], because it absorbs more quanta of light in the UV region than TiO2 [22]. Moreover, ZnO is an important photocatalyst due to its unique advantages, such as simple and cheap fabrication, high photocatalytic activity, non-toxicity, and high photosensitivity and stability. The use of SC photocatalysts for the removal of organic pollutants in wastewater has therefore attracted considerable attention in environmental protection.
With the development of nanoscience and nanotechnology, many research reports have focused on the application of nanocatalyst materials. Nanomaterials have a high specific surface area appropriate for catalysis, and can be synthesized with controllable sizes and morphologies that affect the catalytic activity. The photocatalytic properties of ZnO nanoparticles in the degradation of pollutants are directly related to their synthesis, e.g. particle size, morphology and dopant concentrations. It has been noticed that the surface characteristics of ZnO are determined by the synthesis process, and this influences the photocatalytic properties and the final degradation efficiency [23].
Although physical and chemical methods are prevalent in nanoparticle synthesis, green synthesis is an attractive improvement because it protects the environment and yields nanoparticles with small size and large surface area. The plant phytochemicals with antioxidant properties are responsible for the synthesis of metal and metal oxide nanoparticles. This benign reaction is quite rapid, readily conducted at room temperature and pressure, and easily scaled up. Alternatively, synthesis of nanoparticles has been accomplished using bacteria, fungi, and actinomycetes [24]. Moreover, the use of extracts of neem, Camellia sinensis, Coriandrum, Nelumbo nucifera, Ocimum sanctum and many other plants complies with the principles of green chemistry and is environmentally benign [25].
The mechanism of photocatalytic oxidation is as follows. The reaction is initiated when an electron is photoexcited from the filled valence band of the photocatalyst to the empty conduction band, as the absorbed photon energy, hν, equals or exceeds the band gap of the photocatalyst. The photogenerated electrons and holes have been found to degrade many types of organic and inorganic pollutants [26]. The electron-hole pair (e− − h+) generated at the surface of the ZnO NP photocatalyst leads to the formation of reactive hydroxyl radicals through the commonly proposed photo-oxidation reactions:

ZnO + hν → ZnO (e−(CB) + h+(VB))
h+(VB) + H2O → •OH + H+
h+(VB) + OH− → •OH
e−(CB) + O2 → O2•−
O2•− + H+ → HO2•
2 HO2• → H2O2 + O2
H2O2 + e−(CB) → •OH + OH−
•OH + anthracene → degradation products

It has been reported that the hydroxyl radical (•OH) is a powerful oxidant for the degradation of many organic compounds [27]. Figures 1 and 2 illustrate the possible pathway of the degradation of anthracene into byproducts by the effect of the hydroxyl radical (•OH) [28].
The present work deals with the preparation of ZnO nanoparticles using an aqueous leaf extract of Coriandrum sativum, their characterization, and their application under UV radiation for the photocatalytic degradation of anthracene. The effects of various operating conditions on the degradation, and the kinetics of the reaction based on the Langmuir-Hinshelwood model, are described [29].
Equipment
The crystalline phase of the ZnO powders was analyzed by x-ray diffraction (XRD) on an X'Pert Pro (PANalytical) instrument operated at 40 kV and 40 mA with CuKα radiation (λ = 1.54 Å) and 2θ ranging from 4 to 80°. The microstructure morphology of ZnO was obtained by scanning electron microscopy (SEM, JSM-5300). The particle sizes of the ZnO nanopowder were determined by dynamic light scattering (DLS) on a Zetasizer Nano-ZS (Malvern Instruments, UK) at 633 nm (He-Ne laser source) and by high-resolution transmission electron microscopy (HR-TEM, JEM-2100, JEOL). The elemental spectrum of the ZnO nanoparticles was obtained by energy dispersive x-ray spectroscopy (EDX, JEM-2100, JEOL). The optical properties of the prepared ZnO nanoparticles were characterized by UV-vis spectrophotometry using a JENWAY 6505 spectrometer. Raman spectra were taken using a Bruker Senterra dispersive Raman system with a 532 nm laser line.
For gas chromatography-mass spectrometry (GC-MS) analysis, a Varian CP-3800 instrument equipped with a DB-5MS column (30 m × 0.32 mm) was used for detecting the final degradation products of anthracene. The column film thickness was 0.25 μm; the thermal program started at 40 °C (held 4 min) and ramped at 4 °C min−1 to 280 °C (held 5 min), using helium as the carrier gas. An injector with a split ratio of 40 at 250 °C and a mass spectrometer detector (Varian 1200L) were used. The mass range was 50-450 (full scan), the ion source was EI (70 eV) and the sample size was 0.5 μl. At the end of the photocatalytic reaction, and after removal of the photocatalyst by filtration, the filtrate was extracted three times with 5 ml of doubly distilled chloroform (Sigma-Aldrich). The extract was collected and subsequently evaporated to less than 1% of the original volume, and 10 μl was injected into the GC-MS to identify the degradation products of anthracene.
A high pressure liquid chromatograph (HPLC, Agilent 1200 series) with an LC-C18 column (25.0 cm × 4.6 mm, 5 μm), equipped with a PDA detector and auto-sampler, was used to follow the degradation reaction. The mobile phase consisted of 40% water and 60% acetonitrile. The solvent program was isocratic, the flow rate was 1.0 ml min−1 and the injection volume was 5 μl.
Reagents
All reagents used were of high purity grade, and doubly distilled, deionized water was used throughout. Zinc acetate dihydrate (99% purity), anthracene (99%, analytical grade), and sodium hydroxide (pellets, 99%) were purchased from Sigma-Aldrich (St. Louis, MO). The Coriandrum sativum plant was obtained from local grocery shops. Fresh Coriandrum sativum leaves were collected, shredded, washed several times with water to remove dust particles, rinsed with deionized water and air dried at 50 °C for 30 min to remove the residual moisture. The plant extract was prepared by weighing 50 g of the washed, dried leaves into a 500 ml glass beaker, followed by the addition of 200 ml of deionized water. The mixture was boiled until the color of the aqueous solution changed to dark yellow. The extract was cooled to room temperature and filtered using Whatman (No. 40) filter paper.
Qualitative phytochemical analysis of Coriandrum sativum leaf extract
The extract was subjected to qualitative tests for the identification of various phytochemical constituents using standard procedures [30]. The results showed the presence of phytochemicals responsible for the synthesis of metal oxide nanoparticles, such as alkaloids, flavonoids, carbohydrates, glycosides, steroids and tannins. Fixed oils, proteins, terpenoids, and saponins were not detected.
Preparation of zinc oxide nanoparticles by green synthesis
To 50 ml of distilled water, a 0.2 g portion of zinc acetate dihydrate was added under vigorous stirring for 10 min. A 1.0 ml aliquot of aqueous Coriandrum sativum leaf extract was added to the above solution, followed by the dropwise addition of 2.0 M NaOH until the pH reached 12 and a pale white aqueous suspension was obtained. The mixture was stirred for 2 h. The pale white precipitate obtained was isolated, washed several times with distilled water and then with ethanol, and dried at 60 °C under vacuum overnight. The pale white powder of ZnO nanoparticles was carefully collected and used for further investigation.
ZnO nanoparticles were also prepared without the extract, as a blank: a 0.2 g portion of zinc acetate dihydrate was mixed with 50 ml of distilled water. The mixture was vigorously stirred for 2 h and a 2.0 M NaOH solution was added dropwise to reach pH 12. The white precipitate formed was filtered off, washed thoroughly with distilled water and then with ethanol, and dried at 60 °C in a vacuum oven overnight.
Photocatalytic degradation reaction of anthracene
The photocatalytic degradation of anthracene using the green synthesized ZnO nanoparticles (prepared using Coriandrum sativum extract) was investigated in a batch photocatalytic reactor. A mixture of the ZnO and anthracene suspension was sonicated in an ultrasonic bath for 20 min before irradiation at 368 nm by two UV lamps of 20 W each. The light source was fixed at a distance of 10 cm from the surface of the reaction vessel, and the radiation source was placed in the middle of the photoreactor. The reaction mixture was irradiated at 368 nm with continuous stirring. After each run, the photocatalyst was separated from the anthracene solution by centrifugation at 4000 rpm.
To study the effect of parameters such as pH, irradiation time, initial concentration and photocatalyst dose on the anthracene degradation, batch experiments were conducted for each parameter. Anthracene solutions in acetone with various initial concentrations (25, 50, 75 and 100 μg L−1) were prepared. Each of these solutions was mixed with various concentrations of ZnO nanoparticles (250, 500, 750, 1000 and 1250 μg L−1). The pH of the mixtures was adjusted to 5, 7, or 9 using aqueous 2 M HCl or NaOH. The irradiation time interval was from 10 to 360 min.
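As a minimal illustration of how the batch results described above can be reduced, the sketch below (Python; the helper name and the sample readings are hypothetical, not the paper's data) computes the percentage degradation efficiency, (C0 − Ct)/C0 × 100, from the initial and residual anthracene concentrations measured by HPLC:

```python
# Hedged sketch: percentage photocatalytic degradation efficiency
# from initial (c0) and residual (ct) anthracene concentrations.
# The readings below are illustrative, not measured values.

def degradation_efficiency(c0: float, ct: float) -> float:
    """Percent removal of anthracene after a given irradiation time."""
    if c0 <= 0:
        raise ValueError("initial concentration must be positive")
    return (c0 - ct) / c0 * 100.0

# Hypothetical residual concentrations (ug/L) at increasing times
c0 = 100.0
for t_min, ct in [(60, 62.0), (120, 34.0), (240, 12.0)]:
    print(f"t = {t_min:3d} min -> {degradation_efficiency(c0, ct):.1f}% degraded")
```

The same helper applies to every parameter study (pH, dose, initial concentration), since each reports removal as a percentage of the starting concentration.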
A series of batch experiments was conducted under the optimum photocatalytic degradation conditions in terms of concentration of ZnO NPs, temperature, pH value, and irradiation time, as follows: (1) an aqueous anthracene solution of 25 μg L−1 was irradiated with UV light in the absence of ZnO at pH 7, 25 °C, for 240 min; (2) an aqueous anthracene solution of 25 μg L−1 was allowed to contact 1000 μg L−1 of the green synthesized ZnO catalyst without irradiation at pH 7, 25 °C, for 240 min; (3) an aqueous anthracene solution of 25 μg L−1 was allowed to contact 1000 μg L−1 of the chemically synthesized ZnO catalyst without irradiation at pH 7, 25 °C, for 240 min; (4) an aqueous anthracene solution of 25 μg L−1 was allowed to contact 1000 μg L−1 of the chemically synthesized ZnO catalyst under irradiation at pH 7, 25 °C, for 240 min.
Characteristics of ZnO nanoparticles
The XRD pattern of the green synthesized ZnO nanoparticles obtained from zinc acetate dihydrate and aqueous Coriandrum sativum leaf extract is illustrated in figure 3(a). The peaks obtained show that the powder is highly crystalline, and all peaks are in good agreement with the hexagonal structure of the reference pattern (ZnO, 04-016-6648). The high purity and crystallinity of the prepared ZnO NPs are revealed by the appearance of clear, sharp peaks and the absence of peaks from other phases of zinc oxide and from impurities. The XRD pattern of the chemically synthesized ZnO NPs obtained from zinc acetate dihydrate and NaOH is shown in figure 3(b). The structure of this material does not agree with the hexagonal structure of the ZnO reference pattern (04-016-6648), and peaks due to the presence of Zn(OH)2 are also displayed.
DLS measurements of the size distribution profile of the green synthesized ZnO NPs (figure 4(a)) reveal a maximum intensity at an average particle size of 52 nm. On the other hand, the chemically synthesized zinc oxide shows an intensity maximum at an average size of 253 nm (figure 4(b)). These results confirm that phytochemical compounds from the extract prevent particle agglomeration.
HR-TEM analysis (figure 5(a)) shows that the green synthesized ZnO NPs are in the particle size range of 9 nm to 18 nm. The HR-TEM images demonstrate the internal structure and give a more accurate determination of particle sizes. Figure 5(b) shows that the particle size of the chemically synthesized zinc oxide nanoparticles ranges from 190 nm to 210 nm.
The SEM image of the ZnO NPs (figure 6) shows the external morphology of the nanoparticles.
The EDX measurement of the green synthesized ZnO nanoparticles is shown in figure 7; the peaks of zinc and oxygen confirm the elemental composition of the prepared nanoparticles.

3.2.1. Effect of irradiation time. The percentage of anthracene photocatalytic degradation increased to 88% as the irradiation time was extended to 240 min. Increasing the irradiation time beyond 240 min caused a decrease in the degradation percentage of anthracene, probably due to the presence of a large amount of the intermediates and byproducts produced by the photocatalytic degradation. These small organic molecules are adsorbed on the surface of the green synthesized ZnO and compete with the large anthracene molecules, decreasing the amount of •OH radicals available. In addition, a further increase of the irradiation time produces a greater amount of free radicals and leads to more crowding and recombination between the free radicals, which decreases the percentage of degradation.
3.2.2. Effect of pH. When an aqueous anthracene solution (100 μg L−1) was allowed to contact the green synthesized ZnO NP catalyst (1000 μg L−1) at pH 5, 7 and 9 under irradiation times from 10 to 360 min, the results obtained show that the highest efficiency was obtained at pH 7.
3.2.3. Effect of temperature. The effect of temperature on the rate of anthracene degradation using the green synthesized ZnO NPs was studied over the range of 25-40 °C. The degradation efficiency values indicate that increasing the temperature causes only a slight increase in the reaction rate. Unlike thermal catalytic reactions, photocatalytic degradation reactions are known to be insensitive to temperature change [33]. The small enhancement of photocatalytic degradation with increasing temperature is probably due to the increased collision frequency of the molecules. Irradiation is believed to be the primary source of electron-hole pairs at ambient temperature because the band gap Eg is too high to be overcome by thermal excitation [34]. Photocatalytic degradation reactions using high band gap SC catalysts normally have rates that are independent of temperature [35]. The energy provided by heating is relatively small; heating to 40 °C (313 K) provides only a fraction of an electron volt, which is far less than that needed to excite across the high band gap of ZnO. Moreover, at higher temperatures, contaminant molecules may desorb from the catalyst surface, which lowers the reaction rate [36]. Higher temperatures are also responsible for removal of dissolved oxygen from the reaction mixture, which is necessary for the contaminant oxidation [37].
3.2.4. Effect of ZnO catalyst dose. The effect of the ZnO catalyst dose on the efficiency of anthracene photocatalytic degradation was determined using various amounts of the green synthesized ZnO NPs (250, 500, 750, 1000 and 1250 μg L−1) and a 100 μg L−1 aqueous anthracene solution in a total volume of 100 ml. The pH of the solution was kept at 7, the temperature at 25 °C, and the UV irradiation time ranged from 10 to 240 min. A maximum photocatalytic degradation efficiency (89%) was obtained with 1000 μg L−1 ZnO photocatalyst. The results obtained (figure 11) show that the degradation efficiency increases with increasing photocatalyst dose up to 1000 μg L−1. This is due to the increase in the number of active sites on the catalyst surface, causing an increase in the number of absorbed photons, which leads to the production of a larger number of •OH radicals and increased degradation of anthracene molecules. At catalyst doses greater than 1000 μg L−1 the solution becomes turbid, thus decreasing the effectiveness of catalyst activation during the UV irradiation.

3.2.5. Effect of initial anthracene concentration. As the concentration of anthracene increases at constant light intensity, the number of photons penetrating the solution decreases, so fewer photons reach the catalyst surface. As a result, the production of holes and hydroxyl radicals that can attack anthracene is limited; the relative availability of •OH ready to attack the anthracene decreases, and thus the photodegradation percentage decreases. When the initial concentration of anthracene is high, it also inhibits the photocatalytic degradation by intercepting the photons before they reach the catalyst surface. With dilute anthracene solutions, in contrast, an increase in the incident photon flux on the catalyst causes an increase in the rate of hydroxyl radical (•OH) production and accelerates the degradation process.
Photolysis and adsorption of anthracene
The results obtained (figure 13) show that photolysis under UV irradiation in the absence of the ZnO catalyst gives low degradation efficiency (about 22% after 240 min). The presence of the green synthesized ZnO NPs without irradiation decreased the anthracene concentration by about 60% within 240 min, whereas the chemically synthesized ZnO NPs without irradiation gave a lower anthracene adsorption efficiency (about 25% in 240 min).
Irradiation of anthracene with UV light in the presence of the chemically synthesized ZnO NP catalyst under the same conditions gave a lower degradation efficiency (31% in 240 min), compared with the green synthesized ZnO NP catalyst, which caused a large decrease of the anthracene concentration (96% in 240 min) due to photocatalytic degradation.
Kinetic study of photocatalytic degradation
The kinetics of the photocatalytic degradation of anthracene using the green synthesized ZnO were investigated with the Langmuir-Hinshelwood kinetic model, which also accounts for the adsorption of the substrate on the photocatalyst surface [29]. The rate equation is represented as follows:

r = −dC/dt = kKC/(1 + KC)

where C is the concentration of anthracene at irradiation time t, k is the reaction rate constant, and K is the adsorption coefficient of the reactant. At low concentrations (KC ≪ 1), integration of this equation (with the initial condition C = C0 at t = 0) gives the first-order expression:

ln(C0/C) = kKt = k′t

where k′ is the apparent rate constant. A plot of ln(C0/C) versus time gives a straight line whose slope is the pseudo-first-order degradation rate constant (Kapp).
The experimental data obtained in the kinetic study of anthracene degradation at different concentrations were fitted with the Langmuir-Hinshelwood kinetic model. The results obtained are illustrated in figure 14.
The plots of the concentration data give straight lines, showing that the photocatalytic degradation of anthracene can be described by the pseudo-first-order kinetic model. The correlation coefficients of the fitted lines and the rate constants were obtained graphically, and their values for each concentration are presented in table 2.
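The pseudo-first-order fit described above can be sketched as follows (Python; the decay data below are synthetic, not the paper's measurements): the apparent rate constant is the least-squares slope of ln(C0/C) against irradiation time.

```python
# Hedged sketch: estimating the apparent pseudo-first-order rate
# constant k_app from the slope of ln(C0/C) versus time, per the
# low-concentration limit of the Langmuir-Hinshelwood model.
import math

def fit_kapp(times, concentrations):
    """Least-squares slope of ln(C0/C) vs t (k_app, in min^-1)."""
    c0 = concentrations[0]
    xs = list(times)
    ys = [math.log(c0 / c) for c in concentrations]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    return sxy / sxx

# Synthetic first-order decay with an assumed k = 0.012 min^-1
k_true = 0.012
times = [0, 30, 60, 120, 180, 240]
conc = [100.0 * math.exp(-k_true * t) for t in times]
print(f"fitted k_app = {fit_kapp(times, conc):.4f} min^-1")
```

On real data the same regression also yields the correlation coefficient used in table 2 to judge how well the pseudo-first-order model fits.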
Identification of the byproduct of anthracene photocatalytic degradation
The byproducts of the photocatalytic degradation of anthracene with the ZnO NP photocatalyst after 6 h of irradiation were isolated by extraction with chloroform and identified using GC-MS. The chromatogram (figure 15) displays a strong main peak of 9,10-anthraquinone, besides a minor peak of phthalic acid.
It is well documented that anthraquinone is much less toxic than anthracene [38]. It has been reported that the acute oral LD50 of anthraquinone in rats is >5000 mg kg−1 and the acute dermal LD50 in rabbits is >5000 mg kg−1; it is non-toxic to bluegill sunfish and rainbow trout, and non-toxic to freshwater daphnids. By comparison, for anthracene the LD50 (intraperitoneal, mouse) is 430 mg kg−1 and the dermal LD50 (rat) is >1320 mg kg−1; the toxicity to fish is LC50 = 0.001 mg l−1 (96 h) for Lepomis macrochirus (bluegill), and the EC50 for Daphnia magna (water flea) is 0.1 μg L−1 (48 h) [39]. The ecotoxicity of phthalic acid to fish has been cited in the ECOTOX database as an acute LC50 (48 h) >1 000 000 μg L−1 [40].
Conclusions
Zinc oxide nanoparticles were prepared using Coriandrum sativum leaf extract, characterized, and utilized as an effective photocatalyst for anthracene degradation. Instrumental methods (XRD, SEM, HR-TEM, DLS, Raman, UV-vis) confirm the formation of nanoparticles with sizes in the range of 9-18 nm. The optimum photocatalytic degradation of 100 μg L−1 anthracene was obtained with 1000 μg L−1 ZnO NPs at ambient temperature (25 °C), pH 7 and ultraviolet irradiation for 240 min. Under these conditions the percentage decomposition of anthracene is ∼96%. The kinetics of the reaction obey the Langmuir-Hinshelwood model and fit pseudo-first-order rate constants. The formation of anthraquinone as the main decomposition product was confirmed by HPLC and GC-MS. This photocatalytic degradation reaction significantly reduces the toxicity of anthracene.
It is concluded that the photocatalytic degradation of anthracene with ZnO NPs prepared using the extract of the Coriandrum sativum plant is an effective method in terms of simplicity, degradation efficiency and degradation time. Figure 15. GC-MS chromatogram of the degradation products of anthracene using green synthesized ZnO nanoparticles, showing a major peak of 9,10-anthraquinone [1] and traces of phthalic acid [2].
"year": 2015,
"sha1": "a315e5e93af42e4d99e3f8d946e6e93c994c2c2d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2043-6262/6/4/045012",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "df354a0bd5d3054bc6a4c55321b51a8bf176ed4e",
"s2fieldsofstudy": [
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
The significance of PD-1/PD-L1 imbalance in ulcerative colitis
Objectives To investigate the expression and significance of programmed cell death protein 1 (PD-1) and programmed cell death ligand-1 (PD-L1) in the mucosal tissues and peripheral blood of patients with ulcerative colitis (UC). Methods Eighty patients with UC were recruited from January 2021 to August 2022 at the Shanxi Province People's Hospital. PD-1 and PD-L1 expression was assessed by immunohistochemistry in mucosal tissues. An enzyme-linked immunosorbent assay was used to measure soluble PD-1 and PD-L1 levels in peripheral blood serum, and the membrane-bound form of PD-1 (mPD-1) and the T-helper cell subsets Th1 and Th17 in peripheral blood were determined by flow cytometry. Results PD-1 expression was observed only in the monocytes of the mucosal lamina propria of UC patients, while PD-L1 was mainly located on the cell membrane of both epithelial cells and monocytes. The expression level of PD-1/PD-L1 in the monocytes and epithelial cells of the mucosal lamina propria increased with disease activity (P < 0.05). The percentages of PD-1/T and PD-1/CD4+T cells in the peripheral blood of moderate UC patients (PD-1/T 12.83 ± 6.15% and PD-1/CD4+T 19.67 ± 9.95%) and severe UC patients (PD-1/T 14.29 ± 5.71% and PD-1/CD4+T 21.63 ± 11.44%) were higher than in mild UC patients (PD-1/T 8.17 ± 2.80% and PD-1/CD4+T 12.44 ± 4.73%; P < 0.05). There were no significant differences in PD-1/CD8+T cells between mild and severe UC patients (P > 0.05). There was a statistically significant difference in the expression level of sPD-L1 between the UC groups and healthy controls, and the expression level of sPD-L1 increased with disease severity (P < 0.05); however, there was no statistically significant difference in sPD-1 expression levels between the UC groups and healthy controls (P > 0.05). The correlation coefficients between Th1 and sPD-L1, PD-1/T, PD-1/CD4+T and PD-1/CD8+T were 0.427, 0.589, 0.486, and 0.329, respectively (P < 0.001).
The correlation coefficients between Th17 and sPD-L1, PD-1/T, PD-1/CD4+T and PD-1/CD8+T were 0.323, 0.452, 0.320, and 0.250, respectively (P < 0.05). Conclusion The expression level of PD-1/PD-L1 correlated with UC disease activity, and the two forms of PD-1 and PD-L1 may be used as potential markers for predicting UC and assessing disease progression in UC patients. PD-1/PD-L1 imbalance was a significant feature of UC immune dysfunction. Future research should focus on the two forms of the PD-1/PD-L1 signaling molecules to better understand the pathogenesis of UC and to identify potential drug therapies.
INTRODUCTION
Ulcerative colitis (UC) is a chronic relapsing inflammatory bowel disease (IBD) with rectal bleeding, diarrhoea, and abdominal pain as its main symptoms, and inflammatory cell infiltration, diffuse crypt abnormalities, and diffuse superficial ulcers as its main histopathological features (Feakins, 2014). IBD is a global disease with a high incidence and prevalence throughout the world (Molodecky et al., 2012). Genetic susceptibility, environmental factors, gut microbiota dysbiosis and immune dysregulation contribute to the pathogenesis of UC (Ananthakrishnan, 2015). Although the exact causes of UC are unclear, there has been growing interest in uncontrolled immune responses as an important factor (Tatiya-Aphiradee, Chatuphonprasert & Jarukamjorn, 2018). Current treatments for UC include salicylic acid preparations, glucocorticoids, biological agents, fecal microbiota transplantation, and surgical resection, and many UC patients have benefited from these therapies (Ungaro et al., 2017). Anti-tumor necrosis factor antibodies are a novel therapy for UC that has gained recent attention, but antibody therapy is not effective for all patients and can actually lead to an increased risk of infectious complications in some patients (Katsanos & Papadakis, 2017; Click & Regueiro, 2019). It is therefore imperative to identify novel therapeutic targets beyond immune suppression for the treatment of UC and IBD.
Programmed cell death-1 (PD-1) and programmed cell death ligand-1 (PD-L1) are members of the CD28 superfamily and the B7 superfamily, respectively. PD-1 emits negative signals when it interacts with PD-L1. Both PD-L1 and PD-1 are expressed most prominently on activated CD4+ and CD8+ T cells, and their interaction inhibits activated CD4+ and CD8+ T cell proliferation and mediates immune tolerance, or exerts a harmful effect on antitumor immunity, contributing to immune evasion (Pinchuk et al., 2008; Wang & Wu, 2020). Previous research has found that these negative co-stimulators play a critical role in innate and adaptive immune responses and in gut homeostasis (Chulkina, Beswick & Pinchuk, 2020). At the same time, recent studies on PD-1/PD-L1 have made new progress in UC, and PD-1/PD-L1 may be a potential therapeutic target for UC (Roosenboom et al., 2021; Cassol et al., 2020). PD-1 and PD-L1 also exist in soluble forms. Soluble programmed cell death protein-1 (sPD-1) is encoded by PD-1Deltaex3, which lacks the transmembrane region and has its own immune regulatory function; like a cytokine, it plays a role in aberrant T-cell proliferation (Dai et al., 2014). Soluble programmed cell death ligand-1 (sPD-L1) is mainly produced by the cleavage of membrane PD-L1. sPD-1 and sPD-L1 have important immune regulatory functions and can bind specifically to membrane-bound PD-L1 and PD-1, respectively (He et al., 2020). However, the changes of sPD-1/sPD-L1 in UC peripheral blood and their role in immune dysfunction need to be explored, as PD-1/PD-L1 likely plays an important role in the cellular immune dysfunction of UC. In our study, mucosal tissue and peripheral blood samples from UC patients were used to investigate the clinical value of the two forms of PD-1/PD-L1 in UC.
Setting and study design
In this prospective cohort study, 80 patients with ulcerative colitis (UC), 30 healthy controls (HC) and 20 patients with acute enteritis were recruited from the Department of Gastroenterology at the Shanxi Province People's Hospital, which is a general teaching hospital affiliated with Shanxi Medical University.
Immunohistochemistry
The immunohistochemistry methods were as follows: the samples were fixed in neutral buffered formalin for 6-12 h, then washed, dehydrated, embedded in paraffin and cut into 3-μm-thick sections. Immunoperoxidase staining was then performed with antibodies against PD-1 (Clone MX033; ready to use) and PD-L1 (Clone E1L3N; 1:200), in addition to HE staining. PD-1 expression was found to be restricted to inflammatory cells in the lamina propria and absent from epithelial cells, whereas PD-L1 was found on both epithelial and inflammatory cells in the lamina propria. The percentages of PD-1- and PD-L1-positive cells were recorded as continuous variables, ranging from 0 to 100, and as categorical variables divided into four categories by staining intensity: 0 (negative), <1% of cells stained; 1 (weak), 1-5% of cells stained; 2 (moderate), 5-10% of cells stained; and 3 (strong), >10% of cells stained. Healthy controls underwent colonoscopy for polyp surveillance; they presented no endoscopic abnormalities reflecting inflammation, and pathological findings revealed no active inflammatory response in mucosal tissues. Specimens were obtained from patients with acute enteritis who had a pathological diagnosis of an acute inflammatory response in intestinal mucosal tissue.
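The four-category staining score described above can be expressed as a small helper (Python; the exact handling of readings falling on the 1%, 5% and 10% boundaries is an assumption, since the text does not specify it):

```python
# Hedged sketch: mapping the percentage of PD-1/PD-L1-positive cells
# to the four staining-intensity categories used in the study.
# Boundary handling (<=) at 5% and 10% is assumed, not stated.

def staining_category(percent_positive: float) -> int:
    """0 negative (<1%), 1 weak (1-5%), 2 moderate (5-10%), 3 strong (>10%)."""
    if percent_positive < 1:
        return 0
    if percent_positive <= 5:
        return 1
    if percent_positive <= 10:
        return 2
    return 3

for p in [0.5, 3, 8, 25]:
    print(f"{p}% positive -> category {staining_category(p)}")
```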
ELISA
Blood samples were temporarily stored in a yellow test tube before testing, and then centrifuged at 3,000 r/min for 15 min. The serum was collected and stored in an endotoxin-free test tube at −80 °C. All reagents and samples were removed from the refrigerator 60 min before measurement and returned to room temperature. The sPD-1 and sPD-L1 reagents were purchased from RuiXin Biotech, and the detailed experimental procedures are outlined in the reagent instructions. Interferon-γ was analyzed using an automatic chemiluminescence analyzer (WanTai Caris-200).
Statistical analysis
Measurement data conforming to a normal distribution were expressed as mean ± standard deviation. The t-test or analysis of variance was used to compare data between groups. Non-normal measurement data were expressed as medians with interquartile ranges (IQR). The Mann-Whitney U test was used to compare continuous variables between two independent groups, and the Kruskal-Wallis H test was used to compare multiple independent samples. SPSS version 23.0 (IBM Corp., Armonk, NY, USA) was used for statistical analysis. Data were considered statistically significant when P < 0.05.
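As an illustration of the nonparametric tests named above (the study itself used SPSS 23.0; SciPy and the synthetic sPD-L1-like values below are assumptions for this sketch only):

```python
# Hedged sketch: Mann-Whitney U for two independent groups and
# Kruskal-Wallis H for three, on synthetic data shaped loosely like
# the reported sPD-L1 group means/SDs (pg/mL). Not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mild = rng.normal(256, 80, 25)       # sized like the mild UC group
moderate = rng.normal(350, 96, 30)
severe = rng.normal(442, 86, 25)

# Two independent groups -> Mann-Whitney U test
u, p_two = stats.mannwhitneyu(mild, severe, alternative="two-sided")

# Three or more independent groups -> Kruskal-Wallis H test
h, p_multi = stats.kruskal(mild, moderate, severe)

print(f"Mann-Whitney U p = {p_two:.2e}; Kruskal-Wallis p = {p_multi:.2e}")
```

Both tests compare ranks rather than raw values, matching the paper's choice for non-normally distributed measurements.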
Baseline patient characteristics
There were 80 UC patients included in this study: 25 mild UC cases (14 males/11 females), with an age range of 23 to 68 years and a mean age of 46.84 ± 12.84 years; 30 moderate UC cases (17 males/13 females), with an age range of 18 to 66 years and a mean age of 45.93 ± 13.70 years; and 25 severe UC cases (18 males/7 females), with an age range of 24 to 72 years and a mean age of 48.92 ± 13.58 years. The control group included 20 acute enteritis patients (9 males/11 females), with an age range of 28 to 66 years and a mean age of 44.88 ± 10.88 years, and 30 healthy controls (16 males/14 females), with an age range of 21 to 67 years and a mean age of 44.07 ± 12.17 years. The mean age did not differ significantly between the experimental and control groups. It was noteworthy that WBC, albumin and K+ differed significantly among the UC group, the acute enteritis group and the healthy control group (Table 1).
PD-1/PD-L1 was specifically expressed in the mucosal tissues of UC patients, but not in acute enteritis patients or healthy controls

PD-1 was expressed only in monocytes located in the mucosal lamina propria of UC patients, while PD-L1 was detected on the membranes of both epithelial cells and monocytes. PD-1/PD-L1 expression was negative in normal mucosa and in the mucosa of common acute enteritis patients. Programmed cell death-1 (PD-1)/programmed cell death-ligand 1 (PD-L1) expression in the colon mucosa of healthy controls (HC), acute enteritis patients and ulcerative colitis (UC) patients (200×) is shown in Fig. 1. Normal colon mucosa (Figs. 1A-1C) shows PD-1- and PD-L1-negative monocytes without PD-L1 staining in the epithelium. Figs. 1D-1F show the colon mucosa of acute enteritis patients with PD-1/PD-L1-positive monocytes, but no positive epithelial cells. Colon mucosa of ulcerative colitis patients had numerous PD-1-positive monocytes and strong expression of PD-L1 in the monocytes and epithelium, as shown in Figs. 1G-1I.
The immunohistochemical analysis showed that PD-1/PD-L1 expression in the mucosal tissues of UC patients was affected by the degree of UC disease inflammation

The expression of PD-1/PD-L1 was negative in healthy controls, while acute enteritis patients had very few PD-L1-positive monocytes. In mild to severe UC, PD-1/PD-L1 expression was statistically significant (P < 0.001), and the expression level of PD-1/PD-L1 in mucosal tissues increased with disease activity, though a few colon biopsies in the UC groups still showed negative PD-1/PD-L1 expression (Table 2).
The expression of PD-1 on immune cells in the peripheral blood of UC patients, as analyzed by flow cytometry, was different from the healthy control group

Peripheral blood sPD-L1, but not sPD-1, was significant in UC patients

ELISA was used to measure sPD-1/sPD-L1 in peripheral blood to further analyze the function of sPD-1/sPD-L1. Although sPD-1 expression was increased in the severe UC group, there was no statistically significant difference in sPD-1 expression between the UC group and the control group (P > 0.05). However, the expression level of sPD-L1 differed significantly between the severe UC group and the control group (232.27 ± 65.93 pg/mL), and sPD-L1 expression increased with UC disease severity (P < 0.05), from mild UC (256.38 ± 80.23 pg/mL) and moderate UC (350.30 ± 95.67 pg/mL) to severe UC (441.64 ± 85.57 pg/mL), respectively (Fig. 3).
A correlation analysis found that sPD-L1 in peripheral blood correlated with PD-L1 on monocytes and PD-L1 on epithelial cells in mucosal tissue
In order to further analyze the source of sPD-L1, we analyzed the correlation between sPD-L1 and PD-L1 on monocytes and PD-L1 on epithelial cells. The correlation coefficient between sPD-L1 and PD-L1 on monocytes was 0.606, and the correlation coefficient between sPD-L1 and PD-L1 on epithelial cells was 0.420 (P < 0.001; Table 3). The percentages of PD-1/PD-L1 expression in monocytes and epithelium in the healthy control, acute enteritis and UC groups are shown in Table 2.

A flow cytometry analysis of Th1/Th17 percentage in the peripheral blood of patients with ulcerative colitis

The expression level of Th1/Th17 was significantly different between UC patients and healthy controls, with Th1 cells increasing with the severity of the disease (P < 0.05), from mild UC (16.18 ± 6.31%) and moderate UC (26.45 ± 8.84%) to severe UC (30.86 ± 11.62%). Though Th17 expression levels were higher in UC patients than in healthy controls (P < 0.05), and Th17 expression increased with severity of illness, there was no statistically significant difference between mild UC (2.10 ± 0.99%), moderate UC (2.88 ± 1.46%), and severe UC (3.94 ± 2.57%; P > 0.05; Fig. 4).
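The correlation coefficients reported in this section reduce to computing a correlation between paired per-patient measurements (e.g., serum sPD-L1 level vs. a tissue PD-L1 expression score). The paper does not state which correlation statistic was used, so the sketch below assumes Pearson's r, implemented in plain Python on hypothetical paired values:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation denominators
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired data: sPD-L1 (pg/mL) vs. a tissue PD-L1 staining score
spd_l1 = [210.0, 260.0, 340.0, 450.0, 430.0]
tissue_score = [1, 2, 2, 3, 3]
r = pearson_r(spd_l1, tissue_score)
```

A significance test (the reported P < 0.001) would additionally require converting r to a t statistic with n − 2 degrees of freedom, which is omitted here for brevity.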
Interferon levels in the peripheral blood of UC patients were significantly higher than in healthy controls, especially in the severe UC group

The interferon content in the peripheral blood of mild UC patients (4.80 ± 0.26 pg/mL), moderate UC patients (6.59 ± 0.53 pg/mL) and severe UC patients (9.90 ± 0.94 pg/mL) was higher than in healthy controls (2.94 ± 0.12 pg/mL; P < 0.001), and interferon levels in the severe UC group were higher than in the mild UC group (P < 0.005; Fig. 5).

A correlation analysis of PD-1/PD-L1 with peripheral blood Th1/Th17 showed a good correlation between Th1/Th17 and the expression of PD-1 on T cells

The results of the ELISA and flow cytometry analyses revealed positive correlations between PD-1/PD-L1 and Th1/Th17 (all P-values < 0.005). The correlation coefficients between Th1 and sPD-L1, PD-1/T, PD-1/CD4+T and PD-1/CD8+T were 0.427, 0.589, 0.486, and 0.329, respectively (P < 0.001). The correlation coefficients between Th17 and
DISCUSSION
This cross-sectional study of 80 UC patients found that PD-1 expression in monocytes is increased in UC patients compared to healthy controls and acute enteritis patients. PD-L1 expression in monocytes and epithelial cells was also increased, especially in the severe UC group. These results indicate that PD-1/PD-L1 expression tends to be up-regulated as UC disease severity increases, which differs from healthy controls and from an acute inflammatory reaction. In addition to being expressed primarily on activated T and B cells, PD-1/PD-L1 can also attenuate the activation signal of immune cells and mediate immune tolerance to autoantigens (Bai et al., 2017). In the clinical treatment of cancer, PD-1/PD-L1 has proven to be an important immunotherapeutic target, but 2-5% of tumor patients treated with anti-PD-1/PD-L1 therapy may develop intestinal adverse reactions, and some patients experience structural changes to mucosal tissue, ulcers and other pathological changes similar to UC (Han, Liu & Li, 2020; Dougan et al., 2021). A previous study suggests that interruption of the PD-1/PD-L1 signaling pathway compromises the tolerance of the intestinal mucosa to auto-antigens in mice, which can lead to severe autoimmune enteritis (Chulkina, Beswick & Pinchuk, 2020). This research also reflects the importance of PD-1/PD-L1 in maintaining intestinal mucosal health. In our study, we found up-regulation of PD-1/PD-L1 on inflammatory cells in the mucosal lamina propria and on epithelial cells. PD-1/PD-L1 was specifically expressed in UC mucosal tissues, with expression increasing with the progression of inflammation, particularly on monocytes in UC mucosal tissue; this likely reflects the adaptation of mucosal tissue immune cells to chronic inflammation. However, not all UC tissue specimens in our study showed positive expression of PD-1/PD-L1.
The correlation analysis of PD-1/PD-L1 with peripheral blood Th1 is presented in Table 4. Some specimens (including samples from the mild UC, moderate UC and severe UC groups) showed negative expression. There are two possible reasons for this. First, acute inflammation of the UC mucosa was observed, and the number of neutrophils was much higher than that of monocytes. Second, the mucosal tissues of UC patients are in a sustained state of chronic inflammation, so the lymphocytes are no longer sensitive to stimulation by cytokines and other signaling molecules.
To further assess the degree of immune dysfunction in UC patients, flow cytometry was used to detect Th1/Th17 in peripheral blood. Cellular immune dysfunction in UC patients was mainly observed in Th1/Th17 cells, especially Th1, which increased with UC disease activity. Recent research has made clear that dysfunction of lymphocyte subsets is a critical part of how UC develops immunologically (Rovedatti et al., 2009). CD4+ T cells are classified as helper or regulatory T cells, with a range of effector or regulatory functions. Aside from IFN-γ, other cytokines released by Th1 cells, such as TNF-α, as well as IL-17A, IL-17F, and IL-22, which are released by Th17 cells, play an important role in the immune response of UC patients (Lee et al., 2021).
Ulcerative colitis is a chronic recurrent intestinal disease, and UC patients experience a state of sustained inflammation which activates the immune response of the body. Inflammatory factors secreted by immune cells further affect the expression of PD-1/PD-L1 on mucosal or peripheral blood immune cells. Some studies show that PD-1/PD-L1 mediates immune cell-macrophage interactions to control inflammation in the gut (O'Malley et al., 2018). There is evidence that B lymphocytes with high PD-L1 expression change from plasma cells into memory cells to affect the function of Th1/Th17 cells (Khan et al., 2015). Aguirre et al. (2020), however, demonstrated that normal fibroblasts (MFs) can inhibit Th1/Th17 cell activity through the PD-1/PD-L1 pathway, while in Crohn's disease patients, increased matrix metalloproteinases can cleave PD-L1, contributing to Th cell dysregulation. This also indicates that the expression of PD-L1 in UC mucosal tissues differs from that in Crohn's disease. It is worth noting that PD-L1 can be cleaved by matrix metalloproteinases, which is one of the important pathways for the production of sPD-L1. Studies have also found a positive correlation between PD-1 expression on Th cells and disease activity in active UC patients (Long et al., 2021). IFN-γ increases antigen presentation and promotes Th1 differentiation, leading to cellular immunity, as well as up-regulating PD-L1 in ovarian cancer cells, promoting tumor growth (Abiko et al., 2015). PD-1/PD-L1, which is influenced by cytokines, also regulates the function of immune cells. Another study suggests that IL-17 and TNF-α act individually rather than cooperatively to up-regulate PD-L1 expression in HCT116 cells by activating NF-κB and ERK1/2 (Wang et al., 2017). There is a soluble form of PD-1 found in the plasma of healthy individuals, and it is elevated in autoimmune diseases and chronic infections (Khan, Arooj & Wang, 2021).
Excessive soluble PD-1 also blocks the PD-1/PD-L1 pathway, contributing to immunologic injury (Zhao et al., 2018; Elhag et al., 2012). Previous research has also shown that excessive amounts of soluble PD-1 contribute to the progression of arthritis via the Th1 and Th17 pathways (Liu et al., 2015). Differing from arthritis, sPD-1 levels in the peripheral blood of UC patients do not significantly differ from healthy controls, suggesting that sPD-1 may not play a role in UC. sPD-1 and sPD-L1 have been detected in plasma, and elevated levels have been linked to advanced disease and poorer prognosis (Khan et al., 2020). However, the role of sPD-L1 in UC progression still needs to be explored. Our findings show that sPD-L1 levels in the peripheral blood of UC patients were significantly elevated and increased with UC disease severity. In a recent study, sPD-L1 was shown to inhibit T lymphocyte function, acting as a negative regulatory factor, indicating that sPD-L1 has a negative regulatory effect on immune cells in peripheral blood. sPD-L1 is also valuable in assessing disease severity in patients with UC. Previous research has investigated sPD-L1 as a biomarker of disease progression, prognosis, and response to checkpoint immunotherapy, and found that a high sPD-L1 level is associated with a worse clinical response (Zhang et al., 2019). The use of peripheral blood sPD-L1 for UC prognosis needs further investigation, as our findings indicate sPD-L1 has the potential to evaluate the prognosis of patients with UC.
In combination with PD-L1, sPD-L1 is more of a general indicator of an inflammatory state, and the different forms of PD-L1 reinforce the dynamic crosstalk between the variety of cells implicated in the system (Cheng et al., 2020). The increased PD-1/PD-L1 in mucosal tissue and sPD-L1 in the peripheral blood of UC patients may function as protective feedback mechanisms made by immune cells in the state of inflammation. The exact factors that contribute to the dysregulated PD-1/PD-L1 balance in UC are not yet known, though this dysregulation increases the patient's susceptibility to autoimmune complications of UC. PD-1/PD-L1 is an important signaling molecule for future research on the pathogenesis of UC immunology and for identifying potential drug therapy targets.
CONCLUSIONS
This study found that the expression level of PD-1/PD-L1 correlated with UC disease activity, and that both the membrane-bound and soluble forms of PD-1 and PD-L1 may serve as potential diagnostic markers for UC and markers for assessing UC disease activity. PD-1/PD-L1 imbalance is a major characteristic of UC immune dysfunction. Future research should focus on the PD-1/PD-L1 signaling molecule, its connection to the pathogenesis of UC immunology, and its potential for identifying future drug therapy targets.
ADDITIONAL INFORMATION AND DECLARATIONS

Funding
A grant from the Health Commission of Shanxi Province financed this project (NO: 2020032). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Phase 3 Randomized, Multicenter, Placebo-Controlled Study to Evaluate Safety, Immunogenicity, and Lot-to-Lot Consistency of an Adjuvanted Cell Culture-Derived, H5N1 Subunit Influenza Virus Vaccine in Healthy Adult Subjects
A cell-based process may be better suited for vaccine production during a highly pathogenic avian influenza (HPAI) pandemic. This phase 3, randomized, controlled, observer-blind, multicenter study evaluated the safety, immunogenicity, and lot-to-lot consistency of two doses of an MF59-adjuvanted, H5N1 influenza pandemic vaccine manufactured on a cell culture platform (aH5N1c) in 3196 healthy adult subjects, stratified into two age groups: 18 to <65 and ≥65 years. Immunogenicity was measured using hemagglutination inhibition (HI) titers. HI antibody responses increased after the first aH5N1c vaccine dose, and 3 weeks after the second vaccination (Day 43), age-appropriate US Center for Biologics Evaluation and Research (CBER) and former European Medicines Authority Committee for Medicinal Products for Human Use (EMA CHMP) immunogenicity criteria were met. Six months after the first vaccination, HI titers were above baseline but no longer met CBER and CHMP criteria. No relevant changes over time were seen in placebo subjects. Solicited AEs were more frequent in the active treatment group than in the placebo group, primarily due to injection site pain. No serious adverse events (SAEs) related to aH5N1c were reported. The aH5N1c influenza vaccine elicited high levels of antibodies following two vaccinations administered 21 days apart and met both CBER and former CHMP immunogenicity criteria at Day 43 among both younger and older adults, with a clinically acceptable safety profile. Consistency of the three consecutive aH5N1c vaccine lots was demonstrated (NCT02839330).
Introduction
In the last century, the rise in international trade and travel has increased the probability of worldwide pandemics, as seen most recently with the infectious disease, COVID-19. The primary prophylactic measure against pandemic influenza are vaccines, and the ability to rapidly develop and produce a specific monovalent vaccine targeted to a new circulating virus strain is vital to pandemic preparedness plans worldwide [1].
During the most recent influenza pandemic-due to the (H1N1)pdm09 virus or "swine flu"-an estimated 60.8 million swine flu cases with 274,304 hospitalizations and 12,469 deaths occurred between 2009 and 2010 in the US alone, and it is estimated that the swine flu caused over 500,000 deaths worldwide. However, this pandemic appeared to be less severe than would have been expected, with an associated mortality rate of only 0.001% to 0.007% in the first year, whereas for other influenza pandemics the worldwide mortality rate has ranged from 0.03% for the 1968 H3N2 pandemic to 1% to 3% during the 1918 H1N1 pandemic. In addition, the 2009 H1N1 pandemic primarily affected the young and middle-aged, whereas many older adults were found to have antibodies to this virus from an earlier H1N1 infection [2]. The H5N1 avian influenza virus represents another pandemic threat. In 1997, the first outbreak of highly pathogenic H5N1 avian influenza occurred in Asia (Hong Kong), which led to 18 human cases and 6 deaths before public health authorities ordered the slaughter of poultry throughout Hong Kong to stop the spread of this virus [3]. It re-emerged in 2003, leading to worldwide concerns over the possibility of an H5N1 pandemic. According to the World Health Organization (WHO), 862 human cases of H5N1 infection were identified from 2003 to 2021 and resulted in 455 deaths, representing a case fatality rate of 53% [4].
In addition to vaccine subtype, pandemic preparedness planning must consider the capacity and efficiency of the manufacturing process. Influenza vaccine manufacturing has relied on embryonated chicken eggs to produce antigens for over 50 years. During a highly pathogenic avian influenza outbreak, both egg quantity and quality may be compromised, yet rapid production of a vaccine specific against an emerging pandemic influenza strain is critical to controlling its spread.
In alignment with the US Department of Health and Human Services (DHHS) Pandemic Preparedness Plan [5], an MF59-adjuvanted cell culture-derived monovalent H5N1 pandemic influenza vaccine (aH5N1c) was developed by Seqirus, Inc. Cell culture-derived vaccines are not subject to the potential limitations of egg-based production (e.g., the need for large quantities of fertilized eggs; the potential for egg-adaption of seed virus and antigenic mismatch) and help address the medical need for safe and effective pandemic vaccines [1,6].
Previous clinical experience suggests that two doses of nonadjuvanted H5N1 influenza vaccine with 90 µg of strain-specific hemagglutinin (HA)-which represents six times the normal 15 µg/dose required for the interpandemic seasonal influenza vaccine-are necessary to induce a substantial increase in antibody responses in unprimed, immunologically naïve individuals [7]. The use of an adjuvant, however, allows a reduction in the quantity of antigen per dose ("antigen sparing") and would potentially lead to increased vaccine production capacities [8]. In addition, the observation of enhanced and broader, i.e., cross-reactive, immune responses after vaccination with MF59-adjuvanted H5N1 and seasonal (FLUAD) vaccines is of great interest for the development of pre-pandemic vaccines, as stockpiled vaccines may be used during the early days of a pandemic before the strain-matched vaccine becomes available [9,10].
To address the threat of an HPAI outbreak, e.g., H5N1, when both egg quantity and quality may be compromised, an alternative to the traditional egg supply is needed. In preparation for a future H5N1 pandemic, this study evaluated the immunogenicity, lot-to-lot consistency, and safety of three consecutively produced lots of the aH5N1c pandemic vaccine in healthy subjects ≥18 years of age.
Study Design and Randomization
This phase 3, multicenter, randomized, observer-blind, controlled study involved subjects aged ≥18 years who were stratified into two equal age groups, 18 to <65 and ≥65 years of age, and then randomized in a 1:1:1:1 ratio to receive one of three consecutively produced aH5N1c vaccine lots (Groups A-C) or placebo (saline). Subjects received two doses of vaccine or placebo intramuscularly given three weeks apart on Day 1 and Day 22. After each vaccination, subjects remained under medical supervision at the study site for at least 30 min to observe any immediate adverse events (AEs). After the second vaccination, subjects were monitored for 12 months for safety, for a total study duration of approximately 13 months per subject.
This trial was designed, implemented, and reported in accordance with the International Conference on Harmonization (ICH), Harmonized Tripartite Guidelines for Good Clinical Practice (GCP), with applicable local regulations, and with the ethical principles laid down in the Declaration of Helsinki. An independent institutional review board approved the study protocol and informed consent form. All study subjects provided written, informed consent. The study is registered at https://clinicaltrials.gov/, accessed on 18 March 2022 (NCT02839330).
Study Vaccine Administration
The vaccine used for this study was an MF59-adjuvanted, cell culture-derived, monovalent, inactivated, H5N1 subunit influenza virus (A/turkey/Turkey/1/2005 NIBRG-23 strain; Seqirus Inc., Holly Springs, NC, USA). The three lots included Lot No. 181053 (Group A, Lot 1), Lot No. 181054 (Group B, Lot 2), and Lot No. 181675 (Group C, Lot 3). Each dose was 0.5 mL and contained 7.5 µg hemagglutinin with 0.25 mL MF59. A fourth group received 0.5 mL placebo (0.9% NaCl, 2 mL vial, West-Ward Pharmaceuticals, Cherry Hill, NJ, USA), batch number 035385. Vaccines were administered on Day 1 and Day 22 as single intramuscular injections in the nondominant arm by designated site staff who did not participate in any assessment of outcomes. The subjects, investigators, and site personnel who evaluated AEs remained blinded to treatment group assignment.
Study Participants
The study enrolled 3196 healthy subjects ≥18 years of age who gave consent and were willing and able to comply with protocol requirements. Individuals were excluded if they had impaired immune systems, previous influenza vaccination within 7 days of starting the study or any other vaccination within 28 days of study start, or a history of H5N1 influenza or H5N1 influenza vaccination. Female subjects of childbearing potential were excluded if they were pregnant, breastfeeding, or not using adequate birth control.
Study Objectives and Endpoints
The coprimary study objectives were to determine lot-to-lot consistency across three consecutively produced lots of the aH5N1c vaccine in terms of geometric mean titers (GMT) and achievement of US Center for Biologics Evaluation and Research (CBER) criteria for the percentage of subjects achieving a hemagglutination inhibition (HI) antibody titer ≥ 1:40 [11]. Secondary immunogenicity objectives were to evaluate immune responses to the aH5N1c vaccine according to immunogenicity criteria defined by European Medicines Authority Committee for Medicinal Products for Human Use (EMA CHMP) recommendations (as applicable at the time of study conduct) 3 weeks after the second vaccine administration (Day 43) and by CBER and CHMP recommendations 3 weeks after the first vaccine administration (Day 22), as well as to evaluate immune responses to the aH5N1c vaccine 6 months after the first vaccine administration (Day 183) [11,12]. Immunogenicity endpoints were assessed by HI assay against the H5N1 vaccine strain according to standard methods, expressed as GMTs on Days 1, 22, 43, and 183 and geometric mean ratios (GMRs; Day 22/Day 1, Day 43/Day 1, and Day 183/Day 1) in each treatment group (pooled aH5N1c or placebo) in the total population and by age cohort (18 to <65 years of age and ≥65 years of age). The proportions of subjects with HI ≥ 1:40 on Days 1, 22, 43, and 183 and those achieving seroconversion (defined as an HI titer ≥ 1:40 for subjects negative at baseline (HI titer < 1:10) or a minimum 4-fold increase in HI titer for subjects positive at baseline (HI titer ≥ 1:10)) on Day 22 and Day 43 were also determined for each treatment group (pooled aH5N1c or placebo) in the total population and by age cohort.
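The seroconversion definition above maps directly to a simple predicate. A minimal sketch follows; the function name and the use of reciprocal HI titers (e.g., 40 for a 1:40 titer) are illustrative choices, not part of the study's analysis code:

```python
def seroconverted(baseline_titer, post_titer):
    """Seroconversion per the study definition: baseline HI < 1:10 with
    post-vaccination HI >= 1:40, or baseline HI >= 1:10 with a >= 4-fold rise.
    Titers are given as reciprocal values (1:40 -> 40)."""
    if baseline_titer < 10:  # seronegative at baseline
        return post_titer >= 40
    return post_titer >= 4 * baseline_titer  # seropositive at baseline

# A seronegative subject reaching 1:40 seroconverts; a subject at 1:20
# needs at least 1:80 post-vaccination.
```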
Safety endpoints included solicited local and systemic adverse events (AEs) collected on subject diary cards for 7 consecutive days after each injection. Solicited local AEs included injection site induration, erythema, ecchymosis, and pain. Solicited systemic AEs included nausea, generalized myalgia, generalized arthralgia, headache, fatigue, chills, loss of appetite, malaise, and fever (≥38.0 °C). Erythema, induration, and ecchymosis were graded as Grade 0 (<25 mm), Grade I (25-50 mm), Grade II (51-100 mm), or Grade III (>100 mm). Injection site pain, systemic AEs except fever, and all unsolicited AEs were graded as mild (transient with no limitation in normal daily activity), moderate (some limitation in normal daily activity), or severe (unable to perform normal daily activity) as assessed by the investigator. Body temperature ≥39.0 °C was considered severe fever.
All unsolicited AEs were collected from first vaccination through Day 43. Serious adverse events (SAEs), AEs of special interest, new onset of chronic disease, AEs leading to vaccine/study withdrawal, medically attended AEs, associated concomitant medications for any of these events, and all vaccinations, were collected throughout the study. The causal relationships of AEs to the study vaccines were assessed by the investigators as either not related, possibly related, or probably related.
Statistical Methods
The full analysis set (FAS) included all subjects who received at least one dose of study vaccination and provided at least one evaluable serum sample at both pre-and post-vaccination timepoints. The primary analysis population was the per protocol set (PPS), which included all subjects in the FAS who received the correct vaccine to which the subject was randomized at the scheduled time points and who were not excluded due to a major protocol deviation or other reasons (e.g., withdrew informed consent). The solicited safety set included all subjects who received a study vaccination and who underwent any assessment of local and systemic site reaction and/or assessment of any use of analgesics or antipyretics. The unsolicited safety set included all subjects who received a study vaccine.
Based on data from previous studies in similar populations, a single equivalence test based on 718 subjects per lot group was determined to have a power of 95% with alpha of 0.025. Taking a dropout rate of approximately 10% into account, a total study enrollment of 3192 subjects (798 subjects per lot) was planned.
Lot consistency was assessed by determining the geometric mean titer ratio (GMT ratio) of HI antibody responses to the H5N1 vaccine strain in healthy adults three weeks after the second vaccine administration (Day 43). Lot-to-lot consistency was demonstrated if the 2-sided 95% confidence intervals (CIs) of all three pairwise GMT ratio comparisons (Group A/Group B, Group A/Group C, Group C/Group B) fell within the equivalence range of 0.667 to 1.5. Adjusted estimates of GMT ratios and their associated 95% CIs at Day 43 were computed using analysis of covariance (ANCOVA) on the log-transformed titers at Day 43 with factors for vaccine lot group, age group, center, and a covariate for the effect defined by the log-transformed prevaccination antibody titer (Day 1).
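As an unadjusted illustration of the equivalence test above (the study itself used an ANCOVA with factors for lot group, age group, and center, plus baseline titer as covariate), the GMT ratio and an approximate 95% CI can be computed on log-transformed titers. All names, the synthetic data, and the normal-approximation CI below are simplifications:

```python
import math

def gmt(titers):
    """Geometric mean titer: exp of the mean of log-titers."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

def gmt_ratio_ci(titers_a, titers_b, z=1.96):
    """Unadjusted two-sided ~95% CI for the GMT ratio of two lots,
    using a normal approximation on log-transformed titers."""
    la = [math.log(t) for t in titers_a]
    lb = [math.log(t) for t in titers_b]
    ma, mb = sum(la) / len(la), sum(lb) / len(lb)
    va = sum((x - ma) ** 2 for x in la) / (len(la) - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in lb) / (len(lb) - 1)
    se = math.sqrt(va / len(la) + vb / len(lb))
    diff = ma - mb  # log of the GMT ratio
    return math.exp(diff - z * se), math.exp(diff + z * se)

def lots_equivalent(ci, lower=0.667, upper=1.5):
    """Equivalence is demonstrated if the CI lies entirely within [0.667, 1.5]."""
    return ci[0] >= lower and ci[1] <= upper
```

With two large, similarly distributed lots, the CI is tightly centered on 1.0 and the 0.667-1.5 equivalence margin is met, mirroring the pairwise comparisons reported for the three vaccine lots.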
After confirmation of lot-to-lot consistency, the results of all vaccine recipients were pooled to evaluate immune responses to the aH5N1c vaccine according to the CBER criteria for HI antibody titer ≥1:40 on Day 43 as measured by age cohort and by strain-specific HI assay. For subjects aged 18 to <65 years, CBER criteria were met if the lower bound of the adjusted 2-sided 95% CI for the percentage of subjects achieving an HI antibody titer ≥ 1:40 was ≥70%. For subjects ≥65 years, CBER criteria were fulfilled if the lower bound of the adjusted 2-sided 95% CI for the percentage of subjects achieving an HI antibody titer ≥ 1:40 was ≥60%. Adjusted proportions and 95% CI were calculated using the log-linear model with the factors for treatment and center.
Secondary immunogenicity endpoints were based on the age-appropriate CBER and CHMP criteria on Days 22, 43, and 183. The age-appropriate CBER criteria for seroconversion require the lower bound of the 2-sided 95% CI for the seroconversion rate to be ≥40% or ≥30% for subjects 18-65 and ≥65 years of age, respectively. Analysis of CHMP criteria requires point estimates for the seroconversion rate to be >40% and >30%, for the percentage of subjects achieving an HI antibody titer ≥1:40 to be >70% and >60%, and for the GMR to be >2.5 and >2.0, for subjects aged 18-60 and ≥61 years, respectively.
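These acceptance thresholds can be sketched as boolean checks. Function names are illustrative, and the inputs differ by framework as described above: CBER criteria are evaluated on CI lower bounds, while the former CHMP criteria use point estimates:

```python
def meets_cber_hi40(ci_lower_pct, age):
    """CBER criterion for % of subjects with HI >= 1:40: the 95% CI lower
    bound must be >= 70% (18 to <65 y) or >= 60% (>= 65 y)."""
    return ci_lower_pct >= (70 if age < 65 else 60)

def meets_cber_seroconversion(ci_lower_pct, age):
    """CBER seroconversion criterion: CI lower bound >= 40% (<65 y)
    or >= 30% (>= 65 y)."""
    return ci_lower_pct >= (40 if age < 65 else 30)

def meets_chmp(point_sc_pct, point_hi40_pct, gmr, age):
    """Former CHMP point-estimate criteria, with the 18-60 / >= 61 y split:
    seroconversion, % with HI >= 1:40, and GMR must all exceed their limits."""
    if age <= 60:
        return point_sc_pct > 40 and point_hi40_pct > 70 and gmr > 2.5
    return point_sc_pct > 30 and point_hi40_pct > 60 and gmr > 2.0
```

For example, the Day 43 seroconversion CI lower bounds reported later (77.4% for younger and 51.0% for older adults) pass the respective CBER checks.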
Study Population
The study was conducted at 26 centers in the US between 11 July 2016 and 4 October 2017, and enrolled a total of 3196 adults, including 1597 subjects aged 18 to <65 years and 1599 subjects aged ≥65 years. Of the total enrolled population, 2394 subjects received aH5N1c and 797 received a placebo; 2234 and 747 subjects, respectively, completed the study; and 2249 and 739 were included in the PPS (Figure 1). Demography and other baseline characteristics were similar across the treatment groups ( Table 1). The mean age was 58 years, and the proportion of subjects in each age subgroup was evenly distributed across the four treatment groups. The majority of participants were women, and most subjects were white. Exposure to seasonal vaccine in the previous 12 months was higher in the older age cohorts than in the younger age cohorts, consistent with typical clinical practice in the US.
Coprimary Objectives: Lot-to-Lot Consistency and CBER Criteria
GMTs at Day 43 for each of the aH5N1c lots were 128.6 (95% CI 118.9 to 139.1), 127.4 (117.6 to 138.0) and 132.2 (122.1 to 143.1). Pairwise comparisons of the GMT ratios demonstrated lot-to-lot consistency (Figure 2a). The CBER immunogenicity criteria were also met, with the lower bound of the 95% CI for the proportion of patients with HI ≥ 1:40 on Day 43 well above 70% in subjects younger than 65 years and above 60% in subjects aged ≥65 years (Figure 2b).
Center for Biologics Evaluation and Research (CBER) criteria were met if the lower bound of the 95% CI was ≥70% in subjects aged 18 to <65 years and ≥60% in subjects aged ≥65 years.
Immunogenicity
As shown in Table 2, baseline GMTs were slightly higher in older adults (≥65 years). GMTs increased from baseline in the active treatment groups at Day 22, three weeks after the first vaccination, with a further increase at Day 43, three weeks after the second vaccination, in both age groups. Increases, as assessed by GMRs, were larger in younger (18 to <65 years) than in older (≥65 years) adults, as would be anticipated due to immunosenescence. CHMP criteria were met in both age groups on Day 43 (Supplementary Table S1). In subgroup analyses, no clinically significant differences between genders were observed. Seroconversion rates were consistently higher among subjects receiving aH5N1c than the placebo. On Day 43, 79.9% (95% CI 77.4 to 82.3) of subjects 18-65 years of age and 54.0% (95% CI 51.0 to 57.0) of subjects ≥65 years of age receiving aH5N1c had achieved seroconversion and met the age-appropriate CBER criteria for seroconversion rates (Table 3). CHMP criteria for seroconversion were also met on Days 22 and 43 for subjects aged 18 to <60 years and on Day 43 among those aged ≥60 years in the active vaccine group (Supplementary Table S2).

Abbreviations: CBER, Center for Biologics Evaluation and Research; CI, confidence interval; HI, hemagglutination inhibition. a Seroconversion was defined as either a prevaccination (baseline) HI titer < 1:10 and postvaccination HI titer ≥ 1:40, or a prevaccination HI titer ≥ 1:10 and a ≥4-fold increase in postvaccination HI antibody titer. Boldface indicates CBER criteria for seroconversion were met, i.e., lower bound of 95% CI ≥ 40% for subjects younger than 65 years and ≥30% for subjects aged ≥65 years on Day 43.
Safety
The frequency of any solicited local or systemic AE was comparable between the aH5N1c groups and higher in the active treatment groups than in the placebo group. The proportion of subjects for whom any solicited AE was reported was lower after the second than after the first vaccination in both the active treatment and the placebo groups (Figure 3). In a subgroup analysis by gender, no clinically significant differences in safety endpoints were observed.

Injection site pain was the most common solicited local AE, reported by 49.9% of subjects who received aH5N1c compared to 14.7% of those who received the placebo. Pain was reported more frequently among younger than older subjects: 64.1% vs. 35.9% among those aged 18 to <65 and ≥65 years, respectively, in the aH5N1c treatment groups and 19.9% vs. 9.6%, respectively, in the placebo group. The majority of pain reported was of mild or moderate intensity and mostly resolved within a couple of days following vaccination. The frequency of severe pain after any vaccination was low: 4 out of 2352 (0.2%) subjects in the aH5N1c group compared to 1 out of 784 (0.1%) subjects in the placebo group. The frequency of other solicited local AEs was too low for meaningful comparison between age groups.

The most common solicited systemic AE was fatigue, reported by 22.2% of subjects in the aH5N1c group compared to 20.4% in the placebo group. In the aH5N1c treatment group, more subjects aged 18 to <65 years reported fatigue (24.8%) than subjects aged ≥65 years (19.7%); the frequency of fatigue in the placebo group was 21.4% and 19.4% among the younger and older age groups, respectively. Malaise, headache, and myalgia were also reported more frequently by subjects aged 18 to <65 years than ≥65 years in the aH5N1c treatment groups. In both groups, solicited systemic AEs were predominantly mild or moderate in severity and mostly occurred within 3 days of injection.
The frequency of severe solicited AEs was 1.9% in the aH5N1c group compared with 2.8% in the placebo group.
The proportion of subjects reporting unsolicited AEs was similar among those receiving aH5N1c (53.1%) or the placebo (52.3%) throughout the study (Figure 4). The majority of the reported unsolicited AEs were of mild or moderate intensity. No differences in frequency, severity, or nature of unsolicited AEs were observed between the aH5N1c and placebo groups. None of the serious AEs or AEs of special interest reported by subjects who received aH5N1c were considered vaccine related. Two subjects in the placebo group reported a related AE of special interest (immune thrombocytopenic purpura and polymyalgia rheumatica); these events were also considered serious AEs. During the study, 12 (0.4%) subjects had serious AEs with a fatal outcome, none of which were attributed to the study treatment; most (n = 11) occurred after Day 43, during the follow-up period, in subjects ≥65 years with severe underlying comorbidities and multiple concomitant medications.
Discussion
The results from this study demonstrated that the aH5N1c vaccine was highly immunogenic for both younger (18 to <65 years) and older (≥65 years) adults and elicited high HI titers in both age groups. The coprimary immunogenicity objectives were met, showing consistency between the three consecutively produced lots of aH5N1c and also demonstrating that the vaccine met age group-specific CBER licensure criteria for the proportion of subjects with HI ≥ 1:40 and for seroconversion on Day 43. Moreover, at Day 43, all three CHMP age group criteria (GMR, proportion of subjects with HI ≥ 1:40, and seroconversion rates) were met. GMTs, GMRs, and seroconversion rates demonstrated significantly greater antibody responses among aH5N1c recipients than placebo recipients at all time points and across age groups. In the gender-based subgroup analyses, we observed no clinically significant differences in either immunogenicity or safety endpoints.
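The CBER licensure criteria cited above are threshold tests on the lower bound of a 95% confidence interval for a proportion. The exact CI method used in the trial is not stated in this excerpt; as a minimal sketch, a Wilson score interval (one common choice for binomial proportions) can be used, with illustrative counts rather than study data:

```python
import math

def wilson_lower(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson 95% score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom

def meets_cber_seroconversion(successes: int, n: int, age_65_plus: bool) -> bool:
    """CBER rule: 95% CI lower bound >= 40% (<65 y) or >= 30% (>=65 y)."""
    threshold = 0.30 if age_65_plus else 0.40
    return wilson_lower(successes, n) >= threshold

# Illustrative counts (assumed, not the trial's denominators):
print(meets_cber_seroconversion(790, 1000, age_65_plus=False))  # True
print(meets_cber_seroconversion(540, 1000, age_65_plus=True))   # True
print(meets_cber_seroconversion(320, 1000, age_65_plus=False))  # False
```

The same lower-bound test, with the 70%/60% thresholds, applies to the proportion of subjects achieving HI titers ≥1:40.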
After the first vaccination, immune responses increased from baseline, but two doses of aH5N1c augmented the response to levels that met licensing criteria. A decline in antibody titers 6 months after immunization was observed. This result was contrary to expectation based on phase 2 study results but consistent with other H5N1 studies [13][14][15][16]. Persistence of immune response was nevertheless evident, with HI titers that remained elevated over baseline 6 months after the first vaccination. Of note, in the pediatric population, both the cell and egg culture-derived MF59 adjuvanted H5N1 vaccines have been shown to elicit high antibody titers, which persisted up to 1 year after vaccination [17,18].
Although the immune response was lower in older than younger subjects, as may be expected due to age-related immunosenescence [19], both CBER and CHMP immunogenicity criteria were met. Immunosenescence encompasses a range of alterations in the immune response, including impaired function of antigen presenting cells and decreases in the number of T cells available to respond to new antigens, in antibody responses, in high-affinity antibodies, and in metabolic activity within memory CD4+ cells [20][21][22]. The MF59 adjuvant is a proprietary squalene-based, oil-in-water emulsion that improves the magnitude, breadth, and persistence of the immune response by enhancing antigen uptake at the injection site [23][24][25][26]. In multiple clinical trials, MF59-adjuvanted seasonal influenza vaccine boosted the immune response in older adults relative to standard influenza vaccines [27].
Baseline HI titers, prior to vaccination, were slightly elevated across all treatment groups. These titers appeared to be highest in the ≥65 years age group (whether receiving aH5N1c or placebo); similar results have been found by other investigators [14,28,29]. One of the possible explanations for this phenomenon is that elderly people with prolonged natural exposure to seasonal influenza viruses and/or multiple lifetime vaccinations may develop antibodies with antigenic cross-reactivity with H5N1 strains [28,29].
The aH5N1c vaccine was safe, well tolerated, and shown to have an acceptable risk-benefit profile overall. The majority of AEs were mild or moderate in severity and of a transient nature. Solicited AEs were more common with aH5N1c than the placebo, which is consistent with previous studies on the H5N1 vaccine [7]. The frequency of AEs was lower after the second than after the first vaccination, and generally, the incidence of solicited AEs was higher among subjects aged <65 than ≥65 years. The difference between aH5N1c and placebo in solicited local AEs, i.e., injection site pain, is consistent with trials of adjuvanted seasonal influenza vaccines [30,31]. The frequencies, nature, and severity of solicited systemic and unsolicited AEs were similar between the active vaccine and placebo groups, as reported in a trial of the nonadjuvanted H5N1 vaccine [7].
Cell-based production of pandemic vaccines may offer several advantages over egg-based methods. First, egg-based production may itself be vulnerable during an avian influenza pandemic, when the supply of embryonated eggs could be compromised. Second, egg-adaptation of the seed virus introduces the potential for antigenic mismatch between the vaccine and the circulating strain [32][33][34][35]. In contrast, a vaccine production platform based on mammalian cell culture ensures a closer match between the original candidate virus and the vaccine virus [36]. Cell-based manufacturing may also facilitate more rapid production to meet the needs of a population beset by a pandemic [1,6].
Conclusions from this study are limited because there was no evaluation of cross-reactive antibodies, although this was assessed in a previous phase 2 study with an MF59-adjuvanted egg culture-derived H5N1 vaccine [15]. The size of the study population was adequate to evaluate general safety but was not large enough to detect rare events.
Conclusions
Both coprimary immunogenicity objectives of this study were met for the aH5N1c vaccine. The ratio of GMTs for HI antibody responses to the H5N1 pandemic vaccine strain three weeks after the second vaccine administration demonstrated consistency in three consecutively produced lots of the aH5N1c vaccine. In addition, the age-appropriate CBER immunogenicity criteria for the percentage of subjects achieving an HI antibody titer ≥ 1:40 and those achieving seroconversion at Day 43 were met in both age groups (18 to <65 years and ≥65 years), and all three CHMP criteria (GMR, proportion of subjects with HI ≥ 1:40, and seroconversion rates) were met for subjects 18 to <60 years and ≥60 years of age. Vaccination with 7.5 µg of the aH5N1c vaccine elicited an immune response as shown by the increase in HI GMT after the first vaccination (measured on Day 22) that was further increased after the second vaccination (measured on Day 43). The aH5N1c influenza vaccine was well tolerated with a clinically acceptable safety profile.
Stomatal Conductance and Chlorophyll Characteristics and Their Relationship with Yield of Some Cocoa Clones under Tectona grandis, Leucaena sp., and Cassia surattensis
An optimum physiological condition will support high yield and quality of cocoa production. This research aimed to study the effects of stomatal conductance and chlorophyll content on cocoa production under three shade regimes. The research was conducted at Kaliwining Experimental Station, at an elevation of 45 m above sea level with a D climate type based on Schmidt & Fergusson. Cocoa trees planted in 1994 at a spacing of 3 m x 3 m were used in the study, arranged in a split-plot design. The shade tree species were teak (Tectona grandis), krete (Cassia surattensis), and lamtoro (Leucaena sp.) as the main plots, and the cocoa clones Sulawesi 01, Sulawesi 02, KKM 22, and KW 165 as subplots. This study showed an interaction between cocoa clone and shade species for stomatal conductance, where the stomatal diffusive resistance of KKM 22 was best under Leucaena sp. and Cassia surattensis, with values of 1.38 and 1.34 s cm-1, respectively. The highest chlorophyll content, stomatal index, and transpiration values were found under Leucaena sp. shade. There was a positive correlation of chlorophyll content and transpiration with pod yield of cocoa. The highest yield and the lowest bean count were obtained for the Sulawesi 01 clone under Leucaena sp. shade.
INTRODUCTION
For many years, cocoa has been regarded as a favored export commodity in Indonesia. Cocoa farming involves about 1.6 million farmers. Cocoa production in Indonesia has reached 833,310 tons/year, making Indonesia the third largest producer after Ghana and Ivory Coast (Ditjenbun, 2012).
Several factors affecting production are climatic factors and crop management practices such as shading, pruning, fertilizing, and genetic properties (Cannell, 1985; Susilo, 2015). Considering that cocoa is a C3, shade-loving plant, it needs suitable micro-climatic conditions for optimum growth. Meanwhile, studies have shown that shade management in cocoa plantations can mitigate the effects of extreme temperature and precipitation, thereby reducing the ecological and economic vulnerability of farmers (DaMatta et al., 2007).
PELITA PERKEBUNAN, Volume 31, Number 2, August 2015 Edition

Young cocoa plants require a light intensity of around 25-60% of full sunlight for their growth (Abdoellah & Soedarsono, 1996), while an intensity of 50-70% was reported to provide the highest production in mature cocoa (Prawoto, 2012). Previous research found that solar energy conversion efficiency under lamtoro (Leucaena sp.) shade was 59.8%, higher than under Cassia sp. and Tectona grandis shade for several clones of cocoa (Regazzoni et al., 2015). Meanwhile, shade was reported to play an important role in providing optimum conditions for several physiological characteristics of cocoa, primarily leaf area index (LAI), chlorophyll, and stomatal density, but other physiological characteristics still needed to be studied (Regazzoni et al., 2014).
Photosynthesis is a key process in the formation of plant assimilates. The factors that determine photosynthesis are the availability of CO2 and H2O, while the synthesis of carbohydrates is influenced by light and chlorophyll. The distribution of CO2 and H2O is determined by stomatal conductance, and stomatal diffusion resistance is directly related to the process of photosynthesis. Photosynthesis requires CO2, one of its substrates, which is obtained from the air and taken up by leaves through diffusion via the stomata; in this uptake process, stomatal diffusion resistance plays a very important role (Salisbury & Ross, 1975). Chlorophyll in leaves is the pigment that absorbs sunlight for photosynthesis, especially in the light reactions (Prawoto, 2015).
Transpiration is an important, dynamic physiological activity that plays a role in regulatory mechanisms and adaptation to internal and external conditions, mainly associated with the control of fluids (cell and tissue turgidity), the absorption and transport of water, minerals, and nutrients, and tissue temperature (Lakitan, 2013).
An optimum physiological condition will support the yield and quality of crop production. Studies have shown that shade trees help maintain coffee yields in the long term by reducing periodic over-bearing and subsequent die-back of coffee branches (DaMatta et al., 2007). Moreover, shade may positively affect bean size and composition as well as beverage quality by delaying and synchronizing berry ripening (Muschler, 2001). The physiology of cocoa and its effect on bean production and quality under various shade conditions needs further study. The aim of this research was to investigate the effects of stomatal conductance and chlorophyll content on cocoa production under three shade species.
MATERIALS AND METHODS
The field experiment was conducted at Kaliwining Experimental Station, Indonesian Coffee and Cocoa Research Institute, Jember, Indonesia, at an elevation of 48 m above sea level. The soils are classified as low humic loamy clay, with a D climate type based on the Schmidt & Fergusson classification. The layout was a split plot in a randomised complete block design with three replications. The study was carried out from April 2014 to May 2015.
Cocoa trees planted in 1994 were used in the study. The trees were planted at a spacing of 3 m x 3 m. The shade species were teak (Tectona grandis), krete (Cassia surattensis), and lamtoro (Leucaena sp.) as main plots, and the cocoa clones Sulawesi 01, Sulawesi 02, KKM 22, and KW 165 as subplots.
Temperature and relative humidity under the shade trees were measured using a thermohygrometer, recorded at 10.00 AM. Ground coverage was assessed using the fish-eye capture method and analyzed using Hemiview Canopy Analysis software (Delta-T).
Simultaneous records of photosynthetically active radiation (PAR), leaf temperature, stomatal diffusive resistance, and leaf transpiration were taken using a steady state porometer (LI-1600, Licor Incorporation, USA). For this purpose, the equipment was tagged on leaves of each plant (the youngest or second youngest fully expanded, fully hardened leaf in each case), which are reported to be the most physiologically active (Daymond et al., 2011). For the gas exchange measurements, leaf chamber temperature was 29.5-30.1°C.
Stomatal index was observed by making stomatal impressions with nail polish. Polish was applied to the abaxial leaf surface and observed under a microscope. The numbers of stomata and epidermal cells were counted to determine the stomatal index, obtained as: Stomatal Index (%) = [number of stomata / (number of stomata + number of epidermal cells)] x 100. Chlorophyll content was measured using a SPAD-502 chlorophyll meter (Minolta). The SPAD value (unit) was converted to chlorophyll content according to the equation of Markwell et al. (1995): chlorophyll content (µmol m-2) = 10^(M^0.265), where M is the measurement result. For this observation, 10 plants in each plot were selected by systematic random sampling. In April and September, the numbers of small pods (length 2-10 cm), medium pods (length 11-15 cm), and large pods (length >15 cm) were recorded. The expected yield was calculated using the probability of each pod class being harvested, as follows: 20% of small, 75% of medium, and 95% of large pods (Prawoto, 2014).
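The stomatal index, the SPAD-to-chlorophyll conversion, and the expected-yield weighting described above can be sketched as follows. The Markwell exponent 0.265 is taken from the cited paper (Markwell et al., 1995); the counts in the example are illustrative, not study data:

```python
def stomatal_index(n_stomata: int, n_epidermal: int) -> float:
    """Stomatal index (%) = stomata / (stomata + epidermal cells) * 100."""
    return 100.0 * n_stomata / (n_stomata + n_epidermal)

def spad_to_chlorophyll(spad: float) -> float:
    """Markwell et al. (1995): chlorophyll (umol m-2) = 10 ** (SPAD ** 0.265)."""
    return 10 ** (spad ** 0.265)

def expected_yield(small: int, medium: int, large: int) -> float:
    """Expected harvest from pod counts, weighted by harvest probability
    (20% small, 75% medium, 95% large; Prawoto, 2014)."""
    return 0.20 * small + 0.75 * medium + 0.95 * large

print(stomatal_index(20, 80))               # 20.0 (%)
print(round(spad_to_chlorophyll(50), 1))    # ~660 umol m-2 for SPAD 50
print(round(expected_yield(10, 8, 6), 1))   # 13.7 pods expected
```

For reference, a chlorophyll content of about 397.5 µmol m-2, as reported under Leucaena sp. shade, corresponds to a SPAD reading of roughly 37 under this conversion.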
Data were analyzed using analysis of variance; where significant differences were found, means were compared using the Duncan test at the 5% level. Regression analysis was also conducted to determine the relationships among parameters.
RESULTS AND DISCUSSION
According to Table 1, the temperature under the three shade regimes during the rainy season was almost the same, 34-35°C, but the humidity under teak shade was higher because its canopy was denser than the others. Under the optimum lamtoro shade, temperature and humidity were 38°C and 50% in the dry season and 35°C and 71% in the wet season, respectively. The optimum temperature for a cocoa plantation is 22.4-30.4°C to support photosynthesis (Prawoto, 2015).
Ground coverage describes how much incoming light is blocked from reaching the ground by the shade canopy. Based on Table 1, the ground coverage of Leucaena sp. was 38 ± 6.8% (62% light illumination), Cassia surattensis was 49.6 ± 8.2% (50.4% light illumination), and teak was 57 ± 6.8% (43% light illumination). Regazzoni et al. (2014) reported that solar energy efficiency increased with the percentage of shading, where Leucaena sp. (60% shading) absorbed less solar energy than Tectona grandis and Cassia surattensis.
Stomatal diffusion resistance is the resistance to the movement of gas from high to low concentration through the stomata; it inhibits gases entering through the stomata (Salisbury & Ross, 1975). The higher the value of stomatal resistance, the larger the barrier to diffusion of gases (including CO2) into the leaf, so photosynthesis will be lower. The higher the intensity of incoming light, the lower the stomatal diffusion resistance, as shown by the negative linear regression between stomatal diffusion resistance and the percentage of shading (R2 = -0.11, data not presented).
According to Drake et al. (2006), stomatal characters affect the mechanism of gas exchange in the plant. The stomatal response over time is an important factor determining gas exchange in the leaf, from which optimum environmental conditions can be inferred. The rate of stomatal response will also affect photosynthesis and transpiration, so it can be used to improve transpiration efficiency.
The single factors of shade species and clone gave significantly different values. The highest transpiration was given by clone Sulawesi 01, which was not significantly different from the KKM 22 and Sulawesi 02 clones, but significantly different from the KW 165 clone. For the shade treatment, the higher the intensity of radiation, the higher the transpiration. Transpiration under Leucaena sp. shade (60% light intensity) was 2.46 µg cm-2 s-1, the highest, compared with 2.25 µg cm-2 s-1 under Cassia surattensis (50% shading) and 2.11 µg cm-2 s-1 under Tectona grandis (Table 3) (Mayolie & Gitau, 2012; Sena et al., 2007).
Transpiration rates initially increased and then declined, caused by differences in stomatal reaction to vapour pressure deficit (VPD). This was also observed by Hernandez et al. (1989), who reported that rapid closure of stomata as VPD increased reduced transpiration in coffee, cacao, and tea. Increased CO2 content increased the photosynthesis rate and decreased stomatal diffusion resistance. The characteristics of stomata influenced CO2 fixation in the leaf mesophyll (Wong et al., 1979).
Based on this research, the stomatal index was affected by shade tree species and clone. The higher the stomatal index, the higher the number of stomata, although it is also influenced by the number of epidermal cells. The stomatal index of Sulawesi 01 was 20.6%, significantly different from the other clones.
The stomatal index is derived from the ratio of the numbers of stomata and epidermal cells. The stomatal indices under Leucaena sp., Cassia surattensis, and Tectona grandis were 19.8%, 18.9%, and 18.0%, respectively. Leucaena sp. shade thus provided a higher stomatal index than Tectona grandis, showing that an increase in shading percentage is followed by an increase in stomatal index (Regazzoni et al., 2014; Wahyudi et al., 2014).
The interaction was not significant for chlorophyll parameters. Meanwhile, clone and shade tree species each had significant effects on chlorophyll: Leucaena sp. shade gave the highest chlorophyll content (397.5 µmol m-2), and the Sulawesi 01 clone had the highest chlorophyll value (416.8 µmol m-2).
Production can be shown by the number of pods per tree in the two semesters. In semester 1, production was lower than in semester 2 because heavy rain caused flower fall. The number of pods per tree of Sulawesi 01 was higher (16.1 pods/tree). However, KW 165 had a high level of cherelle wilt, which resulted in a low number of mature pods (Figure 1). (Note: numbers within the same column followed by the same letter are not significantly different at the 5% level according to the Duncan test.) The number of beans per pod, pod length, and pod girth showed no differences. Meanwhile, under Leucaena sp. shade, pod girth was larger, indicating better pod quality. In addition, pod husk weight under Leucaena sp. was smaller than under Cassia surattensis and Tectona grandis. Assimilate was directed to bean quality, as shown in the bean count (Figure 4).
Bean count is the number of dry beans (7-8% moisture content) per hundred grams of dry beans; the smaller the bean count, the better filled the beans. Figure 4 shows that the bean count under Leucaena sp. and Tectona grandis shade was lower than under Cassia surattensis shade. The Sulawesi 01 clone under Leucaena sp. shade was lower than the other clones, amounting to 93.4 dry beans/100 g dry beans. Mayolie & Gitau (2012) suggested that coffee and cocoa flowering is controlled by the amount of light reaching the trees, with more sunlight resulting in more flowers (Beer et al., 1998), possibly because more nodes are formed per branch or more flower buds exist at each node. Leucaena sp. is a nitrogen-fixing plant that may supply 254 kg N ha-1 yr-1 to crops in an alley cropping system (Pandey et al., 2006). Bean formation was affected by optimum physiological conditions. Based on this study, chlorophyll had a positive effect on production, shown by a positive correlation coefficient (R2 = 0.47). Chlorophyll may increase the light-absorbing surface and thus photosynthetic efficiency, so increasing chlorophyll is followed by increasing production; a high total chlorophyll content indicates high potential biomass production (Suharja & Sutarno, 2009; Daymond, 2011). In addition, transpiration showed a positive correlation coefficient (R2 = 0.40).
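The R2 values above are coefficients of determination from simple linear regressions of yield on chlorophyll and on transpiration. A minimal sketch of the computation, using made-up illustrative pairs rather than the study's plot data:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x
    (equal to the squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Illustrative chlorophyll (umol m-2) vs. pods/tree pairs, not the study data:
chl = [350, 370, 390, 400, 417]
pods = [9, 11, 12, 13, 16]
print(round(r_squared(chl, pods), 2))  # 0.94
```

A perfectly linear relationship gives R2 = 1; the study's reported values of 0.47 and 0.40 indicate moderate positive associations.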
Transpiration provides several advantages for plants by accelerating the rate of nutrient transport in the xylem vessels, keeping plant cells at optimum turgidity, and maintaining the stability of leaf temperature. The study of Novak & van Genuchten (2008) revealed that yield and biomass in corn increased with increasing transpiration. In addition, transpiration was highly correlated with total biomass of sorghum (R2 = 0.82) (Vadez et al., 2011), similar to the findings of Xin et al. (2009).
Spatial-Temporal Characteristics of Coastline Changes in Indonesia from 1990 to 2018
As a valuable resource in coastal areas, coastlines are not only vulnerable to natural processes such as erosion, siltation, and disasters, but are also subjected to strong pressures from human processes such as urban growth, resource development, and pollution discharge. This is especially true for reef nations with rich coastline resources and a large population, like Indonesia. The technical combination of remote sensing (RS) and geographic information systems (GIS) has significant advantages for monitoring coastline changes on a large scale and for quantitatively analyzing their change mechanisms. Indonesia was taken as an example in this study because of its abundant coastline resources and large population. First, Landsat images from 1990 to 2018 were used to obtain coastline information. Then, the index of coastline utilization degree (ICUD) method, the changes in land and sea patterns method, and the ICUD at different scales method were used to reveal the spatiotemporal change pattern of the coastline. The results found that: (1) Indonesia's total coastline length increased by 777.40 km over the past 28 years, of which the natural coastline decreased by 5995.52 km and the artificial coastline increased by 6771.92 km. (2) At the island scale, the island with the largest increase in ICUD was Kalimantan, at the expense of the mangrove coastline. (3) At the provincial scale, the province with the largest change in ICUD was Sumatera Selatan Province, which increased from 100 in 1990 to 266.43 in 2018. (4) The dominant trend in the land and sea pattern of the Indonesian coastline was expansion toward the sea, with relatively little erosion toward the land; Riau Province had the most significant expansion of land area, about 177.73 km2, accounting for 23.08% of the increased national land area, while seawater erosion was worst in Jawa Barat Province.
Based on the analysis of population and economic data during the same period, it was found that the main driving mechanism behind Indonesia’s coastline change was population growth, which outweighed the impact of economic development. However, the main constraint on the Indonesian coastline was the topographic factor. The RS and GIS scheme used in this study can not only provide support for coastline resource development and policy formulation in Indonesia, but also provide a valuable reference for the evolution of coastline resources and environments in other regions around the world.
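The abstract uses the ICUD without defining it. In the coastline-change literature, this index is typically computed as a length-weighted sum of human-influence scores over coastline types, ICUD = 100 x sum(A_i x l_i / L), where A_i is the influence weight of type i and l_i its length. A sketch with assumed illustrative weights (the weights actually used in this study are not given here):

```python
def icud(lengths_km: dict, weights: dict) -> float:
    """Index of coastline utilization degree: 100 * sum(A_i * l_i / L),
    where A_i is the human-influence weight of coastline type i."""
    total = sum(lengths_km.values())
    return 100.0 * sum(weights[t] * l / total for t, l in lengths_km.items())

# Illustrative weights (assumed, not from the study): natural = 1, artificial = 4.
weights = {"natural": 1.0, "artificial": 4.0}
print(icud({"natural": 100.0, "artificial": 0.0}, weights))  # 100.0 (fully natural)
print(icud({"natural": 50.0, "artificial": 50.0}, weights))  # 250.0
```

With a weight of 1 for natural coastline, a fully natural province scores 100, which is consistent with the Sumatera Selatan baseline of 100 quoted in the abstract; artificialization raises the score toward the maximum weight times 100.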
The coastline is the boundary between sea and land [1], an important base and carrier for human survival and development, and a special natural resource [2]. Its features include instability [3][4][5][6][7], functional diversity [8], regional difference [9,10], and other important characteristics. Since the twentieth century, coastal countries have shifted their economic development centers to coastal areas, and nearly 50% of the global population has settled within 100 km of the coastline [11]. However, this transfer of economic centers has caused rapid changes in coastline resources, which has had a huge impact on the economic, social, ecological, and environmental aspects of coastal areas [12]. For example, large-scale reclamation projects have greatly reduced the proportion of natural coastlines [13]. The excessive use of coastlines, artificial straightening of naturally curved coastlines, and disorderly aquaculture have degraded the ecological environment of coastal areas. With the continuous development of marine resources, the coastal zone will become a key area for coastal economic development. Monitoring coastline changes is an effective way to study the environmental and ecological changes of the coastal zone. Therefore, monitoring the coastline has become an important task for sustainable development and environmental protection [14,15].
At present, methods based on remote sensing (RS) images have become common for monitoring coastline changes because of their large coverage and low cost [16][17][18][19][20][21][22]. Many scholars have researched coastline changes, and the potential causes of these changes have been analyzed and identified [23]. For example, Mishra et al. [24] evaluated the long- and short-term dynamics of the coastline of the Uri district in India over the past 25 years (1990-2015) using open-access multitemporal satellite imagery, concluding that the changes were driven by human construction and coastline erosion. To illustrate the impact of dam construction on the delta, Kale et al. [25] studied erosion rates and coastline variations of the Yesilirmak River on the northern coast of Turkey before 2017. Thoai et al. [26] studied the coastline changes of the Ca Mau Cape in Vietnam over the past 20 years and concluded that the most important factors affecting coastline changes in the region were forest area loss, river dredging, aquaculture, and infrastructure development. Wu et al. [27] took the Shenzhen Special Economic Zone of China as an example to study coastline changes from 1988 to 2015, and found that the stability characteristics of the eastern and western coastlines of Shenzhen were completely different; the regional differences were mainly reflected in the morphological changes and the patterns of coastline change.
In the above studies, research on coastline changes has mainly focused on two aspects: first, describing the characteristics of spatiotemporal coastline changes through the rate of coastline change and the changes in land and sea area that coastline changes cause [28,29]; second, analyzing these characteristics and trends while also exploring the influence of climate, geology, human activities, and other factors on coastline changes [30][31][32][33]. However, existing coastline research has mainly focused on smaller areas, such as bays and estuaries [34][35][36], and these regional limitations have prevented comparative analysis of large-scale, long-term sequences [37]. Furthermore, few studies have examined the coastlines of island countries with more fragmented territories; research on island coastlines only began in the twenty-first century [38].
Due to global warming and rising sea levels, the future development of islands faces serious threats [39,40]. Governments around the world are adopting various strategies to address the threat posed by rising sea levels and coastal flooding to coastal cities [41]. However, most research so far has focused on only a few islands and atolls [42][43][44]. In the absence of large-scale research and data support for island countries, the spatial and temporal changes of island coastlines have not been well documented. Intuitively, geographically fragmented island nations are the most vulnerable to seawater erosion and human activities, owing to their open coasts and frequent land-water exchanges. But is this actually the case? If they are affected, what are the main effects of the changes in their coastlines? Are they driven mainly by natural or human factors? What are the regional differences? The answers to these questions are of great reference value for understanding whether geographically fragmented island nations should actively respond to the ecological and environmental problems brought about by coastline change and, if necessary, how they should respond.
Indonesia is a typical archipelagic country with rich coastline resources [45]. It lies at the crossroads of Asia and Oceania and sits on important straits for maritime transportation: the Malacca Strait, Lombok Strait, and Makassar Strait. However, because of this unique geographical location, frequent disasters have caused great damage to the resources of its coastal areas. At present, most research on the region focuses on the resource and environmental damage caused by coastal disasters [46][47][48]. For example, Borrero et al. [49] conducted on-site investigations of the 2007 tsunami in the Bengkulu Province of Sumatra, Indonesia, focusing on the destruction of coastline resources and on casualties, and obtained relevant data for disaster prediction and simulation. Paulik et al. [50] used satellite images to survey building and environmental damage within 300 m of the coastline after the earthquake and tsunami disaster in Palu, Sulawesi; the survey results can serve as a basis for future tsunami disaster and risk research in Indonesia. These studies examined only disaster prediction and damage losses, however, and ignored the changes in the coastline resources most directly damaged by the disasters. Some scholars have also monitored the coastline of the city of Semarang, Indonesia, and found that changes in this area were dominated by sedimentation [51,52]. These studies, however, analyzed only small areas and lacked monitoring of national coastline changes. In recent years, Zhang et al. [38] analyzed the characteristics of coastline changes across large areas of Southeast Asian countries and concluded that the coastline of Southeast Asia remained relatively stable during 2000-2015, but underwent large spatial changes in estuaries, bays, and straits.
Considering the particularity of Indonesia's geographic and climatic characteristics, few studies have conducted a comprehensive analysis of coastline spatial changes at the national, island, and interprovincial scales over a long time period. Obtaining data on coastline spatial changes at different scales in the region from 1990 to 2018 can provide important information for evaluating the development and ecological risks of the coastal areas of multi-island countries and help manage and protect the coastline more scientifically and rationally.
Therefore, this paper took Indonesia as an example and used remote sensing technology to quantify the development and utilization of the country's coastline resources and their spatial distribution at different scales [53]. These results can guide geographically fragmented island nations that straddle multiple straits, waterways, and harbors toward optimal regional allocation of coastline resources alongside economic development. They can also provide basic supporting data for disaster prevention and control in Indonesia in response to the frequent disasters caused by climate change. Furthermore, this study can serve as a reference for other island nations studying and responding to coastline environmental problems caused by natural and human factors.
Study Area
Indonesia is located in Southeast Asia, is mainly affected by the northern and southern equatorial currents, and spans the equator between longitude 96°-140°E and latitude 12°S-7°N. It is the largest archipelagic country in the world, consisting of approximately 17,508 islands between the Pacific and Indian Oceans. The land area is about 1.904 million square kilometers and the ocean area about 3.166 million square kilometers (excluding the exclusive economic zone) (Figure 1). Kalimantan Island in the north faces Malaysia across the sea, and New Guinea Island is connected to Papua New Guinea. The northeast faces the Philippines, the southwest borders the Indian Ocean, and the southeast faces Australia. The islands of Indonesia are relatively scattered, including Kalimantan, Sumatra, Papua, Sulawesi, and Java. The interiors of the islands are rugged mountains and hills, with narrow plains only along the coasts, and they are surrounded by shallow seas and corals. On Kalimantan Island, the mountains stretch from the middle to the west, the coastal plain is vast, and the south is swampy. In Sumatra, the mountains run diagonally from northwest to southeast; on their northeastern side are hills and a wide coastal alluvial plain that is swampy in the east. Sulawesi, mostly mountainous, has narrow plains only along the coast. Java has a plain in the north and a lava plateau and mountains in the south. Papua Island, with high mountains in the west, has Indonesia's highest peak, Chaya Peak, at an altitude of 5030 m; its southern plain is relatively wide. Indonesia has a typical tropical rain forest climate, with an average annual temperature of 25-27 °C and no seasonal differences. The north is affected by the northern-hemisphere monsoon, with abundant precipitation in July-September.
The south is affected by the monsoon in the southern hemisphere; precipitation is rich in December, January, and February, and the annual precipitation is 1600-2200 mm.
Data Source and Preprocessing
In this study, Landsat Thematic Mapper (TM) and Operational Land Imager (OLI) remote sensing images acquired in 1990 and 2018 were used as the basic data sources; all were downloaded freely from the website (http://glovis.usgs.gov). To study the spatial and temporal changes of the Indonesian coastline over the past 28 years, 168 scenes of Landsat TM/OLI images covering the Indonesian coastline were collected for each period, for a total of 336 scenes over the two periods (all with less than 20% cloud cover), with imaging times around 1990 and 2018, respectively. Because of the large volume of image data, the individual scenes are not listed in detail in this article. Because Indonesia's land spans the equator, the remote sensing images were projected into the Mercator projection coordinate system during preprocessing.
In addition, to further analyze the potential ecological and environmental problems caused by coastline changes, we collected elevation data for the Indonesian region from the Geospatial Data Cloud (http://www.gscloud.cn/), with a spatial resolution of 90 m, which was resampled to 30 m in ArcGIS 10.2. To uncover the changes in coastline erosion and sedimentation, economic and population data were used for the driving-force analysis: we obtained Indonesia's Gross Domestic Product (GDP) and population data from 1990 to 2018, as well as population data for each province, from the World Bank's public data (https://data.worldbank.org.cn/) and the Indonesian Central Statistical Office (https://www.bps.go.id/).
Extraction and Classification of Coastline
There are many standards for the definition of coastlines; to provide a unified standard for remote sensing change monitoring, the coastline in this article is defined as the instantaneous waterline [54]. Esmail et al. [55] compared and analyzed three methods, squared-error clustering, thresholding, and screen digitization, and concluded that squared-error clustering was the best method for coastline extraction, but traditional methods such as visual interpretation through human-computer interaction are still commonly used by experts to extract coastal information [56]. Therefore, this paper adopted the method of human-computer interactive visual interpretation. First, ENVI 5.2 [57] software was used to preprocess the remote sensing images, including geometric correction, image registration, and image stitching. Most of the images had already been systematically geometrically corrected. For the remaining images, geometric correction involved finding points with the same name on the two images according to pixel gray values or ground features. When selecting points, we attempted to distribute them as evenly as possible across the entire image; points were mainly chosen from easily distinguishable, fine features such as road intersections and the edges of city outlines, and the maximum RMS error had to be less than 1. The geometric correction of the two image sets was then completed. Based on the different reflectance spectra of the features near the coastline, water and land in the preprocessed remote sensing images were separated using the normalized difference water index (NDWI) [58], the boundary line was enhanced, and the exact location of the instantaneous waterline was extracted by Otsu's threshold segmentation method [59].
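The water-land separation step above can be sketched as follows. This is a minimal illustration of NDWI plus Otsu thresholding, not the ENVI workflow actually used in the study; the tiny 4 × 4 "image", the band values, and the function names are synthetic assumptions.

```python
# Sketch of the waterline-extraction step: compute NDWI from green and NIR
# reflectance, then binarize it with Otsu's threshold. Synthetic data only.

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR) per pixel."""
    return [[(g - n) / (g + n) if (g + n) else 0.0
             for g, n in zip(grow, nrow)]
            for grow, nrow in zip(green, nir)]

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    # Total intensity, using bin centers as representative values.
    sum_all = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0.0
    best_t, best_var = lo, -1.0
    for i, h in enumerate(hist):
        w_bg += h
        if w_bg == 0 or w_bg == total:
            continue
        sum_bg += (lo + (i + 0.5) * width) * h
        m_bg = sum_bg / w_bg                       # background mean
        m_fg = (sum_all - sum_bg) / (total - w_bg)  # foreground mean
        var = w_bg * (total - w_bg) * (m_bg - m_fg) ** 2
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t

# Synthetic scene: left half water (high green, low NIR), right half land.
green = [[0.30, 0.32, 0.10, 0.11]] * 4
nir   = [[0.05, 0.04, 0.30, 0.32]] * 4
index = ndwi(green, nir)
t = otsu_threshold([v for row in index for v in row])
water_mask = [[v > t for v in row] for row in index]
```

Pixels with NDWI above the Otsu threshold are marked as water; tracing the water/land boundary of `water_mask` then yields the instantaneous waterline.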
The classification of the obtained coastline was mainly based on knowledge from Google Earth images and field surveys, together with digitized topographic maps and visual interpretation of the remote sensing images. Concerning the classification details, we first obtained the instantaneous waterline of the image and then assigned the coastline type based on the land cover within a range of 100 m (3 pixels) along the instantaneous waterline. For coastlines with more complicated land cover within this range, we adopted the principle of the largest area: the land use type occupying the largest share of area within the 100 m range determined the coastline class. After classification according to this standard, the consistency of the extracted coastline was good; in particular, comparing the two periods eliminated as much as possible the inconsistencies of standards that would otherwise have caused deviations in the analysis of coastline changes between the two periods. During 2019, we conducted a field survey covering the coast of Indonesia; after 14 people had travelled across 1559.97 km², we obtained more than 4000 field photographs. Based on comprehensive research results [3,[60][61][62][63], the Indonesian coastline was divided into two primary categories, natural coastline and artificial coastline. The natural coastline was further divided into bedrock, silt, mangrove, and sandy coastlines; the artificial coastline was further divided into harbor and wharf, embankment, and agricultural coastlines, for a total of seven secondary categories (Table 1). The corresponding coastline types were displayed on the Landsat images using a band 5-4-3 false-color composite (Figure 2), and the coastline classification was finally achieved by human-computer interactive visual interpretation.
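The largest-area rule described above can be sketched as follows; this is an illustrative assumption of how such a rule might be coded, not the interactive interpretation workflow of the study, and the pixel labels and counts are synthetic.

```python
# Sketch of the classification rule: each coastline segment is assigned the
# land-cover type occupying the largest area within its 100 m (3-pixel)
# buffer. Pixel labels below are synthetic illustrations.

from collections import Counter

def classify_segment(buffer_pixels, pixel_area_m2=30 * 30):
    """Return the dominant land-cover type in the buffer and its area (m^2)."""
    counts = Counter(buffer_pixels)
    land_type, n = counts.most_common(1)[0]
    return land_type, n * pixel_area_m2

# Land-cover labels of the pixels inside one segment's 100 m buffer.
buffer_pixels = ["mangrove"] * 14 + ["agricultural"] * 9 + ["sandy"] * 4
label, area = classify_segment(buffer_pixels)
print(label, area)  # mangrove 12600
```

With mixed land cover in the buffer, the segment is labeled by the single type with the largest area share, here mangrove (14 of 27 pixels).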
The specific processing flow is shown in Figure 3. Interpretation signs for the coastline types are summarized in Table 1; for example, harbor and wharf coastlines appear as the boundary between the sea and the outer edges of ports, docks, storage land, towns, and industrial land, generally distributed on a large scale with bright but non-uniform tones, while agricultural coastlines show a rectangular grid arrangement, red color, and uniform texture.
Coastline Accuracy Evaluation
The statistical method of manually selecting random points was used to evaluate the accuracy of the extracted coastline [64]. Because the coastline is long and the cost of extensive sampling tests would be too high, we randomly selected three regions of Indonesia from west to east for representative inspection, so that the samples covered the study area as much as possible. In addition, because our extraction standards were uniform and the data consistency across the entire region was good, we considered it feasible to use the accuracy of these detailed verifications in place of an overall accuracy evaluation. First, we manually selected 300 points along the edge of the coastline on the original image for each region and each period (Figure 4), for a total of 1800 points. Then, we calculated the shortest distance from each random point to the extracted coastline; if the random point lay inland, the distance value was positive, otherwise negative. According to Table 2, the resolution of the TM and OLI images was 30 m. According to the histogram statistics (Figure 5), the proportions of random points (from west to east) within one-pixel distance were 84.33%, 84.67%, and 89.67% for 1990, and 90.66%, 87.33%, and 90.32% for 2018. The extraction accuracy of the coastline therefore met the needs of the study.
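The distance-based accuracy check above can be sketched as follows; this is a minimal sketch assuming the coastline is a polyline in projected (meter) coordinates, with synthetic check points, and it reports unsigned distances only (the study's sign convention for inland versus offshore points is omitted).

```python
# Sketch of the accuracy evaluation: for each manually placed check point,
# find the shortest distance to the extracted coastline (a polyline), then
# report the share of points within one pixel (30 m). Synthetic coordinates.

import math

def point_segment_dist(p, a, b):
    """Shortest distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection of p onto the segment to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def dist_to_polyline(p, line):
    return min(point_segment_dist(p, a, b) for a, b in zip(line, line[1:]))

def within_one_pixel(points, line, pixel=30.0):
    """Proportion of check points within one pixel of the coastline."""
    hits = sum(1 for p in points if dist_to_polyline(p, line) <= pixel)
    return hits / len(points)

# Extracted coastline (polyline) and five synthetic check points, in meters.
coastline = [(0, 0), (100, 10), (200, 0), (300, 20)]
checks = [(50, 10), (150, 40), (250, 5), (120, 12), (200, 90)]
print(within_one_pixel(checks, coastline))  # 0.6
```

In the study this proportion was computed per region and per period over 300 points each, yielding the 84-91% figures reported above.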
Index of Coastline Utilization Degree
Drawing on the concept and calculation method of land use degree, the index of coastline utilization degree (ICUD) was adopted here [65], using the indices defined in the study by Wu et al. [65]. The greater the impact of human activities on the coastline, the more difficult it is to restore the coastline to a natural state, and the lower the diversity of coastline functions becomes. The index of human force degree ranged from 1 to 4 (Table 3).
The index of coastline utilization degree was calculated through Equation (1):

ICUD = Σ (Ai × Ci) × 100, summed over i = 1, …, n, (1)

where ICUD is the index of coastline utilization degree, Ai is the impact score of human force degree for category i utilization, Ci is the length proportion of category i coastline in the total coastline length, and n is the number of coastline categories.
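A minimal sketch of the ICUD calculation in Equation (1) follows, assuming human-force scores from 1 to 4 as in Table 3; the specific per-category scores and coastline lengths below are hypothetical illustrations, not the study's values.

```python
# Sketch of Equation (1): ICUD = sum(A_i * C_i) * 100, where C_i is each
# category's share of total coastline length. Scores and lengths synthetic.

def icud(lengths_km, scores):
    """Index of coastline utilization degree from category lengths and scores."""
    total = sum(lengths_km.values())
    return 100 * sum(scores[cat] * (lengths_km[cat] / total)
                     for cat in lengths_km)

# Hypothetical scores: natural types low human impact, artificial types high.
scores = {"bedrock": 1, "mangrove": 1, "sandy": 1, "silt": 1,
          "agricultural": 3, "embankment": 4, "harbor_wharf": 4}

lengths = {"bedrock": 500.0, "mangrove": 120.0, "sandy": 80.0,
           "silt": 50.0, "agricultural": 150.0, "embankment": 80.0,
           "harbor_wharf": 20.0}
print(round(icud(lengths, scores), 2))  # 160.0
```

Note that a fully natural coastline with score 1 everywhere gives ICUD = 100, consistent with the minimum values reported in the results (e.g. Sumatera Selatan at 100 in 1990).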
Changes in Land and Sea Patterns
The process of coastline expansion toward the sea or retreat toward the land changes the land-sea pattern of the coast. Seaward advance of the coastline was represented spatially by an increase in land area, while landward retreat was represented by a decrease in land area (Figure 6). The change in land area reflects the direction and magnitude of coastline change; conversely, changes in the use of the coastline also reveal the main driving factors and processes of land-sea pattern change. The black curve is the position of the coastline in 1990 and the red curve its position in 2018; the two lines have the same orientation, with the land on the left side of each coastline. The coastlines of the two periods were superimposed, generating multiple polygons on both sides of the earlier coastline. If the later coastline lay to the left of the earlier coastline, the coastline had receded; conversely, if it lay to the right, the coastline had advanced toward the sea. The increased land area S(increase) and decreased land area S(decrease) caused by coastline change were calculated through Equations (2) and (3):

S(increase) = Σ Ai, summed over i = 1, …, j, (2)

S(decrease) = Σ Bi, summed over i = 1, …, k, (3)

where Ai is the area of an increased polygon, j is the number of increased polygons, Bi is the area of a decreased polygon, and k is the number of decreased polygons.
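Equations (2) and (3) can be sketched as follows; this is a simplified illustration assuming the increase/decrease polygons have already been separated by overlaying the two coastlines, with synthetic rectangular polygons standing in for the real GIS geometries.

```python
# Sketch of Equations (2) and (3): the 1990 and 2018 coastlines are overlaid,
# the polygons between them are tagged as "increase" (seaward expansion) or
# "decrease" (landward erosion), and their areas are summed. Synthetic
# polygon coordinates, in kilometers.

def shoelace_area(polygon):
    """Unsigned area of a simple polygon from its vertex ring (km^2)."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def land_change(increase_polys, decrease_polys):
    """S(increase) = sum of A_i (i=1..j); S(decrease) = sum of B_i (i=1..k)."""
    s_inc = sum(shoelace_area(p) for p in increase_polys)
    s_dec = sum(shoelace_area(p) for p in decrease_polys)
    return s_inc, s_dec

# Two seaward-expansion polygons and one landward-erosion polygon.
expansion = [[(0, 0), (4, 0), (4, 2), (0, 2)],       # 8 km^2
             [(10, 0), (13, 0), (13, 2), (10, 2)]]   # 6 km^2
erosion = [[(20, 0), (22, 0), (22, 1), (20, 1)]]     # 2 km^2
s_inc, s_dec = land_change(expansion, erosion)
print(s_inc, s_dec)  # 14.0 2.0
```

Summed over all polygons nationwide, these two totals correspond to the 770.14 km² of seaward expansion and 388.09 km² of landward erosion reported in the results.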
ICUD Analysis at Different Scales
The ICUD was calculated at the national, island, and provincial scales to analyze the regional differences in Indonesia's coastline resources. The island scale refers to the division of Indonesia into seven geographical spatial units: Sumatra, Kalimantan, Java, Sulawesi, Nusa Tenggara, Maluku, and Papua, whose differences we also analyzed. Furthermore, the differences in the degree of coastline utilization between Indonesia's 33 provinces were analyzed and graded (Table 4).
Spatial Distribution Characteristics of Coastlines and ICUD
The coastline types in 1990 and 2018 were obtained by acquiring images of the coastal areas of Indonesia covered by remote sensing satellites over the past 28 years, and subsequently interpreting them. The spatial distribution is shown in Figure 7. The results of the two periods showed that over the past 28 years, due to the comprehensive effects of human development and sea-land interaction, the total length of Indonesia's coastline increased by 777.40 km, of which the natural coastline decreased by 5994.52 km, the artificial coastline increased by 6771.92 km, and the ICUD increased by 16.74. The areas with the most significant changes in artificial coastlines were located on the south coast of Sumatra and Java, the southeastern coast of Kalimantan, and the northern coast of the Maluku Islands. The results show that over the past 28 years, Indonesia's natural coastline has accounted for more than 70% of the total coastline, indicating that Indonesia's coastline resources have not changed significantly and their potential for development and utilization remains huge.
In general, different types of coastline resources often have multiple uses, as shown in the length statistics for the secondary coastline categories (Figure 8). Among the natural coastlines, all types decreased, with the most significant decrease in the bedrock coastline: it shrank by 4266.65 km during 1990-2018, and its proportion fell from 53.66% in 1990 to 48.51% in 2018. The reason is that bedrock coastline has advantages in openness, shelter conditions, and basin water depth, and can often form an excellent harbor; under these conditions it was used for transportation and marine construction, with a higher port level and navigation capacity. The silt coastline, often used for tidal flat farming, decreased by 451.15 km. The mangrove coastline decreased by 259.68 km; since the increase or decrease of mangrove coastline indicates, to a certain extent, the ecological condition of a region, its reduction should be closely monitored. The sandy coastline decreased by 997.04 km; although sandy coastline offers poor conditions for port construction, it is gentle and open and can be used for fisheries and tourism. Among the artificial coastlines, all types increased. Agricultural coastline increased the most, by 3678.06 km, accounting for more than half of the total growth in artificial coastline length. Next was embankment coastline, which increased by 2776.27 km over the 28 years, its proportion rising from 10.07% in 1990 to 13.02% in 2018. The harbor and wharf coastline increased by only 227.59 km; however, as the most important transportation facility for domestic and international traffic flows, this increase is particularly important for Indonesia's exchanges with other countries.
Temporal and Spatial Dynamics of ICUD on the Island Scale
The islands of Indonesia are scattered, including Kalimantan, Sumatra, Nusa Tenggara, Maluku, Papua, Sulawesi, and Java. There are rugged mountains and hills in the interiors of the islands, with narrow plains only in the coastal areas, which are surrounded by shallow seas and corals. For example, on Kalimantan Island the mountains stretch from the middle to the west, the coastal plain is vast, and the south is swampy. In Sumatra the mountains run from northwest to southeast, with hills and a wide coastal alluvial plain, swampy in the east, on their northeastern side. Sulawesi, mostly mountainous, has narrow plains only along the coast. Java has a plain in the north and a lava plateau and mountains in the south. These different landforms give Indonesia's coastline resources obvious regional differences at the island scale.
On the whole (Figure 9), the artificial coastlines are dominated by agricultural coastline and embankment dikes, which are mostly distributed on easily developed plain coasts and on economically developed, densely populated estuaries, such as on Java Island. Most of the natural coastline is bedrock coastline, with some sandy and mangrove coastlines. The bedrock coastlines are mainly distributed in the south and northeast of Indonesia, while most mangrove coastlines are located at estuaries, such as in the sparsely populated areas of Papua. The classification results show that the island-scale ICUD in 1990 ranked, from high to low: Java > Sulawesi > Kalimantan > Nusa Tenggara > Sumatra > Maluku > Papua (Figure 10a). In 2018 the ranking was: Java > Kalimantan > Sulawesi > Sumatra > Nusa Tenggara > Maluku > Papua (Figure 10b). Since 1990, the ICUD of Sumatra, Java, Nusa Tenggara, Kalimantan, Sulawesi, and Maluku has increased, while that of Papua has remained stable. Except in Nusa Tenggara and Papua, the proportion of bedrock coastline decreased on all islands. Apart from an increase on Papua, the proportion of silt coastline decreased on all other islands. The proportion of mangrove coastline rose in Sumatra, Java, and Nusa Tenggara and declined on the remaining islands; Kalimantan's fell the most, from 40.86% in 1990 to 12.06% in 2018, a reduction of 669.11 km in length. The proportion of sandy coastline declined everywhere except on Papua. Except for a decline on Java, the proportion of agricultural coastline rose on all other islands, with the largest increase, 5.33 percentage points over the past 28 years, in Sumatra. The proportions of harbor and wharf and of embankment coastline rose on all islands; the largest increase in wharves was on Java, and the largest increase in embankments was on Kalimantan, where the length of constructed dikes increased by 415.06 km over the past 28 years.
Temporal and Spatial Dynamics of ICUD Degree at the Provincial Scale
The island-scale differences in Indonesia's coastline were analyzed above; the regional differences between provinces are now discussed at the provincial scale.
The provinces with the highest coastline development and utilization (ICUD > 250) in 1990 were Sulawesi Barat and Bali, while in 2018 they were Sulawesi Barat, Bali, Riau, Sumatera Selatan, Jawa Tengah, and Jawa Barat (Figure 11). In both phases, Sulawesi Barat and Bali were among the provinces with the highest coastline development and utilization; Sulawesi Barat's ICUD remained around 295 during 1990-2018, while Bali's increased from 255.18 in 1990 to 277.08 in 2018. The provinces whose ICUD changed by more than 50 between the two phases were Bangka-Belitung, Banten, Bengkulu, Kalimantan Selatan, Lampung, Riau, Sumatera Selatan, and Sumatera Utara. The largest change was in Sumatera Selatan Province, whose ICUD rose by 166.43, from 100 to 266.43, during 1990-2018. The areas with large changes in coastline development and utilization from 1990 to 2018 were mainly located in western Indonesia, with the largest changes on the Sumatra, Java, and Kalimantan islands (Figure 12), while the eastern regions of Nusa Tenggara, Sulawesi, Maluku, and Papua remained unchanged. These results are inseparable from the tendency of successive governments after colonial independence to "emphasize the west and neglect the east" [66]. Although Indonesia's western region is closer to Malaysia and Singapore, with frequent trade exchanges and population migration, natural resources such as agricultural, fishery, and mineral resources are extremely rich in eastern Indonesia, which lies close to the Philippines and Australia; its development potential cannot be underestimated.
In 1990, only two provinces with Grade IV coastline development were Sulawesi Barat and Bali. In 2018, four provinces, including Jawa Barat, Jawa Tengah, Riau, and Sumatera Selatan, were added. Among them, the development and utilization of the coastline in Sumatera Selatan Province increased the fastest, jumping from Grade I in 1990 to Grade IV in 2018.
Spatiotemporal Changes in the Land-Sea Pattern
We examined the changes in the land area of various provinces and regions in the country from 1990 to 2018, and further analyzed the spatiotemporal changes of the land and sea pattern of the Indonesian coastline over the past 28 years. During this period, the area of Indonesia's landward erosion was 388.09 km 2 , and the area of seaward expansion was 770.14 km 2 . These results show that the trend of Indonesia's coastline change was mainly expansion into the sea, and there was less erosion to land.
These results are shown in Figure 13. Erosion and reclamation were distributed across the 33 coastal provinces. Among them, the expansion of land area in Riau Province was the most significant, about 177.73 km², accounting for 23.08% of the country's total increase in land area. Riau Province's leading land expansion may be related to the Indonesian government's opening of a downstream oil palm industrial zone there; Riau's economy, based on agriculture and petrochemistry, is expected to strengthen the competitiveness of the national economy. Jawa Timur, Jawa Barat, Sumatera Selatan, and Sumatera Utara followed, in that order, with smaller land expansion areas; together these four provinces accounted for 31.76% of the country's total expansion area. Seawater erosion was most significant in Jawa Barat Province, about 54.17 km², accounting for 13.96% of the country's total land loss. According to the remote sensing images, seawater erosion mainly occurred at the junction of Jawa Barat and Yogyakarta, manifested mainly as the destruction of agricultural dikes and a resulting reduction in land area (Figure 14a). Land expansion in this area was mainly manifested as the seaward expansion of agriculture (Figure 14b) and the increase of coastal embankments (Figure 14c). Smaller land losses occurred, in decreasing order, in Jawa Tengah, Riau, Kalimantan Timur, Aceh, Sumatera Utara, and Kalimantan Barat. To make the overall changes in the land-sea pattern of the Indonesian region easier to see, the increased and decreased areas are displayed spatially at a scale of 10 km² (Figure 15). The spatial distribution shows that Sulawesi, Nusa Tenggara, and Papua experienced little coastal development or seawater erosion and have remained stable.
The seaward expansion of Kalimantan Island was at a medium level, while that of Sumatra and Java was significant, partly reflecting that human activities on these two islands are the most intense. Both Jawa Barat and Jawa Tengah provinces on Java showed significant land expansion and seawater erosion, which requires close attention.
Impact of Climate Change on Coastline Changes
Climate change is an inevitable natural phenomenon. Because of their proximity to the ocean, the coastal regions of the world are affected by severe natural disasters caused by global warming, such as seawater intrusion, coastline erosion, and waterlogging [67]. Statistics on coastline change provide a better understanding and measure of the direct response to sea level rise [68,69]. In low-lying areas, especially Southeast Asian countries, large numbers of people live on low-lying, fragile coastal plains and are considered at greater risk from the climate [70]. Indonesia is severely affected by both natural disasters and climate change.
On the one hand, the tsunami is the most direct and active dynamic factor shaping the coast, with significant impacts on coastal structures such as the collapse of dams. At the same time, the relative sea level rise caused by global warming has been accelerating, which will seriously aggravate coastal erosion [71][72][73][74]. The strong erosive effects of natural disasters such as tsunamis and storms are particularly evident in Banda Aceh on the northern coast of Sumatra, the city that suffered the most damage and the highest death toll in the 2004 tsunami, the most intense on record, which caused about 400 m of erosion and more than 1 km of permanent land loss [75]. In addition, relative sea level rise is a natural cause of coastline erosion that also poses a great threat to coastline security. In Semarang (Figure 16), for example, the land subsidence rate is as high as 10 cm per year, mainly caused by extensive groundwater extraction, and sea erosion has encroached about 1.5 km into the city [76]. Therefore, remote sensing monitoring of the Indonesian coastline, identifying eroded and expanded areas, is of great significance for post-disaster reconstruction, enabling the reasonable allocation of aid and resources according to the spatial distribution of climate change impacts.
Geographical Environment's Influence on Coastline Changes
The geographical setting of a coast also imposes a regular spatial pattern on coastline change. For example, during 1990-2018 the areas where the secondary coastline types changed significantly were mainly located in eastern Sumatra and northern Java (see Section 4.2), while western Sumatra and southern Java changed little. This is because hard rock coasts dominate the western and southern regions, and this geological setting is difficult to transform into other coast types.
In addition, bays and estuarine tidal flats often have high ecological and economic value and are the first choice for human settlement; as a result, these areas change frequently and their environments are degraded. Mangroves and tidal flats in bays and estuaries form calm, sheltered coastal environments that favour the rapid accumulation of fine particulate matter. Over the past 28 years, however, Indonesia's mangrove coastline has been in decline. Statistics show that Indonesia has lost 40 percent of its mangrove forests, mainly to the occupation of mangrove areas by aquaculture, which is consistent with the increasing trend of the agricultural coastline in our statistics. The reduction of mangroves in Indonesia will have a serious impact on global climate change [77]. It is therefore necessary to pay close attention to the development of Indonesia's mangrove coastline.
Social Factors of Coastline Change
The construction of maritime infrastructure, including maritime highways, deep-sea ports, shipping, and marine tourism, is a prerequisite for the interconnection of Indonesia's major islands; nevertheless, quay walls accounted for only 0.43% of the coastline in 2018. According to World Bank statistics, Indonesia's population ranks fourth in the world, after China, India, and the United States, but its distribution is very uneven. The country's 32 provinces can be divided into a western region of 17 provinces, home to 78% of the country's 217 million people, and an eastern region of 15 provinces, covering 70% of the country's area but holding only 22% of its population [66]. This is the main reason for the differences in the development and utilization of the eastern and western coastlines, and successive governments have tended to emphasize the development of the west.
Indonesia's GDP increased from US$10.614 billion in 1990 to US$104.217 billion in 2018, a more than nine-fold increase over 28 years. However, the development and utilization index of the coastline (ICUD) increased by only 16.74 over this period, indicating that GDP growth has had only a small impact on the development and utilization of the coastline (Figure 17). Indonesia's population grew by about 1.5 times during 1990-2018, and over this period the agricultural coastline grew twice as fast as the artificial coastline, indicating that population growth is the leading factor behind changes in the agricultural coastline.

(1) Jawa Barat, Jawa Timur, and Jawa Tengah were consistently the top three provinces in Indonesia by population, but their coastline development and utilization were not the highest: third-ranked Jawa Tengah had a higher coastline ICUD than the top two, Jawa Barat and Jawa Timur. Sulawesi Barat province had a population of only 1,405,000 in 2018, yet its ICUD of 295.29 was the highest recorded; Bali province was second, with a population of 4,380,800 and an ICUD of 277.08 in 2018. Thus the most populous provinces did not have high ICUD, while the relatively small populations of Sulawesi Barat and Bali corresponded to higher ICUD. This shows that the degree of coastline development and utilization in Indonesia is mainly related to the availability of low-lying, flat coastal areas for human habitation, and that the habitability of coastal geographical conditions matters more than population size in determining the ICUD of the coastline.
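The growth multiple quoted above follows directly from the figures given; a quick arithmetic check (using only the GDP values stated in the text):

```python
# Growth multiple implied by the GDP figures quoted in the text
# (US$ billion, as stated for 1990 and 2018).
gdp_1990 = 10.614
gdp_2018 = 104.217

gdp_multiple = gdp_2018 / gdp_1990
print(round(gdp_multiple, 1))  # -> 9.8, i.e. "more than 9 times"
```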
(2) Over time, the province with the largest population increase was Jawa Barat, but its ICUD increased by only 26.01. The populations of Irian Jaya Barat and Sulawesi Barat provinces increased by 821,378 and 846,349, respectively, yet the ICUD of both provinces showed a downward trend. Irian Jaya Barat is located in the northwest of Papua, where most of the mountains and plateaus lie above 4000 m in altitude, making it the highest island in the world; the difficulties posed by this natural topography are the main reason for the province's poor development. Sulawesi Barat province, in western Sulawesi, relies mainly on mining, agriculture, and fishing. Since the central part of the province is dominated by mountainous terrain, with low elevations only along the coast, its population, economy, and agriculture are mostly concentrated in coastal areas. As shown in Section 4.3, the province already had a large ICUD in 1990; the fact that its ICUD decreased rather than rose as the population grew indicates that the region's coastline development had reached its maximum.
Advantages and Disadvantages of Monitoring Coastline Change by Remote Sensing
Dynamic monitoring of coastline resources using open-source remote sensing satellite data has the advantages of low cost, historical archives, and wide coverage [78,79]. Moreover, the extraction results are highly relevant to the integrated and sustainable management of coastal areas, including planning, decision-making, management, and monitoring [80][81][82]. Identifying trends in coastline types requires long-term observation of the ocean through time-series assessments. We therefore made full use of the potential of remote sensing for dynamic monitoring, using open-source satellite images of land resources with long time-series coverage to evaluate the spatial and temporal distribution of coastlines. In the field survey, we focused on the coastline of Belawan Pier in Medan, Sumatra (Figure 19a), and the mangrove coastline along the coast of Samadalin, Kalimantan (Figure 19b). The field findings were consistent with the types extracted in this paper, illustrating the reliability of dynamic monitoring by remote sensing.
However, there are also several deficiencies. Coastline classification requires interpreters with rich prior knowledge, and the classification accuracy is limited by the resolution of the remote sensing images. At this stage, the automatic extraction and classification of large areas remains difficult and relies mainly on visual interpretation; developing remote sensing techniques for the automatic extraction and classification of coastlines should therefore be considered in the future to improve efficiency. In addition, the data and methods used in this paper have limitations: the impact of ocean currents on the coast and the damage to coastal ecosystems were not considered, and some geological problems, such as the impact of new structures on coastal subsidence, require further data and technical means to study.
Conclusions
The following conclusions were obtained through long-term remote sensing monitoring of Indonesia's coastline: (1) The overall trend of Indonesia's coastline change over the past 28 years was an increase in the total length of the coastline, comprising a decrease in natural coastline, an increase in artificial coastline, and few changes among the secondary types overall. In 1990, artificial coastlines in Indonesia were mainly distributed on the north coasts of Sumatra and Java, the west coast of Kalimantan, and Sulawesi. By 2018, artificial coastline coverage of Sumatra Island had reached 90%, and Java Island was also fully developed. The change in the land-sea pattern was dominated by the seaward advance of land, with 770.14 km² of land expanding into the sea over the past 28 years. Land expansion was most pronounced in Riau Province, while seawater erosion was most severe in Jawa Barat Province.
(2) The main constraint on the dynamic change of Indonesia's coastline is the terrain, which concentrates Indonesia's population and industry on the coastal plains. The results also confirm that, across provinces, a larger population does not necessarily correspond to a higher ICUD. The main driving factor is population growth, which has intensified human activities related to coastal engineering, including the construction of port terminals and the reclamation of land for agricultural facilities. However, the intensification of human activities has also led to the degradation of mangrove ecological coastlines, which will affect the coastal ecological environment.
(3) Remote sensing technology can rapidly monitor the history and current status of coastlines over long time series and large regions, providing objective data for rational coastal planning. Taking the dynamic changes of Indonesia's coastline as its research object, this article demonstrates the great potential of remote sensing monitoring. In the future, we will exploit the high revisit frequency and wide coverage of remote sensing to carry out larger-scale and more detailed research on coastline monitoring applications, in order to provide effective technical support for coastal area planning and management.
This study provides basic data on the spatial dynamics of island coastline changes at different scales. The conclusions can help multi-island countries respond to the effects of climate change and economic development. The spatial dynamics of coastline change provided here are important for coastal managers and planners in prioritizing actions for disaster risk reduction. By understanding which areas are likely to be affected by coastline change, local governments can modify land use plans and keep coastal residents and economic activities away from dangerous areas. In heavily eroded areas, governments can implement countermeasures, such as building flood prevention facilities and planting mangroves, to stabilize the coastline. If these data are further combined with socio-economic and physical indicators, the coastline ecosystem can be described more accurately at the micro level; this will be the focus of our future work.
Conflicts of Interest:
The authors declare no conflict of interest.
Delocalization of a disordered bosonic system by repulsive interactions
Clarifying the interplay of interactions and disorder is fundamental to the understanding of many quantum systems, including superfluid helium in porous media, granular and thin-film superconductors, and light propagating in disordered media. One central aspect for bosonic systems is the competition between disorder, which tends to localize particles, and weak repulsive interactions, which instead have a delocalizing effect. Since the required degree of independent control of the disorder and of the interactions is not easily achievable in most available physical systems, a systematic experimental investigation of this competition has so far not been possible. Here we employ an ultracold atomic Bose-Einstein condensate with tunable repulsive interactions in a quasi-periodic lattice potential to study this interplay in detail. We characterize the entire delocalization crossover through the study of the average local shape of the wavefunction, the spatial correlations, and the phase coherence. Three different regimes are identified and compared with theoretical expectations: an exponentially localized Anderson glass, the formation of locally coherent fragments, as well as a coherent, extended state. Our results illuminate the role of weak repulsive interactions on disordered bosonic systems and show that the system and the techniques we employ are promising for further investigations of disordered systems with interactions, also in the strongly correlated regime.
The interplay of disorder and interactions lies at the heart of the behaviour of many physical systems. Notable examples are the transitions to insulators observed in superconductors and metals [2][3][4][5], quantum Hall physics 22, electrical conduction in DNA 23, and light propagation in nonlinear disordered media 7,8. An important step towards their full comprehension is understanding disordered bosonic systems at zero temperature, where a competition between disorder and weak repulsive interactions is expected. Indeed, while disorder tends to localize non-interacting particles, giving rise to Anderson localization 24, weak repulsive interactions can counteract this localization in order to minimize the energy. Eventually, interactions can screen the disorder and bring the system towards a coherent, extended ground state, i.e. a Bose-Einstein condensate (BEC). In many years of research, mainly theoretical predictions have been made about the properties of the complex phases expected to appear as a result of this competition [9][10][11][12][13][14][15][16][17]. A systematic experimental study has so far not been possible, since on the one hand interactions in condensed matter systems are strong but difficult to control 1, while on the other hand in photonic systems only non-linearities corresponding to attractive interactions 7,8 have been explored in experiments. Instead, ultracold atoms in disordered optical potentials are a promising system for such investigations 15,25, and have already enabled the observation of Anderson localization for bosons in the regime of negligible interactions 26,27. Using one of these systems in a disordered lattice, we characterize the whole crossover from the regime of disorder-induced localization to that of Bose-Einstein condensation by tuning repulsive interactions in a controlled manner. The simultaneous measurement of localization properties, spatial correlations and phase coherence properties, and the comparison with the predictions of a theoretical model, allow us to identify the different regimes of this delocalization crossover.

Figure 1 | Cartoon of the interaction-induced delocalization. a, In a very weakly interacting system with sufficiently large disorder, the eigenstates are exponentially localized, and several of the lowest energy states, an average of 4.4 lattice sites apart, are populated (Anderson glass). b, The energies of different states can become degenerate due to repulsive interactions and their shape might be modified, giving rise to the formation of locally coherent fragments (fragmented BEC), though global phase coherence is not restored until c, the entire system forms a coherent, extended state (BEC) at large interaction strengths.
The system employed consists of a three-dimensional degenerate Bose gas of 39 K in a one-dimensional quasi-periodic potential, which is generated by perturbing a strong primary optical lattice of periodicity d = π/k_1 with a weak secondary lattice of incommensurate periodicity π/k_2 (k = 2π/λ, where λ is the wavelength of the light generating the lattice). The corresponding Hamiltonian is characterized by the site-to-site tunnelling energy J of the primary lattice, which is kept fixed in the experiment, and the disorder strength ∆. The interatomic interactions can be controlled by changing the atomic s-wave scattering length a by means of a Feshbach resonance 28, which in turn determines the mean interaction energy per particle E int (see Methods).
In the case of non-interacting atoms, such a system is a realization of the Aubry-André model 29, which shows an Anderson-like localization transition at a finite value of the disorder, ∆/J = 2. Above the transition, the non-interacting eigenstates of the potential are exponentially localized due to the quasi-periodic perturbation of the lattice on-site energies, and the energy spectrum is split into "minibands" 13,30. The localization properties in this case were studied experimentally in detail in ref. 27, where it was seen that several low-lying eigenstates, separated on average by d/(β − 1) ≈ 4.4d, where β = k_2/k_1, are typically populated in the experiment. Adding weak interactions, the different regimes that appear as a result of the interplay of disorder and interactions can be explored. For very weak repulsive interactions, the occupation of several eigenstates in the lowest miniband is favoured (Fig. 1a). This regime, in which several exponentially localized states coexist without phase coherence, is often identified with an Anderson glass 11,15 (AG). As E int is increased, coherent fragments that extend over more than one well of the quasi-periodic potential are expected to form (Fig. 1b). In this case, global phase coherence would not yet be restored, and the local shape of the states might be modified. Some authors have called this regime a 'fragmented BEC' 12 (fBEC). Finally, for large enough E int a single, extended phase-coherent state is expected to be formed, i.e. a macroscopic BEC (Fig. 1c). The system is prepared by first loading an interacting condensate adiabatically from the ground state of a harmonic trap into the quasi-periodic lattice. The interaction energy is then slowly changed to its final value E int, while the confining potential is reduced. This process is adiabatic for most of the parameter range explored, until E int becomes sufficiently low for the system to enter the fully localized regime. Here, several independent low-lying excited states are populated even when it would be energetically favourable to populate just the ground state. This loss of adiabaticity is seen experimentally as a transfer of energy into the radial direction (see Supplementary Information).

Figure 2 (caption, fragment) | The root-mean-squared width of the momentum distributions and the exponent extracted from a fit (red and blue lines) to the Fourier transform give the localization properties. The coherence properties are extracted by measuring the fluctuations of the phase of the interference pattern in the momentum distribution, or from the relative height of the two states 4.4d apart, which can be related to the spatially averaged correlation function g(4.4d).
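The Aubry-André model referenced above can be written in tight-binding form as H = −J Σ_n(|n⟩⟨n+1| + h.c.) + ∆ Σ_n cos(2πβn + φ)|n⟩⟨n|. As an illustration (our own numerical sketch, not the authors' code), the localization transition at ∆/J = 2 shows up in the inverse participation ratio of the ground state, IPR = Σ_n |ψ_n|⁴, which is of order 1/N for an extended state and of order one for a localized one:

```python
import numpy as np

def aubry_andre_ipr(delta, J=1.0, n_sites=233, beta=1.2282, phase=0.0):
    """Ground-state inverse participation ratio of the Aubry-Andre model.

    IPR = sum_n |psi_n|^4: ~1/n_sites for an extended state,
    of order one for an exponentially localized one.
    """
    n = np.arange(n_sites)
    h = np.diag(delta * np.cos(2 * np.pi * beta * n + phase))
    h -= J * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    _, vecs = np.linalg.eigh(h)
    psi0 = vecs[:, 0]                     # ground state
    return float(np.sum(psi0 ** 4))

print(aubry_andre_ipr(delta=1.0))   # Delta/J = 1 < 2: extended, IPR ~ 1/233
print(aubry_andre_ipr(delta=4.0))   # Delta/J = 4 > 2: localized, IPR of order one
```

Below the critical disorder the ground state spreads over the whole chain, while above it the state collapses onto a few sites, in line with the transition at ∆/J = 2.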
The system is characterized in detail by analyzing its momentum distribution, which is recovered by taking an image of the condensate after a long ballistic expansion without interactions (see Methods). From the momentum distribution and derived Fourier transforms, of which we show examples in Fig. 2, we extract the local shape of the wavefunction, spatial correlations, and phase coherence properties for different values of ∆/J and E int /J. The system can be approximately described as the superposition of states with the same envelope separated by 4.4d. First, the mean extension of individual states can be quantified by measuring the root-mean-squared width of the momentum distribution (Fig. 2a-b). A smaller (larger) width indicates a more extended (localized) state. Next, the mean local shape of the wavefunction on a length scale of 4.4d is extracted from the Fourier transform of the square root of the momentum distribution. From a fit to a generalized exponential function, the localization exponent α is recovered (see also Methods), as shown in Fig. 2c-d. The measured momentum width and exponent are shown in Fig. 3. We find that for very small E int, the states are exponentially localized, since α ≈ 1, and the momentum width is large, consistent with the Anderson glass regime. Increasing E int, the width decreases while the exponent increases up to α ≈ 2. Repulsive interactions therefore delocalize the system as expected; equivalently, the localization transition is shifted to higher values of the disorder strength ∆/J when interactions are introduced into the system. The position of the delocalization crossover is in good agreement with the expectations of a simple screening argument 14: the increasing interaction energy serves to smooth over the disordering potential in the occupied sites, providing a flatter energetic landscape on which more extended states can form.
The centre of the crossover is therefore expected to occur when E int is comparable to the standard deviation of energies in the lowest miniband of the non-interacting spectrum, 0.05∆ (white line in Fig. 3, see also Supplementary Information).
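The generalized exponential fit used to recover the localization exponent α can be sketched as follows (synthetic single-envelope data for illustration; the experimental fit uses the sum of two such functions spaced 4.4d apart, see Methods):

```python
import numpy as np
from scipy.optimize import curve_fit

def envelope(x, amp, width, alpha):
    """Generalized exponential: alpha = 1 is exponential, alpha = 2 Gaussian."""
    return amp * np.exp(-np.abs(x / width) ** alpha)

# Synthetic exponentially localized profile (alpha = 1, width L = 3).
x = np.linspace(-20.0, 20.0, 401)
profile = envelope(x, 1.0, 3.0, 1.0)

popt, _ = curve_fit(envelope, x, profile, p0=(0.8, 2.0, 1.5))
amp_fit, width_fit, alpha_fit = popt
print(alpha_fit)  # close to 1, identifying exponential localization
```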
The correlation properties of neighbouring states can be extracted from a Fourier transform of the momentum distribution itself, which gives the spatially averaged first order correlation function g(x) (see Methods). In Fig. 4a-b, g(x) at x = 4.4d is shown for both the experiment and a ground-state theory that we have developed, with generally good agreement. In the localized regime, the correlation is exactly zero in the theory, since no neighbouring states are occupied. In contrast, the correlation is finite in the experiment due to the occupation of neighbouring localized states arising from the non-adiabatic loading, but is small since the states are independent. As E int is increased, the correlation features a crossover towards larger values, signalling that coherence is progressively established locally over distances of at least 4.4d. The shape of the crossover in the experiment is again in qualitative agreement with the theory. Finally, information about the phase coherence of neighbouring states can be obtained by measuring the phase φ of the interference pattern in the momentum distribution for repeated runs of the experiment with the same parameters (see Methods for details). If the states are not phase locked, φ changes almost randomly at each repetition of the experimental sequence. In Fig. 4d we show the standard deviation of φ, estimated from a large number of repetitions of the experiment, for fixed ∆/J = 12. We see a slight decrease of the phase fluctuations with increasing E int, which nevertheless remain relatively large in the crossover region where the correlation increases (Fig. 4c). The fluctuations finally drop to the background value only when E int is comparable to the full width of the lowest miniband of the non-interacting spectrum, 0.17∆. These observations confirm that in the localized regime the states are totally independent, which together with the localization properties (Fig. 3) indicates that the system can indeed be described as an Anderson glass 11,15. The system crosses a large region of only partial coherence while becoming progressively less localized as E int is increased. This is consistent with the formation of locally coherent fragments expected for a fBEC. An analogous fragmentation behaviour was reported in ref. 31. Finally, the features of a single extended, coherent state are seen, i.e. a BEC.
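The logic of this correlation measurement can be illustrated with a toy model (our own sketch, arbitrary units): by the Wiener-Khinchin theorem, the inverse Fourier transform of the momentum distribution gives the spatially averaged correlation function g(x), whose value at the state spacing distinguishes phase-locked from independent neighbouring states.

```python
import numpy as np

x = np.arange(-50.0, 50.0, 0.1)
dx = 0.1
s = 4.4                                            # spacing of neighbouring states
state = lambda x0: np.exp(-(x - x0) ** 2 / 2.0)    # toy Gaussian "state"

# Phase-locked: momentum distribution of the coherent superposition.
rho_coh = np.abs(np.fft.fft(state(0.0) + state(s))) ** 2
# Independent: incoherent sum of the two single-state distributions.
rho_inc = np.abs(np.fft.fft(state(0.0))) ** 2 + np.abs(np.fft.fft(state(s))) ** 2

# Wiener-Khinchin: inverse FT of rho(k) is the averaged correlation g(x).
g_coh = np.abs(np.fft.ifft(rho_coh))
g_inc = np.abs(np.fft.ifft(rho_inc))

lag = int(round(s / dx))                           # index corresponding to x = s
print(g_coh[lag] / g_coh[0])   # sizeable: coherence over distance s
print(g_inc[lag] / g_inc[0])   # near zero: independent states
```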
In the mean-field theory, boundaries between the different phases expected for the system can be defined (see Methods). In particular, the transition from the Anderson glass phase to a fragmented BEC (white line in Fig. 4b) occurs when g(4.4d) starts to increase. Similarly, the orange line in Fig. 4b shows where the fragments are locked together in phase to form a single macroscopic condensate for very large interactions. The generally good agreement between the experimental observables and theory indicates that our system is well described by the mean-field theory for most of the parameter space explored experimentally.
In conclusion, we have provided the first experimental characterization of the localization, correlation and coherence properties of the various regimes due to the competition of disorder and weak repulsive interactions in a bosonic system. Other aspects of the delocalization crossover worth further study are, e.g. the detailed properties of the ground state of the AG regime, which was not possible to study in the present set-up, and the presence of a superfluid-insulator transition at the BEC-fBEC boundary analogous to the one observed in superconductors 2 . Regarding the latter, in transport experiments analogous to the ones described in ref. 27 we have been able to verify that the AG and fBEC regimes are not inconsistent with being insulating, as is the case in the regime of vanishing E int (see Supplementary Information). Finally, it would be appealing to employ the present system and the correlation analysis introduced here to explore the regime of strong correlations, E int ≫ J, which could be reached by using a quasi-1D system with strong radial confinement. There, another elusive insulating phase due to the cooperation of disorder and interactions, the so called Bose-glass phase, is expected to appear, although there is debate on the exact shape of the phase diagram 11,15,18,19 .
Condensate with tunable interactions.
A 39 K condensate of about N = 20,000 atoms with an s-wave scattering length of 250 a_0, where a_0 = 52.9 pm is the Bohr radius, is prepared in a harmonic optical trap. The condensate is loaded into the quasi-periodic potential while the optical trap is decompressed in about 250 ms to reduce the harmonic confinement, and a gravity-compensating magnetic field gradient is added. At the same time, the scattering length a is changed by means of a broad Feshbach resonance to values ranging from a ≤ 0.1 a_0 to about a = 300 a_0 (ref. 28). Quasi-periodic potential. The quasi-periodic potential is created by two vertically oriented laser beams in standing-wave configuration. The primary lattice is generated by a Nd:YAG laser with a wavelength of λ_1 = 1064.4 nm and has a strength of s_1 = V_1/E_R,1 = 10.5 (corresponding to J/h = 79 Hz), as measured in units of the recoil energy E_R,1 = h²/(2Mλ_1²). The secondary lattice is generated by a Ti:Sapphire laser of wavelength λ_2 = 866.6 nm, with a strength adjustable up to s_2 = V_2/E_R,2 = 1.7. Both beams are focussed onto the condensate with a beam waist of about 150 µm. The lattice lasers give a harmonic confinement of ω_⊥ = 2π × 50 Hz in the radial direction. In the vertical (axial) direction, a weak confinement of 5 Hz is provided by a weak optical trap as well as by a curvature from the gravity-compensating magnetic field. Energy scales. In the tight-binding limit, the hopping energy J and the disorder strength ∆ can be estimated as J ≈ 1.43 s_1^0.98 exp(−2.07 √s_1) E_R,1 and ∆ ≈ 0.5 s_2 β² × 1.0264 exp(−2.3624/s_1^0.59) E_R,1 (see Supplementary Information). The experimental uncertainty on ∆/J is around 15%. We estimate that around 30 lattice sites, corresponding to about 7 localized states, are populated during the loading of the lattice. We then define a mean interaction energy per particle E int = (gN/7) ∫ |ϕ(r)|⁴ d³r, where g = 4πℏ²a/m and ϕ(r) is a Gaussian approximation to the on-site Wannier function.
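The quoted J/h = 79 Hz can be cross-checked from s_1 = 10.5 and λ_1. A quick sketch using the tight-binding fit J ≈ 1.43 s^0.98 exp(−2.07√s) E_R, a standard formula from the optical-lattice literature (the formula and the rounded 39K mass are our assumptions here, not spelled out in this section):

```python
import math

h = 6.62607e-34        # Planck constant (J s)
u = 1.66054e-27        # atomic mass unit (kg)
m = 39 * u             # mass of 39K (approximate)
lam1 = 1064.4e-9       # primary lattice wavelength (m)
s1 = 10.5              # primary lattice depth in units of E_R,1

er1_hz = h / (2 * m * lam1 ** 2)   # recoil frequency E_R,1 / h, ~4.5 kHz
j_hz = 1.43 * s1 ** 0.98 * math.exp(-2.07 * math.sqrt(s1)) * er1_hz
print(round(j_hz))     # -> 79, matching the quoted J/h = 79 Hz
```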
We include coupling into the radial directions of our system, with the consequence that the interaction energy is non-linear in the scattering length. Though this definition of the energy is strictly valid only in the localized regime, comparison with a numerical simulation of our experimental procedure has shown that it is a good approximation for all values of the scattering length up to an error of 30%. Note that the potential energy from the residual harmonic confinement is approximately 3 × 10 −3 J over a distance 4.4d.
Momentum distribution analysis. The images of the momentum distribution are taken by absorption imaging with a CCD camera after 36.5 ms of ballistic expansion. At the time of release, the scattering length is set to below 1 a_0 in less than one ms and kept there until the Feshbach magnetic field is switched off 10 ms before taking the image; at this point, the system has expanded enough to minimize the effect of interactions. For such a free expansion, the image approximates the in-trap momentum distribution ρ(k) = ⟨Ψ†(k)Ψ(k)⟩ (ref. 21). The acquired images are integrated along the radial direction to obtain a profile. In momentum space, the width of the central peak is calculated by taking the root-mean-square width within the first Brillouin zone. Due to the quasi-periodic lattice potential, for a sufficiently homogeneous system the in-trap wavefunction can be decomposed into copies of a single state with real and non-negative envelope ξ(x) ∼ exp(−|x/L|^α), spaced by 4.4d. Therefore, in momentum space ρ(k) = |ξ̃(k)|² S(k), where ξ̃(k) is the Fourier transform of ξ(x) and S(k) is an interference term, so that ξ(x) can be extracted from a Fourier transform of the square root of ρ(k) (see also Supplementary Information). We fit the result to the sum of two generalized exponential functions, A_j exp(−|(x − x_c)/L|^α), where x_c denotes the centres of each of these functions, spaced by 4.4d. From this fit, the exponent α is recovered. In addition, from the Wiener-Khinchin theorem, the momentum distribution can be expressed in terms of the first-order correlation function G(x′, x + x′) = ⟨Ψ†(x′)Ψ(x + x′)⟩. By taking the Fourier transform of the momentum distribution itself, we can therefore recover the spatially averaged correlation function g(x) = ∫ G(x′, x + x′) dx′. With the same fitting function as above, we evaluate the spatially averaged correlation between two states 4.4 lattice sites apart as the amplitude ratio A_2/A_1. Experimentally, the correlation function saturates at a value around 0.5 due to the finite momentum resolution.
The fluctuations in phase between neighbouring states are seen as a fluctuation of the phase φ of the interference pattern of the momentum distribution, which is directly extracted from a fit (see also Supplementary Information). The 2D graphs in Figs 3 and 4 were generated by linearly interpolating a total of 130 averaged datapoints at 9 different values of disorder, changing the interactions. Typical experimental scatter and statistical errors are seen in Fig. 4c. Theory of the ground state. The theoretical calculations presented in the paper rely on a mean-field approach similar to the one of ref. 32. This is an effective one-dimensional model which partially includes the radial-to-axial coupling, and is known to provide an accurate description in the two limiting cases of Anderson localization and BEC. The boundaries between the different regimes shown in Fig. 4b are obtained by analyzing the correlation function g(x) and the density distribution. In the theory we define the Anderson glass phase as the one in which the correlation g(4.4d) is zero. To enter the fragmented BEC (fBEC) phase, we require g(4.4d) > 0, which implies that coherent fragments composed of adjacent localized states can start to form. For increasing E int the extension of the fragments increases, until most of the system remains in a single component, which corresponds to a macroscopic BEC. To define the boundary between the fBEC and the BEC regimes, we first identify as fragments the parts of the system separated by low-density regions for which an applied relative phase twist does not affect the energy of the system. When one single macroscopic fragment forms, we assume the system to be in the BEC regime. A more detailed description of the theoretical methods can be found in the Supplementary Information.
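The delocalizing role of repulsion in such a mean-field description can be reproduced with a minimal discrete Gross-Pitaevskii sketch (a one-dimensional toy model with illustrative parameters, not the effective model of ref. 32): relaxing the Aubry-André lattice in imaginary time with an on-site nonlinearity g|ψ|², starting from the non-interacting localized ground state.

```python
import numpy as np

def aa_hamiltonian(n_sites=100, J=1.0, delta=4.0, beta=1.2282):
    """Tight-binding Aubry-Andre Hamiltonian (illustrative parameters)."""
    n = np.arange(n_sites)
    h = np.diag(delta * np.cos(2 * np.pi * beta * n))
    h -= J * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    return h

def gp_relax(g, steps=40000, dt=0.005):
    """Imaginary-time (gradient) relaxation of the 1D discrete GPE,
    started from the non-interacting, localized ground state."""
    h0 = aa_hamiltonian()
    _, vecs = np.linalg.eigh(h0)
    psi = np.abs(vecs[:, 0])               # g = 0 ground state, made non-negative
    for _ in range(steps):
        psi = psi - dt * (h0 @ psi + g * psi ** 3)
        psi /= np.linalg.norm(psi)         # keep unit norm
    return psi

ipr = lambda p: float(np.sum(p ** 4))      # inverse participation ratio

psi_free = gp_relax(g=0.0, steps=0)        # Anderson-localized ground state
psi_int = gp_relax(g=50.0)                 # strong repulsion spreads the state
print(ipr(psi_free), ipr(psi_int))
```

With g = 0 the state stays pinned to a few sites (large IPR); switching on a strong repulsion spreads it over many wells of the quasi-periodic potential, mirroring the delocalization crossover described above.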
Supplementary Information
Energy spectrum
The potential of our system can be written in general as

V(x, r⊥) = s₁E_{R,1} sin²(k₁x) + s₂E_{R,2} sin²(k₂x) + V_ext(x, r⊥),   (S1)

where E_{R,i} = ℏ²k_i²/(2M) = h²/(2Mλ_i²) is the recoil energy for the lattice with wavelength λ_i = 2π/k_i, and s_i = V_i/E_{R,i} is the height of lattice i in units of E_{R,i}. Each of the two lattices is λ_i/2-periodic. Any external confining potential is given by V_ext(x, r⊥). The lattice spacing of such a potential is to good approximation d = λ₁/2. If the ratio β = k₂/k₁ is an irrational number, eq. S1 describes a quasi-periodic potential. In our case, λ₁ = 1064.4 nm and λ₂ = 866.6 nm, giving β ≈ 1.228.
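The incommensurability ratio and the resulting beating period quoted above can be checked directly from the stated wavelengths; a minimal sketch using only the values from the text:

```python
# Verify the incommensurability ratio beta = k2/k1 = lambda1/lambda2 and the
# resulting beating period 1/(beta - 1) of the bichromatic lattice, using the
# wavelengths quoted in the text.
lam1 = 1064.4  # primary lattice wavelength (nm)
lam2 = 866.6   # secondary lattice wavelength (nm)

beta = lam1 / lam2            # incommensurability ratio
spacing = 1.0 / (beta - 1.0)  # beating period in units of the lattice spacing d
print(round(beta, 3), round(spacing, 1))  # → 1.228 4.4
```

The beating period of about 4.4 sites is exactly the spacing of the characteristic wells discussed below.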
The essential features of such a potential are visible in Fig. S1. The potential energy minima of the primary lattice are modulated by the second one, giving rise to characteristic wells separated on average by 1/(β − 1) ≈ 4.4 lattice sites. The energy scales that characterize the corresponding Hamiltonian are the tunnelling energy J of the primary lattice and the disorder energy ∆ = 0.5 s₂β² · 1.0264 exp(−2.3624/s₁^0.59). Neglecting the external confining potential, the spectrum of such a quasi-periodic potential can easily be calculated and is shown in Fig. S2 for various values of the disorder strength ∆/J. A striking feature is the appearance of minigaps in the spectrum, the lowest of which has approximately the same width for all values of ∆/J. A minigap appears when the potential has two neighbouring lattice sites with almost the same minimum potential energy. Locally, the potential then looks like a double well, for which the two lowest-lying eigenstates have an energy splitting of 2J. In fact, the width of the lowest minigap is approximately 2J throughout the range of ∆/J shown. The lowest "miniband" of energies corresponds to the lowest-energy eigenstates localized in the potential wells 4.4d apart. Since in the experiment only the states in the first "miniband" are populated, we restrict our analysis to these energies and find that their standard deviation is approximately 0.05∆, while the extension of this band is approximately 0.17∆. The effect of a confining potential on the spectrum has been analysed previously in ref.
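The non-uniform gap structure of the lowest band can be reproduced numerically. The sketch below (with illustrative lattice depths s₁ and s₂, not the experiment's values) diagonalizes the dimensionless Schrödinger problem for a bichromatic potential by finite differences and inspects the gap spectrum within the lowest band:

```python
import numpy as np

# Numerical sketch of the quasi-periodic spectrum (illustrative parameters).
# We solve H/E_R1 = -d^2/du^2 + s1*sin^2(u) + s2*sin^2(beta*u), with u = k1*x,
# by finite differences with hard-wall boundaries.
def lowest_band(s1=10.0, s2=0.5, beta=1.2283, nsites=50, pts=16):
    n = nsites * pts
    u = np.linspace(0.0, nsites * np.pi, n)   # each primary site spans pi in u
    du = u[1] - u[0]
    V = s1 * np.sin(u) ** 2 + s2 * np.sin(beta * u) ** 2
    H = (np.diag(V + 2.0 / du ** 2)
         - np.diag(np.ones(n - 1) / du ** 2, k=1)
         - np.diag(np.ones(n - 1) / du ** 2, k=-1))
    return np.linalg.eigvalsh(H)[:nsites]     # one lowest-band state per site

E = lowest_band()
gaps = np.diff(E)
# With disorder on, the gap spectrum inside the lowest band is strongly
# non-uniform ("minigaps"), unlike the nearly even spacing at s2 = 0.
print(float(gaps.max()), float(np.median(gaps)))
```

For s₂ = 0 the same routine returns an almost uniformly spaced band, which makes the minigap structure easy to see by comparison.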
Momentum distribution analysis
Fourier transform techniques are used to extract information both about the local shape of the wavefunction and about the coherence properties of neighbouring states. After a long free expansion without interactions, the image of the atoms that is acquired is approximately the in-trap momentum distribution ρ(k) = ⟨Ψ̂†(k)Ψ̂(k)⟩, where Ψ̂(k) is the Fourier transform of the bosonic field operator Ψ̂(x). In order to recover information about the in-trap wavefunction, we can therefore use an inverse Fourier transform. Due to the quasi-periodic nature of the employed lattice potential, we expect that for a sufficiently homogeneous system, the in-trap wavefunction can be decomposed into copies of the same state with real and non-negative envelope ξ(x), spaced by D = 4.4d. The overall wavefunction can therefore be approximated as

Ψ(x) ≈ Σ_j a_j e^{iφ_j} ξ(x − jD),

where φ_j is the local phase, and a useful example of ξ(x) is a generalized exponential function exp(−|x/L|^α). In momentum space, the magnitude of the overall wavefunction can then be written as √ρ(k) = |ξ̃(k)| S(k), where

S(k) = |Σ_j a_j e^{i(φ_j − kjD)}|

is an interference term. For many envelope functions ξ(x), such as the generalized exponentials with 0 < α ≤ 2, the Fourier transform ξ̃(k) itself is real and non-negative (ref. 3), so that the inverse Fourier transform of √ρ(k) can be written as ξ(x) ∗ S̃(x). This is simply the convolution of the envelope of a single state ξ(x) with the Fourier transform of the interference term, S̃(x), which can be approximately described as a series of sharp peaks (approaching δ-distributions) spaced by D, with decreasing amplitude and phases that depend on the local phases φ_j and amplitudes a_j.
The inverse Fourier transform of the square root of the momentum distribution, √ρ(k), therefore gives the average local shape of the wavefunction ξ(x). Due to our finite resolution in momentum space (about k₁/20), we are only able to easily resolve two neighbouring states. The averaged wavefunction is analysed by fitting to the sum of two generalized exponential functions modulated by the primary lattice,

f(x) = Σ_{i=1,2} A_i exp(−|(x − x_i)/L|^α) cos²(k₁x),   (S6)

where x₁ = 0 and x₂ = 4.4d (see Fig. S3 for examples). From such a fit, the exponent α can be extracted. The local size L is shown in Fig. 3a of the main text.
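The envelope-extraction procedure can be illustrated on synthetic data. The sketch below (all values illustrative; the primary-lattice modulation is omitted for simplicity) builds two generalized-exponential states spaced by D = 4.4d, forms ρ(k), and recovers the exponent α from the inverse Fourier transform of √ρ(k):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic check of the envelope-extraction procedure.  Two copies of a
# generalized exponential xi(x) = exp(-|x/L|^alpha), spaced by D, with an
# arbitrary relative phase; the central peak of |FT^-1 sqrt(rho(k))| should
# reproduce the single-state envelope.
D, L_true, alpha_true = 4.4, 2.0, 1.0
x = np.linspace(-40.0, 40.0, 4096)
dx = x[1] - x[0]
xi = lambda y: np.exp(-np.abs(y / L_true) ** alpha_true)
psi = xi(x) + 0.6 * np.exp(0.8j) * xi(x - D)      # arbitrary relative phase

rho_k = np.abs(np.fft.fft(psi)) ** 2
avg = np.fft.fftshift(np.abs(np.fft.ifft(np.sqrt(rho_k))))
avg /= avg.max()
xs = (np.arange(x.size) - x.size // 2) * dx

# Fit a single generalized exponential to the central peak (|x| < D/2):
sel = np.abs(xs) < D / 2
model = lambda y, A, L, a: A * np.exp(-np.abs(y / L) ** a)
(A_fit, L_fit, a_fit), _ = curve_fit(model, xs[sel], avg[sel], p0=[1.0, 1.5, 1.5])
print(round(a_fit, 2), round(L_fit, 2))
```

The recovered exponent is close to the input value; tails of the neighbouring-state peaks bias the fit slightly, mimicking the finite-resolution effects described in the text.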
On the other hand, the inverse Fourier transform of the momentum distribution itself can be employed to find the correlation properties of neighbouring states. Using the Wiener-Khinchin theorem, the momentum distribution ρ(k) can be related to the first-order correlation function G(x′, x + x′) = ⟨Ψ̂†(x′)Ψ̂(x + x′)⟩. By taking the Fourier transform of the momentum distribution, we can therefore recover the spatially averaged correlation function g(x) = ∫ G(x′, x + x′) dx′. We fit with the same generalized exponential of Eq. S6 and recover the spatially averaged correlation between two states 4.4 lattice sites apart as A₂/A₁. Also here, the finite momentum resolution limits our analysis to two neighbouring sites, and it follows that the correlation function saturates at a value around 0.5.
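A toy demonstration of this Wiener-Khinchin analysis: Fourier-transforming the momentum distribution yields g(x), whose side-peak to central-peak ratio A₂/A₁ probes coherence between states 4.4 sites apart. In the sketch below (illustrative envelopes), averaging ρ(k) over shots with random relative phases mimics phase fluctuations and suppresses A₂/A₁, while a fixed phase leaves it near 0.5:

```python
import numpy as np

# Side-peak to central-peak ratio A2/A1 of g(x) = |FT^-1 rho(k)|, for two
# envelope states spaced by D.  Averaging rho(k) over "shots" with random
# relative phases plays the role of phase fluctuations between fragments.
D = 4.4
x = np.linspace(-40.0, 40.0, 4096)
dx = x[1] - x[0]
xi = lambda y: np.exp(-np.abs(y))                  # envelope with L = 1

def a2_over_a1(phases):
    rho = np.zeros(x.size)
    for ph in phases:                              # accumulate shot-averaged rho(k)
        psi = xi(x) + np.exp(1j * ph) * xi(x - D)
        rho += np.abs(np.fft.fft(psi)) ** 2
    g = np.fft.fftshift(np.abs(np.fft.ifft(rho)))
    xs = (np.arange(x.size) - x.size // 2) * dx
    A1 = g[np.abs(xs) < 0.5].max()                 # central peak
    A2 = g[np.abs(xs - D) < 0.5].max()             # side peak at x = D
    return A2 / A1

rng = np.random.default_rng(0)
coherent = a2_over_a1([0.0])                       # fixed relative phase
incoherent = a2_over_a1(rng.uniform(0, 2 * np.pi, 500))
print(round(coherent, 2), round(incoherent, 2))
```

The coherent case saturates near 0.5 because equal-weight copies contribute half the autocorrelation weight to the side peak, matching the experimental saturation value quoted above.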
The effect of a fluctuating phase between neighbouring states is seen as a shift of the phase φ of the interference in the momentum distribution. We extract this phase by fitting the momentum distribution directly with a fitting function of the form

ρ_fit(k) = A exp(−|(k − k_C)/W|^α) [1 + B cos((k − k_C)D + φ)],

where k_C is the center of the distribution, determined by fitting the average of all images of a given dataset.

Adiabaticity
As discussed in the paper, in the experiment the system is prepared by first loading a strongly interacting condensate (a = 250 a₀) into the quasi-periodic potential, in the presence of a tight axial harmonic confinement (60 Hz). The scattering length is then reduced to its final value with an exponential ramp lasting 250 ms (τ ≈ 25 ms), while the harmonic confinement is linearly reduced to a much lower value (5 Hz). We check the degree of adiabaticity of this procedure by monitoring the evolution of the radial degrees of freedom. From the radial profiles extracted from the absorption images, we can measure the radial temperature of the system. We find that there is always either a quasi-pure condensate or condensed and thermal components. In the latter case, the temperature can be directly measured from the thermal component; otherwise, only an upper limit (≈ 20 nK) can be given. The condensation temperature T_C varies across the parameter range explored, but we estimate (ref. 4) that it is on the order of 60-100 nK.
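Returning to the phase extraction above, a sketch of the fit on synthetic data. The paper's exact fitting function is not restated here; the generalized-exponential-times-fringe model below is an assumption built only from the quantities named in the text (center k_C, spacing D, phase φ):

```python
import numpy as np
from scipy.optimize import curve_fit

# Extract the interference phase phi from a synthetic momentum profile.
# Model (assumed): generalized-exponential envelope times a fringe term;
# k_C is taken as 0 here, as if already fixed from the dataset average.
D = 4.4
model = lambda k, A, W, a, B, phi: (
    A * np.exp(-np.abs(k / W) ** a) * (1.0 + B * np.cos(k * D + phi)))

k = np.linspace(-5.0, 5.0, 801)
true = (1.0, 1.2, 1.5, 0.8, 0.7)            # A, W, alpha, B, phi
data = model(k, *true)                      # noiseless synthetic profile

p0 = (0.8, 1.0, 1.2, 0.5, 0.0)              # deliberately offset initial guess
popt, _ = curve_fit(model, k, data, p0=p0)
phi_fit = popt[-1]
print(round(phi_fit, 3))
```

On noiseless data the fit recovers the input phase; on real images the scatter of φ across shots is what quantifies the phase fluctuations discussed above.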
We interpret the radial excitation as an interaction-mediated transfer of excitation energy from the axial to the radial degrees of freedom. What presumably happens in the system is the following: as the tunnelling time between the states separated by 4.4d becomes longer than the experimental timescale for decreasing the interaction strength, the population of these states cannot move adiabatically to the absolute ground state of the system. The states are therefore left with an excess population, hence an excess interaction energy that spatially broadens them by populating even more excited states in the neighbouring sites of the primary lattice. These excited states have an energy on the order of the mean separation of energies of neighbouring lattice sites, 0.8∆, and can decay to the lowest state in the potential well on a timescale of ∆⁻¹ by transferring their excess energy to the radial directions.
The observation that this process takes place only near the onset of the Anderson glass regime is in agreement with the expectation of suppressed tunnelling between the states 4.4d apart in the exponentially localized regime. A radial excitation therefore shows that there are axial excitations, e.g. due to a lack of adiabaticity during the loading procedure. On the other hand, a lack of radial excitations is not strictly a proof that there are no axial excitations, since the radial heating is caused by local axial excitations, i.e. arising from neighbouring lattice sites, while there could be axial excitations over longer distances that are not able to release energy by relaxing into the true ground state.

Ground state phase diagram
For the calculation of the theoretical profiles and ground state phase diagram presented in this paper we have used the following mean-field approach. In the limiting cases of a single Anderson localized state (AL) or of a coherent condensate (BEC), the system can be described by a single wavefunction ψ(x, r⊥) that is a solution of the Gross-Pitaevskii equation in the BEC regime, or of the Schrödinger equation in the AL regime. In the latter case the problem is separable, and the wavefunction can be factorized as ψ(x, r⊥) = ϕ(x)φ(r⊥). In the intermediate regime of a glassy phase made of several independent fragments, the atomic distribution that minimizes the energy can be accounted for by considering a wavefunction of the form ψ(x, r⊥) ≈ Σ_i ϕ_i(x)φ_i(r⊥) (similar to the approach used by Lugan et al., ref. 5). An approximate way to describe the overall behaviour of the system in the crossover between different regimes is to consider an effective wavefunction ψ(x, r⊥) = ϕ(x)φ(r⊥, σ(x)), where the radial component is a Gaussian with an x-dependent width, and to minimize the corresponding energy functional (eq. S8) with V(x) given by eq. S1, which corresponds to solving the non-polynomial Schrödinger equation of ref. 6.
This approach has the advantage of capturing the modification of the axial density distribution in the crossover from the localized to the extended regime, and we use it to evaluate the spatial and momentum distributions of the ground state of the system (Fig. S6). In particular, from the calculated momentum distributions we extract the correlation function g(4.4d) shown in Fig. 4b, as we do for the experimental data. The boundaries between different regimes in Fig. 4b have been derived by studying both g(4.4d) and the spatial distributions. For example, the system is expected to pass from the Anderson glass (AG) to the fragmented BEC (fBEC) when neighbouring localized states start to be macroscopically occupied and have a relative phase coherence. On the one hand, a non-zero value of g(4.4d) indicates the occupation of neighbouring states. On the other hand, we can check for the coherence properties of such states by studying the effect of introducing a phase twist over an extension d between them. If the two states are independent, the phase twist will not modify the energy of the system, while the contrary happens if they are coherent. In practice, we evaluate the energy cost per particle, δE_k, for introducing a 2π phase twist at the lattice site k where the two states connect. If δE_k is less than the change in the energy per particle associated with the removal of one atom from the system, E_int/N, where E_int is the last term in eq. S8, then the two states are considered independent. We define the boundary between the AG and fBEC regimes where g(4.4d) = 0.01. We have verified that above this threshold the neighbouring states are also phase-locked, i.e. they constitute a coherent fragment. For increasing interaction energy the size of these coherent fragments increases, while their number decreases. Due to the harmonic confinement, the system tends to form a large central core surrounded by fragments in the low-density tails.
We assume that the system has entered the BEC regime when there is a single macroscopic fragment at the center of the trap, and the population of each of the outer fragments is less than 1% of that of the central one.

Transport experiments
In the experiment, in addition to studying the momentum distribution, we have investigated the transport properties of the various regimes of the system, using the same technique we already employed in ref. 7. The technique consists of suddenly releasing the axial harmonic confinement, while keeping both the quasi-periodic lattice potential and the radial confinement, and then observing the subsequent diffusion (or lack thereof) of the atomic cloud in the axial direction. In the previous experiment we observed that for vanishing E_int the diffusion becomes strongly suppressed for ∆/J > 2. We have now observed that it continues to be suppressed for ∆/J ≳ 2 also for the values of E_int explored in the present work, irrespective of whether the system is in the AG, fBEC or BEC regimes. We can interpret the absence of diffusion in the AG (fBEC) regime as a result of the insulating nature of the system, which occupies (partially) localized states. In contrast, in the extended BEC regime, a slow subdiffusive expansion is expected (ref. 8), or the expansion might be completely suppressed for some values of the parameters by self-trapping (ref. 9). The slow expansion, however, would probably not be detectable on the one-second timescale of the experiments we have performed so far. The absence of a different diffusion behaviour of the system in the AG, fBEC and BEC regimes unfortunately does not allow us to identify the boundary between the fBEC and BEC regimes using this technique, nor to unambiguously prove the insulating nature of the AG and fBEC regimes. Other methods therefore need to be developed for such a purpose.
"year": 2009,
"sha1": "625c4e9f52b670a798de6ffb182e363f863216af",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/nphys1635.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "625c4e9f52b670a798de6ffb182e363f863216af",
"s2fieldsofstudy": [
"Physics",
"Psychology"
],
"extfieldsofstudy": [
"Physics"
]
} |
Detection of SARS-CoV-2 in urban stormwater: An environmental reservoir and potential interface between human and animal sources
While wastewater has been found to harbor SARS-CoV-2, the persistence of SARS-CoV-2 in stormwater and its potential for transmission are poorly understood. It is plausible that the virus is detectable in stormwater samples where human-originated fecal contamination may have occurred from sources like sanitary sewer overflows, leaky wastewater pipes, and non-human animal waste. Because of these potential contamination pathways, it is possible that stormwater could serve as an environmental reservoir and transmission pathway for SARS-CoV-2. The objectives of this study are to: 1) determine whether the presence of SARS-CoV-2 could be detected in stormwater via RT-ddPCR (reverse transcription-droplet digital PCR); 2) quantify human-specific fecal contamination using microbial source tracking; and 3) examine whether rainfall characteristics influence virus concentrations. To accomplish these objectives, we investigated whether SARS-CoV-2 could be detected at 10 storm sewer outfalls, each draining a single, dominant land use in Columbus, Xenia, and Springboro, Ohio. Of the 25 samples collected in 2020, at least one SARS-CoV-2 target gene (N2 [US-CDC and CN-CDC] or E) was detected in 22 samples (88%). Among target gene concentrations and rainfall characteristics, a single significant correlation (p = 0.001) was found, between antecedent dry period and the US-CDC N2 gene. When grouped by city, two significant relationships emerged, showing that cities had different levels of the SARS-CoV-2 E gene. Given the differences in scale, county-level confirmed COVID-19 case rates were not significantly correlated with stormwater outfall-scale SARS-CoV-2 gene concentrations. Countywide COVID-19 data did not accurately portray neighborhood-scale confirmed COVID-19 case rates.
Potential hazards may arise when human fecal contamination is present in stormwater; this finding motivates future investigation of the threat of viral outbreaks via surface waters where fecal contamination may have occurred. Future studies should investigate whether humans are able to contract SARS-CoV-2 from surface waters and the factors that may affect viral longevity and transmission.
Introduction
In December 2019, the novel coronavirus disease (COVID-19), an illness caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was first detected in Wuhan, China (WHO, 2020a, 2020b). The ongoing COVID-19 outbreak, declared a public health emergency by the World Health Organization (WHO) on March 11, 2020, poses a significant risk to international public health, with more than 187 million confirmed cases worldwide and 4.04 million deaths as of July 2021 (WHO, 2020a, 2020b). At present, there are two known primary methods of transmission for COVID-19: person-to-person contact and respiratory droplets emitted during exhalation (Cui et al., 2019; Kang et al., 2020). Contraction of this disease is not limited to humans, as it can also infect other mammals, including domesticated felines and canines, commodity animals such as mink, and wildlife species such as deer and non-human primates (Kitajima et al., 2020; Newman et al., 2020; Munnink et al., 2021; Palmer et al., 2021).
To date, little is known about whether water is a possible method of transmission, including transmission via contact with or accidental ingestion of wastewater or stormwater. Studies have shown that SARS-CoV-2 can be shed in the fecal matter of infected individuals (Ahmed et al., 2020a). Because this virus can remain intact and viable in water under specific conditions (e.g., turbidity, temperature, and time in solution impact viability), research is needed to determine whether SARS-CoV-2 can be found in stormwater, its potential as a source of infection, and what variables in water influence its viability (Sayess et al., 2020). Published studies currently show that SARS-CoV-2 is detectable and viable for up to two weeks in sewage, and thus could cause COVID-19 infections (Zaneti et al., 2020). Zheng et al. (2020) found that SARS-CoV-2 was detectable in human stool for a median of 22 days. Another study found that fecal shedding of SARS-CoV-2 could continue up to seven weeks past the cessation of COVID-19 symptoms (Kitajima et al., 2020). This is concerning since the virus can potentially survive for multiple days in wastewater and be transmissible once the original host is no longer contagious (Orive et al., 2020; Zheng et al., 2020). Even less is known about the ability of SARS-CoV-2 to persist in stormwater. Additional research is necessary to determine SARS-CoV-2 transmissibility in waters, as current data only give insight into whether it is culturable from fecal matter.
Since the outset of the COVID-19 pandemic, efforts to understand how the virus spreads, infects, and persists have been substantial. Recent studies found that the virus sheds in stool at sufficiently high levels for successful detection and analysis (Gao et al., 2020; Holshue et al., 2020; Wurtzer et al., 2020). Stormwater is a potential conveyance mechanism for SARS-CoV-2 via sanitary sewer overflows, animal feces, and wastewater entering separate storm sewers through leaks or otherwise accidental cross-connections. Herein, we define stormwater as runoff discharging from separated storm sewers into streams, lakes, and rivers (i.e., not combined sewers, which convey both waste and stormwaters); since recreation often occurs in these locations, there is a potential public health risk if interaction with viable SARS-CoV-2 occurs (King, 1995). Population density has been positively correlated with higher transmission rates of SARS-CoV-2, making urban environments more likely hotspots for the virus (Liu, 2020). It is well known that infection rates grouped by county cannot account for the wide disparities of disease between adjacent neighborhoods; evidence is mounting that infection rates in the US may be significantly higher in impoverished neighborhoods than in their higher-income counterparts (Adhikari et al., 2020).
Parks and lakes are common summertime destinations and have increased in popularity as worldwide shutdowns reduce the number of available indoor activities (Venter et al., 2020; Geng et al., 2021). Surface waters near stormwater outfalls may pose a potential risk of exposure of SARS-CoV-2 to humans. Monitoring microbial quality at storm sewer outfalls has been used in the U.S. to determine whether downstream waters are safe for swimming and recreation (Dorevitch et al., 2015). This is typically done through fecal indicator bacteria (FIB) as a proxy for estimating recreational waterborne disease risk (Marion et al., 2010). Fecal coliforms are routinely found in stormwater at relatively high concentrations (Sauvé et al., 2012; Mallin et al., 2016), suggesting that stormwater may harbor and convey SARS-CoV-2.
Infiltration and inflow between the sanitary and storm sewer via the 'urban karst' phenomenon (Bonneau et al., 2017), fecal matter from wild animals, and accidental or illicit wastewater connections to the storm sewer may result in the transport of SARS-CoV-2 by stormwater.
Wastewater sewage leaking into stormwater sewers is an established issue, where aging infrastructure conveying human sewage may leak into the storm sewer, discharging it with minimal treatment to surface waters, potentially at a location where the public may interact with the contaminated water (Ahmed et al., 2020b). Human and animal fecal contamination in stormwater collected from urban areas has already been confirmed, possibly making stormwater a new SARS-CoV-2 transmission pathway. One study in Spain reported SARS-CoV-2 infections in two free-ranging mink and posited that the mink were exposed to SARS-CoV-2 via surface waters; because mink are semiaquatic and highly susceptible to SARS-CoV-2, this report provides preliminary and plausible support for the potential threat of SARS-CoV-2 transmission via surface waters (Aguiló-Gisbert et al., 2021). A recent study conducted in the United States also highlighted a previously undocumented phenomenon: wide-spread evidence of SARS-CoV-2 in free-ranging white-tailed deer, with 33% of the wild deer sampled in Pennsylvania, Michigan, New York, and Illinois harboring SARS-CoV-2 antibodies in their serum (Animal and Plant Health Inspection Service [APHIS], 2021b). There is no current evidence for deer-to-human SARS-CoV-2 transmission, but it is possible that deer may be a reservoir for the virus, potentially along with bats, mink, or some non-human primates.
To date, few conclusions have been drawn about how serious a risk SARS-CoV-2 may pose in water, particularly stormwater, where concentrations may be dilute. Closely related coronaviruses, including Severe Acute Respiratory Syndrome coronavirus (SARS-CoV) and Middle East Respiratory Syndrome coronavirus (MERS-CoV), have been reported to persist in water, with SARS-CoV confirmed to have strong survivability in water (Duan et al., 2003). Some enveloped viruses have been demonstrated to be stable in water environments, among them the coronaviruses MERS-CoV and SARS-CoV (Wigginton, Ye, and Ellenberg, 2015). Given its status as an enveloped coronavirus, it is important to investigate the mechanics of the potential water-based presence, survival, and transmission of SARS-CoV-2, which are at present poorly understood. Studies on the persistence of SARS-CoV-2 in wastewaters have identified many possible parameters that affect the viability of virus transmission in water, including temperature, duration of time in water, the presence of other chemicals, pH, and virus concentration (Liu, 2020; WHO, 2020b). Lab studies estimated that SARS-CoV-2 could survive under ideal conditions outside the human body from as little as three days to multiple weeks (WHO, 2020b; Tran et al., 2020). Data suggest that the likelihood of contracting SARS-CoV-2 from treated wastewaters is low (WHO, 2020b; Tran et al., 2020), but this does not account for stormwater, which is subject to minimal treatment and may directly discharge to surface waters during wet weather.
The main objective of this study was to determine if SARS-CoV-2 was detectable in stormwater, to lay the foundation for determining whether stormwater could be a potential transmission pathway. To this end, we collected stormwater samples from three communities with varying population density in central Ohio, USA. To measure the extent of human and animal fecal contamination in stormwater, we conducted microbial source tracking by targeting host-specific fecal bacterial genetic markers. In addition, stormwater-related parameters were compared against SARS-CoV-2 target gene concentrations to examine their potential relationships. This study highlights the importance of the One Health paradigm, as urban stormwater provides a connected and tight interface between humans, animals, and the environment, while addressing the need for managing this possible transmission route now and in the future.
Sewershed descriptions and stormwater sample collection
Ten storm sewer catchments (hereafter sewersheds) were monitored from May 10 to July 24, 2020 in Ohio for the presence of SARS-CoV-2 in stormwater runoff discharging from their respective separate storm sewer networks, and are summarized as follows: Columbus (high density) in Franklin County (population 1.317 million), Xenia (moderate density) in Greene County (population 168,937), and Springboro (low density) in Warren County (population 234,602) (Ohio Development Services Agency, 2019; U.S. Census Bureau, 2019). Sewersheds are defined as a portion of land that drains through the same storm sewer to a single, defined outfall and were characterized by distinctive land use, sewershed area, and imperviousness (Table 1). All sewersheds were representative of a single, dominant land use (i.e., residential, commercial, industrial, etc. covering ≥75% of the sewershed area) in an urban or suburban setting. Land use and sewershed boundaries were defined in GIS using aerial imagery and LiDAR data. A total of 25 stormwater samples were collected for SARS-CoV-2 analysis from single-family residential (18 samples), light industrial (3 samples), commercial (2 samples), and multi-family residential land uses (2 samples). Fourteen of the samples were collected in Columbus, two in Xenia, and nine in Springboro. Urban sewersheds were the focus of this work, and water quality samples were only collected during wet weather events. To characterize runoff, the automated samplers were programmed to take runoff-volume-proportional sample aliquots. The sample trigger was set such that up to 50 sample aliquots were captured during a 50 mm rain event of variable duration; each aliquot was 350 mL, allowing for a maximum collection volume of 17.5 L. Aliquots were suctioned using the automated sampler's peristaltic pump into an 18.9 L composite bottle.
Samplers were programmed with an enable condition such that baseflow was disregarded and only wet weather flows were sampled; when conditions returned to baseflow after the cessation of storm flow, the sampler ceased collecting samples.
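The flow-paced sampling design (up to 50 aliquots per 50 mm event, 350 mL each) implies a fixed runoff volume between aliquots. A sketch of the arithmetic, using a hypothetical sewershed area and runoff coefficient not taken from the text:

```python
# Illustrative flow-paced sampling calculation.  The 50-aliquot / 50-mm design
# and 350-mL aliquot size come from the text; the sewershed area and runoff
# coefficient below are hypothetical.
area_m2 = 20 * 1e4        # hypothetical 20-ha sewershed
runoff_coeff = 0.6        # hypothetical volumetric runoff coefficient
design_depth_m = 0.050    # 50 mm design storm
n_aliquots = 50
aliquot_mL = 350

design_runoff_m3 = design_depth_m * area_m2 * runoff_coeff
pacing_m3 = design_runoff_m3 / n_aliquots        # runoff volume between aliquots

storm_depth_mm = 12.5                            # example storm
storm_runoff_m3 = storm_depth_mm / 1000 * area_m2 * runoff_coeff
aliquots_taken = int(storm_runoff_m3 // pacing_m3)
composite_L = aliquots_taken * aliquot_mL / 1000
print(aliquots_taken, composite_L)
```

The pacing volume scales with catchment size, so each sewershed would be programmed with its own trigger value.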
Composite samples selected for analysis described greater than 80% of the pollutograph (U.S. EPA, 2002). To prevent degradation of the virus, samples were collected within 24 hours of the cessation of rainfall. Upon collection, the 18.9 L containers were vigorously shaken and subsampled into sterile Nalgene bottles (Nalgene, Fisher Scientific, USA).
Samples were immediately placed on ice in a cooler during transit to the laboratory. Twenty-five composite samples were collected across the ten sewershed outfalls between May 10 and July 24, 2020.
Data concerning the daily new confirmed COVID-19 cases for the counties where stormwater sampling occurred were obtained from the Ohio Department of Health COVID-19 dashboard (ODH, 2021) for the day of and week immediately preceding each sample collection.
These data were collected using the Ohio Disease Reporting System, and case numbers were reported using the date of illness onset when known, or the earliest known date of symptoms otherwise. Fecal indicator bacteria concentrations were reported per 100 mL after considering the dilution factors and filtration volumes.
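The conversion of membrane-filtration colony counts to concentrations per 100 mL follows the usual dilution and volume arithmetic; a sketch with hypothetical counts, filtered volume, and dilution:

```python
# Membrane-filtration count to CFU/100 mL.  The colony count, filtered volume,
# and dilution factor here are hypothetical illustration values.
colonies = 42            # colonies counted on the plate
volume_filtered_mL = 10.0
dilution_factor = 10     # sample diluted 1:10 before filtration

cfu_per_100mL = colonies * dilution_factor * (100.0 / volume_filtered_mL)
print(cfu_per_100mL)     # → 4200.0
```

Counts from replicate filters of different volumes or dilutions are typically converted this way before averaging.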
Microbial source tracking processing and gene quantification
To further determine possible fecal indicator bacteria sources, all sample water was processed for MST analyses. Two host markers, human fecal (HF183) and ruminant (Rum2Bac), were targeted for downstream analyses. These genes were used due to previous studies from the Columbus sewer outfalls that confirmed the dominance of human- and ruminant-associated fecal bacteria in stormwater. For microbial filtration, 100 mL of stormwater sample was filtered in triplicate through a sterile 0.22 μm membrane filter (Cat. No. GTTP04700, Millipore Sigma, Burlington, MA, USA). The membranes were folded, placed in a sterile screw-cap microcentrifuge tube, and stored at -20°C for approximately 1-2 weeks until further analysis could be undertaken. Microbial DNA extractions were conducted using a DNeasy PowerSoil Kit (Qiagen). Two individual monoplex assays were conducted to target HF183 and Rum2Bac, with primer and probe sets previously used in stormwater studies. Droplet digital PCR (ddPCR) was employed for gene quantification. Gene amplifications were conducted using 20 μL reactions containing ddPCR supermix for probes (Cat. No. 1863024, Bio-Rad), DNase- & RNase-free water, 900 nM of forward and reverse primers, 250 nM of probe, and DNA templates. Following droplet generation using the QX200 Droplet Generator (Bio-Rad), a Bio-Rad C1000 Touch Thermal Cycler (Bio-Rad, Hercules, CA, USA) was used to amplify the targets with the following conditions: 94°C for 10 minutes; 40 cycles of denaturation and annealing/extension at 94°C for 30 seconds and 60°C for 60 seconds, respectively; followed by 98°C for 10 minutes and then a final hold at 4°C. Following amplification, target gene concentrations were determined using a QX200 droplet reader (Bio-Rad) and QuantaSoft (V 1.7; Bio-Rad).
Viral concentration procedure
The viral concentration protocol was modified from the USEPA Method 1615 (USEPA, 2014). Briefly, 600-800 mL of stormwater sample was passed through a positively charged ViroCap filter (Scientific Methods, Inc., Granger, IN, USA) using a peristaltic pump at a rate of 0.5 L/min. 150 mL of 1.5% beef extract (Cat. No. 211520, Becton, Dickinson and Company, USA) containing 0.05 M glycine (pH 9) (Cat. No. G8898, Sigma-Aldrich, USA) was added to the filter for elution. The eluent was soaked for 30 minutes then circulated using the peristaltic pump for 5 min at room temperature prior to elution. Secondary concentration was performed via organic flocculation. The eluent was pH adjusted to 3.5 ± 0.1 using small additions of 1.2 M hydrochloric acid (HCl) while slowly mixing at room temperature, followed by a 30-min slow mixing period. Next, the adjusted eluent was centrifuged for 15 minutes at 2,500 × g at 4°C and then the pellet containing the flocculated virus was resuspended in 30 mL of 0.15 M sodium phosphate (pH 9) (Cat. No. 255793, Sigma-Aldrich, USA). For complete dissolution, the precipitate was then shaken at room temperature at 160 rpm for 10 min on an orbital shaker. The sample was centrifuged again at 4,000 × g for 10 minutes at 4°C to remove impurities, and the virus-containing supernatant was pH adjusted to 7.0-7.5 using 1.2 M HCl. Lastly, the virus-containing solution was filtered through a 0.22 μm sterile filter (Cat. No. SLGPM33RS, Millipore, USA) and transferred to a Vivaspin 20 unit (30,000 MWCO, Sartorius Stedim, Cat. No. VS2022, Germany) for tertiary concentration. The filtrate was centrifuged at 4,000 × g at 4°C until the final volume was less than 400 μL. The solution was washed with 1 mL of sterile 0.15 M sodium phosphate (pH 7-7.5) and centrifuged at 4,000 × g at 4°C for a final volume of 200 μL (Ijzerman, Dahling, & Fout, 1997).
The concentrated virus filtrate was used for RNA extraction or stored at -80°C until further analysis.
Table 2. Primers and probes used in the SARS-CoV-2 ddPCR assays.
SARS-CoV-2 RNA extraction and viral quantification
Parallel to the gene quantifications of the MST targets, droplet generation using the QX200 Droplet Generator (Bio-Rad) was followed by amplification of SARS-CoV-2 genes using a Bio-Rad C1000 Touch Thermal Cycler (Bio-Rad, Hercules, CA, USA) with the following conditions: 94°C for 10 minutes; 40 cycles of denaturation and annealing/extension at 94°C for 30 seconds and 60°C for 60 seconds, respectively; 98°C for 10 minutes; and a final hold at 4°C. Following amplification, target gene concentrations were determined using a QX200 droplet reader (Bio-Rad) and QuantaSoft (V 1.7; Bio-Rad). The limit of detection (LOD) for all assays conducted in this study was 667 GC/L. For a sample to be considered SARS-CoV-2 positive, a single gene, either E or one of the two detection pathways for the N2 gene, must be detected at a concentration greater than the LOD.
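The positivity criterion described above can be written as a small decision rule. The sketch below is illustrative only; the target keys are hypothetical labels for the E gene and the two N2 detection pathways, not assay identifiers from this study:

```python
LOD_GC_PER_L = 667.0  # limit of detection reported for all assays

def is_sars_cov_2_positive(concentrations):
    """A sample is called positive if the E gene or either of the two
    N2 detection pathways exceeds the LOD (keys are hypothetical labels)."""
    targets = ("E", "N2_US_CDC", "N2_CN_CDC")
    return any(concentrations.get(t, 0.0) > LOD_GC_PER_L for t in targets)

is_sars_cov_2_positive({"E": 1200.0})                    # True: E above LOD
is_sars_cov_2_positive({"E": 300.0, "N2_US_CDC": 50.0})  # False: all below LOD
```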
Data analysis
Statistical analysis was performed using R statistical software (R Core Team, 2021).
Normality was assessed using the Shapiro-Wilk test for all datasets. All data were non-normally distributed except for the E. coli data. Because most data were non-normally distributed, the nonparametric Spearman correlation analysis was used to determine relationships between SARS-CoV-2 genes and indicator bacteria. Instances where the SARS-CoV-2 E and N2 genes were not detected were included as zero concentrations in the analyses. Correlation analyses were also used to assess relationships between gene concentrations and rainfall patterns (depth, duration, and antecedent dry period) and between indicator bacteria concentrations and rainfall patterns.
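The Spearman coefficient is a Pearson correlation computed on ranks, with non-detects entered as zeros as described above tying at the lowest ranks. A minimal stdlib sketch (the concentrations below are hypothetical, not data from this study):

```python
def _ranks(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation of two equal-length sequences."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical E gene (GC/L, non-detects as 0) vs. log10 E. coli (CFU/100 mL)
e_gene = [0.0, 0.0, 800.0, 1500.0, 2100.0, 950.0]
ecoli = [2.1, 2.4, 3.0, 3.8, 4.1, 3.2]
rho = spearman_rho(e_gene, ecoli)
```

In practice `scipy.stats.spearmanr` gives the same coefficient along with a p-value.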
Presence of SARS-CoV-2 in stormwater
Out of 25 analyzed stormwater samples, 22 (88%) had detectable levels of SARS-CoV-2. E. coli was present in all 14 samples analyzed, ranging from 5.00 × 10² to 1.05 × 10⁶ CFU/100 mL; these concentrations were consistent with previous studies exploring fecal contamination of stormwater (Sidhu et al., 2012; Schoen et al., 2017). The mean human-specific fecal marker concentration (8.58 × 10³ GC/L) was more than twenty times greater than the mean Rum2Bac concentration (3.56 × 10² GC/L), the fecal marker associated with ruminant fecal material, suggesting that a sizeable portion of the fecal contamination of stormwater from these 10 sewersheds is from human sewage (Table 3).
SARS-CoV-2 gene-to-gene, MST, and E. coli correlation analyses
The SARS-CoV-2 (E gene) and log HF183 concentrations were significantly correlated (p = 0.03; Table 4). A significant correlation was also observed between the SARS-CoV-2 (US CDC N2) gene and log E. coli concentrations, with a correlation coefficient of 0.63 (Table 4). When comparing the SARS-CoV-2 genes with one another, there was a significant correlation between the N2 and E genes, with a correlation coefficient of 0.41.
SARS-CoV-2 and FIB concentrations across land uses
Indicator bacteria data are shown in the E-gene graph because of the significant correlations (Table 4) between these data and the genes in the plots in which they are depicted.
SARS-CoV-2 concentration and rainfall characteristic analyses
A significant positive correlation (ρ = 0.6, p = 0.001) was observed between antecedent dry period (ADP) and the US-CDC N2 gene. No other combination of SARS-CoV-2 genes or indicator bacteria with rainfall characteristics was significant. Likely, the lack of significant correlation between rainfall characteristics and SARS-CoV-2 data is the result of the low concentration of the virus in stormwater, the high LOD, and the relatively small sample size in this study. Improved extraction processes for viral RNA and increased sample sizes may help elucidate these connections in future studies.
SARS-CoV-2 concentration data grouped by county and city
Utilizing the Kruskal-Wallis test, no significant differences were observed between the SARS-CoV-2 genes and the following groupings: single land uses, SFR grouped against all other land uses, and SFR and MFR grouped against LI and Comm land uses. The Kruskal-Wallis test showed significant differences between the following groupings and the SARS-CoV-2 E gene but not the N2 genes: Columbus, Xenia, and Springboro grouped against each other (p = 0.049), and Columbus grouped against Dayton (p = 0.014). A Dunn's test post-hoc analysis on the Columbus, Xenia, and Springboro groupings showed that Columbus had a significantly higher (p = 0.01) concentration of the SARS-CoV-2 E gene than Springboro. Both Dunn and Wilcoxon post-hoc tests showed that Columbus had a significantly higher concentration (p = 0.007 and p = 0.016 respectively) of the SARS-CoV-2 E gene than Dayton.
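The Kruskal-Wallis test above compares ranks across groups. A stdlib sketch of the H statistic with tie correction follows; the city groupings and E-gene concentrations below are hypothetical illustrations, not the study's data:

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic with tie correction."""
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    order = sorted(range(n), key=lambda i: pooled[i])
    ranks = [0.0] * n
    tie_term = 0.0
    i = 0
    while i < n:                              # assign average ranks to ties
        j = i
        while j + 1 < n and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        t = j - i + 1
        tie_term += t ** 3 - t
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    start = 0
    h = 0.0
    for g in groups:                          # sum of (rank sum)^2 / n_i
        r_sum = sum(ranks[start:start + len(g)])
        h += r_sum ** 2 / len(g)
        start += len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    correction = 1 - tie_term / (n ** 3 - n)
    return h / correction if correction else h

# Hypothetical E-gene concentrations (GC/L, non-detects as 0) by city
columbus = [2100.0, 1800.0, 2500.0, 1600.0]
springboro = [0.0, 700.0, 900.0]
dayton = [0.0, 800.0, 650.0]
h = kruskal_wallis_h([columbus, springboro, dayton])
# With k = 3 groups, h is referred to a chi-square distribution with
# df = 2; the 5% critical value is 5.991.
```

For p-values and Dunn's post hoc comparisons one would normally use `scipy.stats.kruskal` and a post hoc package rather than hand-rolled code.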
Discussion
Of the samples collected between May 10th and July 24th, 88% had detectable levels of at least one SARS-CoV-2 gene. Of the samples taken from SFR land uses during this same sampling window, 94% had detectable levels of at least one SARS-CoV-2 gene. The concentrations of SARS-CoV-2 genes in these stormwater samples were lower than those detected in wastewater samples collected during the same period of the pandemic in the northeastern United States, likely due to the dilute nature of fecal matter in stormwater (Peccia et al., 2020).
The presence of human-specific fecal contamination in stormwater is a potential cause for concern as a vehicle of transmission for SARS-CoV-2. Given that only two of the 25 samples collected for SARS-CoV-2 analysis had detectable levels of the Rum2Bac gene while 21 of the 25 samples tested had detectable levels of HF183, we concluded that the majority of fecal bacteria detected in this study were likely human-associated. It is important to note that while the correlation between E. coli and HF183 was high (ρ = 0.85), it was not statistically significant.
Further, data from Csiszar et al. (2020) suggest that animals, including ruminants, make up a minority of SARS-CoV-2 infections (CDC, 2021). Based on this, it is unlikely that non-human mammals are the source of the SARS-CoV-2 genes detected in this study. However, whole genome sequencing would need to be completed to confirm these findings.
Theoretically, all genes within the SARS-CoV-2 genome should correlate with one another; this was not the case herein. It was determined through correlation analysis that only the CN-CDC N2 and E genes correlated with one another. The detection and quantification methods employed were insufficiently sensitive for the low levels of SARS-CoV-2 present in these samples. As the limit of detection is lowered and gene copies per liter of SARS-CoV-2 in solution increase, gene-to-gene and gene-to-HF183 correlations should rise past the threshold of significance. This study was conducted in the early months of the SARS-CoV-2 outbreak and utilized less sensitive concentrating methods (Ai et al., 2021), since few published methodologies existed at the time. Although concentrating methods varied, it was clear within the scientific community at the time which primers and probes were sufficient for SARS-CoV-2 detection, supported by both WHO and CDC scientists around the world, and we ruled out poor primer and probe specificity as a possible cause for the lack of detection of SARS-CoV-2 genes in the collected samples (CDC, 2020; Ai et al., 2021). Moreover, increased sample sizes could also improve potential correlations, which can be explored in future studies.
As we detected SARS-CoV-2 and other fecal markers in stormwater, albeit with low sensitivity, this study demonstrates the potential of applying wastewater-based viral concentration methods to other water types, such as stormwater. As the science around viral concentration in water developed throughout the pandemic, concentration methods improved to better process larger volumes of dirtier water and to better fit the needs of SARS-CoV-2 surveillance research (Ai et al., 2021; LaTurner et al., 2021). Given the relatively low viral concentrations observed in the collected stormwater, it would be possible to process larger volumes of water using better-understood methods for the same purpose of viral surveillance moving forward. Interest in stormwater as a matrix during current and future public health crises may grow, and confirming feasible methods for this research is important. Knowing what we now understand to be more appropriate concentrating methods, and confirming that these methods overlap for both wastewater and stormwater, will support best practice in the future.
A single correlation between rainfall characteristics (ADP) and SARS-CoV-2 genes (US-CDC N2) was present in our data. Rainfall depth, duration, and intensity influenced concentrations of SARS-CoV-2 in stormwater less than ADP in this study, likely because pollutant loads, including genetic material from SARS-CoV-2, accumulate in the sewersheds as the time between storms increases. The strong correlation between the US-CDC N2 gene and ADP (ρ = 0.6, p = 0.001) suggests that buildup and wash-off processes may be at play. Further research is required to elucidate relationships between viral genes and rainfall characteristics.
The surface of the SARS-CoV-2 envelope is positively charged; however, the spike proteins that protrude from the envelope and are responsible for binding carry a net negative charge (Pawlowski, 2021; Hassanzadeh et al., 2020). The charge of particles strongly influences sorption to sediment surfaces (Björklund & Li, 2018; Chen, Feng, and Huang, 2013), with negatively charged particles typically repelled by sediment in stormwater. First flush events are a phenomenon observed in stormwater runoff in which the early stages of the hydrograph contain disproportionately high pollutant loads compared to the remainder of the hydrograph (Perera et al., 2021). High sediment loads are commonly documented in first flush events (Chow & Yusop, 2014; Hathaway & Hunt, 2011). Other pollutants are present in first flush events as well, and of these, most are positively charged and bound to the negatively charged sediment surfaces (Holzmann, Simeoni, & Schäffer, 2021; Taebi & Droste, 2004). Rainfall characteristics, especially intensity, duration, and depth, often significantly impact the first flush (Tiefenthaler & Schiff, 2001; Zuraini & Alias, 2020). Given that SARS-CoV-2 has negatively charged spike proteins that are unlikely to bind to negatively charged sediment surfaces, concentrations of the virus in stormwater might not be significantly influenced by the rainfall characteristics that cause the first flush. This could explain why SARS-CoV-2 did not correlate with rainfall depth, duration, or intensity. Stormwater can serve as a key matrix of study for future exploration of pathogens in the environment.
The severity and frequency of emerging infectious disease outbreaks are expected to increase in the future due largely to the effects of increasing extreme weather events (Redding et al., 2019;Hertig, 2019;Sanderson and Alexander, 2020) and increased population density (Liu, 2020;Aabed & Lashin, 2021). Given the lack of data surrounding the potential transmission pathway of SARS-CoV-2 via contaminated surface waters, the increasing risks of outbreaks because of climate change, continued urbanization worldwide, and aging sewer infrastructure, it is imperative that stormwater be explored as a real and present reservoir of SARS-CoV-2 and other potential pathogens and contaminants in the future.
Conclusion
This study was one of the first to detect SARS-CoV-2 in stormwater from the early waves of the COVID-19 pandemic in the United States, between May 10th and July 24th, 2020.
This study confirmed the presence of SARS-CoV-2 in stormwater. The MST data indicated that the majority of fecal contamination present in the samples came from human rather than ruminant sources. The viability of the virus in surface water and wastewater is still poorly understood, and further analysis is necessary to better understand the relationship between the virus and the water it may contaminate.
With respect to how viral load relates to rainfall characteristics, given the small data set, wide variation in land use, and relatively high LOD, it is possible that any of the SARS-CoV-2 genes could correlate with a wide array of rainfall characteristics, though this is speculation.
A larger, more robust data set is required to fully investigate the relationship between viral load and rainfall characteristics.
This study makes no claims about transmissibility or the likelihood of contracting SARS-CoV-2 from surface waters, only investigating whether it is detectable. Follow-up studies should investigate whether SARS-CoV-2 is intact, viable, infectious, and transmissible through fecal-oral (enteric) routes. Stormwater is a conveyance mechanism for a variety of pathogens and is one cause of increased risks associated with waterborne diseases in increasingly populated urban areas. This study showed that urban stormwater is subject to contamination with SARS-CoV-2, among other pathogens, and should be considered a potential public health threat. Future work should focus on strategies to reduce bacterial and viral contamination of stormwater prior to discharge to surface waters.
Funding
This work was partially supported financially as a Targeted Investment by The Ohio State University Infectious Diseases Institute. We also acknowledge funding support by Ohio Environmental Protection Agency, Ohio Water Development Authority, and the City of Columbus.
Acknowledgements
This work would not have been possible without the help of Yuehan Ai in the lab of Dr. Jiyoung Lee at The Ohio State University, for her support in developing methodologies and for viral analyses. We would also like to thank sampling team members Deirdre Wetmore and Emily Wilson for their help in the field.
Declarations of Competing Interest
Elastic Dynamic Sling on Subluxation of Hemiplegic Shoulder in Patients with Subacute Stroke: A Multicenter Randomized Controlled Trial
Background: Shoulder subluxation occurs in 17–64% of hemiplegic patients after stroke and develops mostly during the first three weeks of hemiplegia. A range of shoulder orthoses has been used in rehabilitation to prevent subluxation. However, there is little evidence of their efficacy. Aim: This study aimed to investigate whether there is a difference in the subluxation distance, pain, and functional level of the hemiplegic upper extremity among patients with two different shoulder orthoses. Design: This is a prospective, randomized controlled trial with intention-to-treat analysis. Setting: Multicenter; rehabilitation medicine departments of two university hospitals in South Korea. Population: Forty-one patients with subacute stroke with shoulder subluxation greater than 0.5 finger width within 4 weeks of stroke were recruited between January 2016 and October 2021. Methods: The experimental group used an elastic dynamic sling while sitting and standing to support the affected arm for eight weeks. The control group used a Bobath sling while sitting and standing. The primary outcome was the distance of the shoulder subluxation on radiography. The secondary outcomes were upper-extremity function, muscle power, activities of daily living, pain, and spasticity. Results: The horizontal distance showed significant improvement in the elastic dynamic sling group, but there were no significant differences in the vertical distance between the elastic dynamic and Bobath sling groups. Both groups showed improvements in upper-extremity movements and independence in daily living after 4 and 8 weeks of using shoulder orthoses, and the differences within the groups were significant (p < 0.05). However, there were no significant differences in upper-extremity movements and independence in daily living between the two groups.
Conclusions: The subluxation distance showed better results in the elastic dynamic sling, which has both proximal and distal parts, than in the Bobath sling, which holds only the proximal part. Both shoulder orthoses showed improvements in the modified Barthel index, upper-extremity function, and manual muscle testing.
Introduction
In stroke patients, shoulder subluxation is a common complication. Weakness of the upper extremity of the affected side and the weight of the dependent arm cause a downward displacement of the humeral head from the shallow glenoid fossa, causing shoulder subluxation [1]. The etiopathogenesis is unclear, but it has been suggested that weak muscles around the shoulder joint interrupt the mechanical integrity and stability of the joint, resulting in a palpable gap between the acromion and humeral head. In the first three weeks of hemiplegia, the affected arm is flaccid or hypotonic; hence, the shoulder muscles cannot anchor the humeral head within the glenoid cavity. The incidence of shoulder subluxation on the hemiplegic side ranges from 17% to 64% [2][3][4][5].
Stroke can cause shoulder subluxation and may lead to hemiplegic shoulder pain, resulting in shoulder contracture and secondary irreversible damage to the muscles, ligaments, joint capsules, nerves, and blood vessels. Pain and joint contracture caused by shoulder subluxation can have a negative impact on the recovery of upper-extremity function in patients with stroke [6]. It can lead to serious limitations in activities of daily living, balance, mobility, and upper-limb and hand functions. It is associated with a higher incidence of depression, both during and after rehabilitation [7,8].
The underlying hypothesis for the association between shoulder subluxation and pain is that gradual stretching of the capsule and tendons causes them to become ischemic and painful. In addition, the weight of the arm and sustained stretching of the soft tissues can cause damage and inflammation [9].
To prevent and treat shoulder subluxation, arm rests, shoulder orthoses, shoulder taping, functional electrical stimulation, botulinum toxin, peripheral nerve stimulation (PNS), transcutaneous electrical nerve stimulation (TENS), and neuromuscular electrical stimulation (NMES) are used [10][11][12][13][14][15]. Among them, orthoses may be implemented to provide a low-load prolonged stretch to prevent length-associated changes in muscles and connective tissue that can limit the function of the affected limb after stroke [16]. An orthosis is a removable device that immobilizes joints for therapeutic purposes by applying a prolonged static stretch to the muscles. The proposed benefits of orthoses in individuals with neurological impairments include decreasing spasticity, improving function, preventing contracture, minimizing pain, and decreasing swelling [17].
Various types of shoulder orthoses are used to prevent and treat subluxation. Based on a 2005 Cochrane review [10], there was insufficient evidence to conclude whether shoulder slings could prevent vertical subluxation or decrease shoulder pain. The authors recommended that randomized controlled trials be conducted to evaluate the efficacy of devices to support the shoulder. Expert consensus also recommended that such devices be trialed as soon as the patient can be positioned upright and continued for four to six weeks, as research on the immediate post-stroke period is lacking.
In this study, two different shoulder orthoses were used in patients with subacute stroke. The elastic dynamic shoulder sling, a new orthosis with proximal and distal attachments, was compared with the commonly used Bobath roll sling. The purpose of this study was to investigate whether there is a difference in the subluxation distance, pain, and upper-extremity function between the two shoulder orthoses.
Design
This was a prospective, randomized, controlled, multicenter trial. Patients who had experienced a first stroke, were receiving inpatient treatment, and had subluxation greater than 0.5 finger width were recruited between January 2016 and October 2021 at the Rehabilitation Departments of Kyunghee University Hospital at Gangdong and Chungnam National University Hospital.
Randomization
The scientific validity of the clinical trial was ensured by maximizing the comparability of the experimental (elastic dynamic shoulder sling) and control (Bobath sling) groups through randomization and by preventing interference from the subjectivity of the research team. Using a random function in Excel, a stratified randomization code was generated with sex and institution as stratification variables. The ratio of the test and control groups was 1:1.
Participants
Patients were included if they were within 4 weeks of their first stroke, had shoulder subluxation greater than 0.5 finger width, and had sufficient cognitive function to express pain. Patients were excluded if they had shoulder weakness before stroke (e.g., due to spinal cord injury or myopathy), an inability to report pain (as seen in patients with total aphasia or cognitive decline), a history of shoulder joint disease before stroke, or age < 18 years.
Intervention
The experimental group received an elastic dynamic shoulder sling (Figure 1) and the control group received a Bobath sling (Figure 2) to support the affected upper extremity. Both groups wore their orthoses immediately after transfer to the Department of Rehabilitation Medicine within four weeks of stroke onset. They wore the orthoses for a period of 8 weeks during the active time of the day, but not when lying in bed or during formal therapy sessions. All patients, independent of the assigned study group, underwent the same standard rehabilitation program. The therapy program focused on avoiding complications related to the severely impaired upper limbs.
Examinations and evaluations, including radiography, were performed during clinical follow-up visits. The timing of the procedures is as follows (Figure 3): T1, immediately after transfer to the Department of Rehabilitation Medicine within four weeks of stroke onset.
Outcomes
(1) Primary outcome. Subluxation distance: measured on a true anteroposterior X-ray, which brings the scapula of the injured side parallel to the X-ray plate and thereby avoids overlap of the humeral head and the glenoid.
After each participant was seated on a chair, a true anteroposterior (AP) simple radiographic examination of both shoulder joints was performed in an upright posture, with the arm in a neutral position hanging down under gravity.
(2) Secondary outcomes. FMA: To evaluate the recovery of motor function in stroke patients, upper-extremity motor function was evaluated using the FMA scale. The maximum score is 100 points, with 66 points for upper-extremity motor function and 34 points for lower-extremity motor function. In this study, only upper-extremity motor function was assessed. Each item is scored on a three-point scale: 0, unable to perform; 1, partially able to perform; and 2, completely able to perform. This test is known to have high test-retest reliability and high inter-examiner reliability and validity [18].
K-MBI: The degree of dependence of the patient when performing activities of daily living was evaluated in five categories: complete independence, little help, moderate help, much help, and complete dependence. The evaluation consisted of ten areas: eating, dressing, grooming, bathing, bed/chair transfer, toilet transfer and use, walking (or wheelchair propulsion), stair climbing, and bowel and bladder control [19].
Pain: The degree of shoulder pain at each time point was indicated on a visual analog scale (VAS, 0-10), which is commonly interpreted as a reasonably valid report of subjective pain. Each participant was asked to rate the presence and degree of pain in the affected shoulder on a scale of 0 (no pain experienced) to 10 (worst pain imaginable) during evaluation.
MAS: It is the most universally accepted clinical tool that is used to measure increase in muscle tone. Spasticity was defined by Jim Lance in 1980 as a velocity-dependent increase in muscle-stretch reflexes associated with increased muscle tone as a component of the upper motor neuron syndrome [20].
MMT: It is the most commonly used method for documenting impairments in muscle strength [21]. The muscle power of the shoulder deltoid muscles was examined. Shoulder forward flexion and abduction were tested using the manual muscle testing procedure, and the average was recorded.
Data Analysis
Three analysts measured and analyzed the radiographs in a random order to reduce measurement bias. Distance measurements of shoulder subluxation from a single radiograph were used, as described by Brooke et al. [22]. The central point of the glenoid fossa of the scapula was determined by marking the most distant vertical and horizontal edges. Height and width measurements were then bisected to determine the location of the central point of the glenoid fossa. The central point of the humeral head was determined by measuring the greatest distance that could be horizontally obtained across the head. This line was bisected to provide the central point of the humeral head. The inferior acromial point was determined by identifying the most inferior point on the acromial and lateral surfaces of the acromioclavicular joint. The vertical distance (VD) was measured from the acromial point to the central point of the humeral head. The horizontal distance (HD) was measured from the central points of the humeral head and the glenoid fossa ( Figure 4).
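Once the three landmarks are located on the radiograph, the two distances reduce to simple point-to-point geometry. The sketch below assumes calibrated (x, y) coordinates in millimetres and uses Euclidean distances between the marked points; the coordinates are hypothetical, not measurements from this study:

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def subluxation_distances(acromial_pt, humeral_center, glenoid_center):
    """Vertical distance (VD): inferior acromial point to the central
    point of the humeral head. Horizontal distance (HD): central point
    of the humeral head to the central point of the glenoid fossa."""
    vd = distance(acromial_pt, humeral_center)
    hd = distance(humeral_center, glenoid_center)
    return vd, hd

# Hypothetical landmark coordinates (mm) on a calibrated radiograph
acromion = (10.0, 50.0)
humerus = (14.0, 38.0)   # central point of the humeral head
glenoid = (11.0, 40.0)   # central point of the glenoid fossa
vd, hd = subluxation_distances(acromion, humerus, glenoid)
```

Side-to-side comparison (affected minus unaffected shoulder) would then quantify the subluxation attributable to hemiplegia.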
Statistical Analysis
Statistical analysis was performed using Statistical Package for Social Sciences (version 25.0; SPSS Inc., Chicago, IL, USA). The analysis was conducted by an independent scientist and statistician. p-value of <0.05 was considered significant.
Independent-samples t-tests were used to assess differences in the degree of subluxation between the groups: the differences in radiographic results between time points, T2 − T1 (∆T1) and T3 − T2 (∆T2), and the difference in radiographic results before and after wearing a shoulder orthosis at T1. A linear mixed model was used to confirm changes within the groups over time [23].
With the power set at 80% and an overall p < 0.05, we needed 21 subjects per group. To allow for dropouts, we planned to recruit 36 participants per group. Post hoc power analysis showed that group sample sizes of 21 and 20 achieved 80.940% power to reject the null hypothesis of equal means when the population mean difference was µ1 − µ2 = 2.28 − (−0.08) = 3.08, with standard deviations of 3.11 for group 1 and 3.66 for group 2, and with a significance level (alpha) of 0.050 using a two-sided two-sample unequal variance t-test. Effect size was 0.909 [24].
Results
The flow of participants during the trial is summarized in Figure 5. From January 2016 to October 2021, 241 patients with stroke were assessed for eligibility. A total of 125 patients did not meet the inclusion criteria, and 44 declined to participate. A total of 72 patients participated in this study, of whom 31 dropped out for reasons such as refusal to wear the Bobath sling (1.3%), stroke recurrence (1.3%), change from the Bobath sling to an elastic dynamic sling (2.7%), and early discharge and follow-up loss due to coronavirus disease (COVID-19) (37.5%). Finally, 41 patients were included in the final analysis. Comparisons and statistical analyses between the groups were performed at baseline, four weeks, and eight weeks. Table 1 shows the baseline characteristics of the participants. The average age of the participants was 64.19 ± 13.48 years. The study population consisted of 26 patients with infarction, 15 with hemorrhage, two with brain stem lesions, and 39 with non-brain stem lesions. There were no significant differences (all p > 0.05) between the two groups in terms of baseline characteristics, including sex, age, stroke type, location of the lesion, and baseline measurements.
Comparisons of the primary outcomes are shown in Tables 2 and 3. There were no significant differences in the vertical distance between the elastic dynamic sling and Bobath sling groups. Horizontal distance was significantly reduced in the elastic dynamic sling group compared to that in the Bobath sling group at eight weeks after sling usage (p = 0.006). As shown in Table 4, the horizontal distance of the affected shoulder gradually increased in the Bobath sling group. Comparisons of secondary outcomes within the groups are shown in Table 5. All participants demonstrated an increase in MBI, FMA scale, and MMT of the shoulder after four weeks and eight weeks of intervention without significant improvement in pain.
The comparisons between the groups are shown in Table 6. There were no significant differences in MBI, FMA scale, MMT of the shoulder, and pain.
Discussion
The results showed a significant difference in the horizontal subluxation distance at 8 weeks compared to 4 weeks, which indicates that the effectiveness of the elastic dynamic sling increased with longer periods of use.
The Bobath sling used in the control group only provided proximal support. In light of the study results, the main benefit of the Bobath roll is the alignment of the upper limb as a whole, avoiding flexion and internal rotation [22,25]. The arm is supported in a pattern of abduction and extension; therefore, flexor spasticity throughout the whole upper limb is potentially reduced. The limb remains free for function, which is important for balance. This position allows for increased motor activity, symmetry, and bilateral upper-extremity activity. The support remains aesthetically acceptable and can be covered by garments [26].
Radiological evidence indicated that the Bobath sling caused significant distraction of the humerus in the horizontal plane. Other studies on the Bobath shoulder sling also identified, through the use of radiographs, that the Bobath sling produced a significant lateral displacement of the head of the humerus [27,28]. This study showed similar results as those of a previous study, which showed that horizontal distance gradually increased over time ( Table 4).
The elastic dynamic shoulder sling showed an effect on vertical subluxation similar to that of the Bobath sling. It is made of a stretchable material, allowing it to adjust shoulder subluxation along both the horizontal and vertical axes. Therefore, it was possible to correct the displacement in the horizontal direction, which the Bobath sling could not. In addition, the proximal and additional distal support allows the patient to freely use their hand and wrist while wearing the orthosis during rehabilitation. This result is consistent with a 2017 systematic review of shoulder orthoses by Nadler [14], which showed that orthoses with proximal and distal attachments are more effective in preventing shoulder subluxation.
In a previous study, horizontal shoulder subluxation was found to cause supraspinatus tendinitis. The supraspinatus tendon is one of the major sites of soft tissue injuries and lesions, and such lesions may cause more pain and poorer upper-limb motor function when combined with impaired sensation and shoulder spasticity [29]. In this study, the horizontal distance gradually increased in the Bobath sling group, yet no significant improvement in pain was observed in either group. The VAS score, used here as an indication of pain, is a subjective index, and a clear before-and-after comparison of pain was difficult because most patients with stroke have cognitive impairment and the VAS score recorded before wearing the orthosis was not shown to patients during the follow-up survey.
Both groups showed improvements in upper-extremity function and activities of daily living (Table 5). The goal of rehabilitation therapy for patients with hemiplegia is to restore independence in limb movements and everyday activities. We selected the upper-extremity Fugl-Meyer assessment (FMA) to reflect improvements in upper-limb activity, and the Korean-modified Barthel index (K-MBI) to measure independence in performing everyday activities.
However, there is uncertainty regarding how support devices can improve mobility. Several factors may have influenced this finding. One is that the device could maintain the paralyzed upper limb in a reflex inhibition pattern, which could prevent the development of inefficient movement and ensure that a normal position is maintained in the paralyzed limb. A normal position of the paralyzed upper limb may contribute to functional recovery, and the application of the dynamic shoulder sling or the Bobath sling may encourage patients to exercise properly.
Rehabilitation in this study combined physical exercise with position correction. Therefore, we cannot conclude that the elastic dynamic sling and the Bobath sling improve limb and body function by themselves, but they can be beneficial when combined with physical exercise in the recovery and rehabilitation process.
In addition, there are no standard measurements for evaluating shoulder subluxation. New methods are being developed to accurately measure subluxation, such as diagnostic ultrasound, using clearly defined landmarks. Although research on this is ongoing, the measurement method using ultrasound is not yet the standard measurement method.
By preventing the common complications of subluxation and hemiplegic shoulder pain, patients may be able to participate more extensively in upper-limb rehabilitation, enabling them to maximize their functional recovery and independence.
The limitations of this study should be noted for correct interpretation of the present results. First, 27 patients were lost to follow-up, primarily because of the coronavirus disease (COVID-19) pandemic. Second, the participants were stroke patients, and cognitive deficits are present in over 70% of stroke survivors [30]. When the pain questionnaire was administered at follow-up after eight weeks of wearing the brace, patients were not reminded of their initial responses; it was therefore difficult to judge accurately whether pain had improved or worsened relative to the previous questionnaire. Third, there is no precise method for measuring shoulder subluxation. Lastly, studies with long-term follow-up of patients with hemiplegia and horizontal shoulder subluxation are lacking. Further studies are required to address these issues.
Conclusions
In a previous study, a shoulder orthosis with both distal and proximal parts showed better effects on patient function and pain than an orthosis with only a proximal part [10]. In this study, the horizontal subluxation distance was adjusted better by the elastic dynamic shoulder sling, which has both proximal and distal parts, than by the Bobath sling, which holds only the proximal part. This may reduce the incidence of supraspinatus tendinitis and the associated pain. Both shoulder orthoses showed improvements in MBI, upper-extremity function, and MMT. The application of shoulder orthoses could therefore also improve upper-limb motor function and daily activities in stroke patients. However, no clear differences were observed between the two groups, and further research is required.

Informed Consent Statement: Informed consent was obtained from all subjects.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy matters. | 2022-08-17T15:16:45.460Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "af18197664c545c6b600ee58b811626f5ce5fb9b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/16/9975/pdf?version=1660312141",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7d74c34c0951f06729454b0600fc57412be2456a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245290139 | pes2o/s2orc | v3-fos-license | Exercise and Nutrition Impact on Osteoporosis and Sarcopenia—The Incidence of Osteosarcopenia: A Narrative Review
Osteoporosis and sarcopenia are diseases which affect the myoskeletal system and often occur in older adults. They are characterized by low bone density and loss of muscle mass and strength, factors which reduce quality of life and mobility. Recently, apart from pharmaceutical interventions, many studies have focused on non-pharmaceutical approaches for the prevention of osteoporosis and sarcopenia, with exercise and nutrition being the most important and best studied of these. The purpose of the current narrative review is to describe the role of exercise and nutrition in the prevention of osteoporosis and sarcopenia in older adults and to define the incidence of osteosarcopenia. Most of the publications included in this review show that resistance and endurance exercises prevent the development of osteoporosis and sarcopenia. Furthermore, protein and vitamin D intake, as well as a healthy diet, play a protective role against the development of these bone diseases. However, current scientific data are not sufficient for reaching solid conclusions. Although the roles of exercise and nutrition in osteoporosis and sarcopenia have been extensively evaluated in the literature in recent years, most of the studies conducted present high heterogeneity and small sample sizes, and therefore cannot reach final conclusions. In addition, osteosarcopenia appears to result from the combined effects of osteoporosis and sarcopenia in the elderly. Larger meta-analyses and randomized controlled trials designed with strict inclusion criteria are needed in order to describe the exact role of exercise and nutrition in osteoporosis and sarcopenia.
Osteoporosis is a silent disease, without any clear clinical symptoms, until a fracture occurs. Fractures are a major public health burden, as they are the main causes of morbidity, impairment, decreased quality of life and mortality [1]. Osteoporosis has many medical, economic and social consequences. The total burden of osteoporosis is estimated to grow by 50%, with more than 3 million incident fractures by 2025, a cost translated into almost USD 25.3 billion per year in the US [2].
Worldwide, 200 million people suffer from osteoporosis and 8.9 million fractures occur every year [3]. By 2050, hip fractures may exceed 21 million cases [1]. The prevalence of osteoporosis is 18.3% globally, and it is greater in women than in men (23.1% and 11.7%, respectively) [4]. The direct cost of treating these osteoporotic fractures in five European countries (France, Germany, the United Kingdom, Italy and Spain) is EUR 29 billion, while for the 27 EU Member States as a whole it is EUR 38.7 billion, a cost that is expected to increase by 25% by 2025 [5].
Sarcopenia
Another additive syndrome which also affects humans is sarcopenia. Sarcopenia not only affects daily life, reducing quality of life and strength, increasing the likelihood of falls and causing loss of autonomy [6], but also leads to osteoporosis and obesity and impairs metabolic health [7]. It has been shown that the loss of muscle mass that accompanies sarcopenia leads to increased insulin resistance, which promotes the development of metabolic syndrome and obesity [8].
The origin of sarcopenia is multifactorial and its clinical significance, although universally recognized, is not universally agreed upon [9]. According to the European Working Group on Sarcopenia in Older People (EWGSOP2), sarcopenia requires the presence of both low muscle mass and low muscle function. This group defines sarcopenia as an age-related syndrome characterized by a progressive and generalized loss of skeletal muscle mass and strength with adverse effects on human health.
A concomitant increase in fat mass may also be present. The EWGSOP2 set the recommended cut-off points for muscle mass at two standard deviations below those of a healthy population. Cut-off values were <0.8 m/s for gait speed and, for handgrip strength, <30 kg in men and <20 kg in women [10]. The SDOC suggested sex-specific cut-off points for muscle weakness (low handgrip strength) and slowness (slow gait speed), with higher values for the muscle strength cut-offs, while not taking muscle mass into account (<35.5 kg for men and <20 kg for women, and <0.8 m/s for gait speed) [11].
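For illustration only, the quoted handgrip and gait-speed cut-offs can be expressed as a small screening sketch. This is a hypothetical helper written for this review, not a clinical instrument (the function names and structure are ours, and EWGSOP2 additionally requires a low muscle mass measurement, which is omitted here):

```python
# Hypothetical sketch applying the handgrip and gait-speed cut-offs quoted above.
# Not a clinical tool; muscle mass criteria are deliberately omitted.

GRIP_CUTOFFS_KG = {
    "EWGSOP2": {"male": 30.0, "female": 20.0},
    "SDOC": {"male": 35.5, "female": 20.0},
}
GAIT_CUTOFF_M_S = 0.8  # both frameworks use < 0.8 m/s for slowness


def low_grip(grip_kg, sex, criteria="EWGSOP2"):
    """True if handgrip strength is below the sex-specific cut-off."""
    return grip_kg < GRIP_CUTOFFS_KG[criteria][sex]


def slow_gait(speed_m_s):
    """True if usual gait speed is below 0.8 m/s."""
    return speed_m_s < GAIT_CUTOFF_M_S


# Example: a man with 32 kg grip strength is flagged by SDOC but not by
# EWGSOP2, illustrating the higher SDOC strength cut-off noted in the text.
print(low_grip(32, "male", "EWGSOP2"))  # False
print(low_grip(32, "male", "SDOC"))     # True
print(slow_gait(0.7))                   # True
```

The example makes concrete why studies using different diagnostic criteria can report different sarcopenia prevalences for the same population.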
The prevalence of sarcopenia varies between different populations, with reported rates of 5-50% in people over 65 years of age [12]. These variations depend on factors such as the specific techniques used to measure muscle mass and muscle function, the population under study and the diagnostic criteria [12]. Sarcopenia appears in 5-13% of people in the seventh decade of life and can increase up to 11-50% by the age of 80 years. Furthermore, it is predicted that it will affect more than 500 million elderly people by 2050 [13].
Over time, the lifestyle factors of physical activity and nutrition contribute to attenuating many syndromes and disease symptoms. Thus, the purpose of the present narrative review is to underline the effects of physical exercise and nutrition on osteoporosis and sarcopenia and the incidence of osteosarcopenia.
Narrative Review Construction
The present narrative review was organized according to the "Narrative Review Checklist" proposed by the Academy of Nutrition and Dietetics. Thus, we carried out specific checks of the manuscript's structure and a careful selection of the articles included in the manuscript [14].
Studies Selection
The search was carried out in three electronic databases PubMed, Scopus and Google Scholar. The search strategy included studies published from January 2015 until now. However, some included review studies contained results from high-quality studies which in some cases date before 2005. The pre-defined search terms were: "osteoporosis" or "sarcopenia" or "osteosarcopenia" and "bone mass loss" or "muscle mass loss" or "bone mass density" or "muscle strength" and "physical exercise/performance" or "physical function" and "nutrition". For a more targeted and comprehensive search, the above words were combined with other, more specific terms such as "vitamins" or "supplements/dietary" or "resistance training/resistance exercise training" or "aerobic exercise" or "endurance training/endurance exercise training" or "proteins and hormones".
All published studies retrieved from the literature databases were collected and sorted by date and type of intervention in order to reduce heterogeneity, and then double entries were removed. This review included both observational cohort, case-control, cross-sectional studies, systematic reviews and meta-analyses, and randomized, double-blind studies (randomized controlled trials, RCTs).
Animal and experimental model studies were excluded from the review. In addition, studies with a small sample size were excluded, as well as studies which did not adequately specify the selection criteria or included groups of subjects receiving medication for another disease that affected bone or muscle metabolism. Finally, case reports, editorials, letters to the editor and conference proceedings were excluded from the review. A total of 100 references were included in this review.
Osteoporosis Mechanism
Osteoporosis is mostly caused by an imbalance between the action of osteoblasts and osteoclasts. The three main cell types affected in osteoporosis are osteoblasts, osteocytes and osteoclasts [15]. However, the decrease in estrogen seems to be a prevalent mechanism in osteoporosis, notably in menopause. The reduction in estrogen production causes a sequence of alterations in T-cells and the T regulatory cell (T-reg) subclass.
This is associated with an increase in the secretion of pro-inflammatory cytokines (IL-1, IL-6, IL-17, TNF-α), which promote osteoclast activity. Moreover, a similar mechanism was found for B-cells, whose upregulation is correlated with osteoporosis. Furthermore, the gut microbiome (GM) is correlated with osteoporosis: the absorption of nutrients via the intestine is vital for the regeneration of human tissues, so poor nutrition or prebiotic diets may affect GM secretions, contributing to osteoporosis. A final possible factor is the senescence-associated secretory phenotype (SASP), according to which the increase in senescent cells contributes to the appearance of osteoporosis [16].
Exercise
Although there is heterogeneity between studies, most of them suggest that both weight-bearing and resistance exercises have the optimal effect on prevention and treatment of osteoporosis in older people [17][18][19][20][21]. However, according to the existing literature, it is concluded that these data are so far very limited and further research is needed in order to draw clear conclusions.
A review study of Harding and Beck (2017) demonstrated that bone-targeted programs acted positively on bone mineral density (BMD) and bone mineral content (BMC) of loaded bones [22]. Exercise influences bone strength and mass at all ages. Thus, regular physical activity promotes bone mass increase and bone geometry optimization during childhood and puberty, contributes to bone mass maintenance during adulthood, and reduces the decrease in bone mass loss and strength during old age, preventing osteoporotic fractures in the elderly [23].
However, high intensity and high volume of training together with low energy availability can lead to menstrual dysfunction and decreased bone mineral density and delayed bone growth [24]. Thus, the contribution of nutrition on different types of exercise and especially on resistance exercise is vital for the increase of bone formation.
The National Osteoporosis Foundation (NOF) suggests high- or low-impact weight-bearing and muscle-strengthening exercises to prevent osteoporosis [25]. High-impact exercises include jumping, jogging and aerobics, while lower-impact exercises include walking and step aerobics. Muscle-strengthening exercises include lifting weights, using elastic exercise bands and exercises involving some resistance against gravity. Despite the benefits of walking on body composition and cardiometabolic health, it has marginal or no effect on the prevention of osteoporosis [26]. On the other hand, the LIFTMOR study showed that the combination of high-intensity progressive resistance and impact weight-bearing training has more benefits for BMD at the lumbar spine than a home-based low-intensity program in postmenopausal women [27]. In postmenopausal women, long-term resistance or aerobic exercise contributes to increases in bone formation and mass [28,29]. Specifically, an exercise program combining weight-bearing and resistance activities tends to increase markers of bone formation, namely pro-collagen type 1 N-terminal peptide (P1NP) levels and osteogenic cells (OCs), whereas little or no increase in markers of bone resorption was observed [28]. Therefore, aerobic exercise is efficient both in attenuating the rise in bone resorption and in enhancing bone formation. Another type of exercise which has been investigated is aquatic exercise. Swimming is generally associated with little or no effect on BMD [30,31]. In addition, Moreira et al. investigated a high-intensity aquatic exercise program and showed that it was efficient in increasing the bone formation marker P1NP while simultaneously limiting the increase of a bone resorption marker [29]. Further research is necessary given the lack of studies on the role of aquatic exercise and its effects on BMD.
Additionally, a multidimensional strength training program that simulates daily activities has the best effect on improving activities that require fast and explosive muscle contractions, fast reaction, muscle coordination and balance [32].
Nutrition
The consumption of milk and dairy products has been suggested to reduce the risk of osteoporosis. However, Malmir et al. (2020) [33], in a meta-analysis, found that dairy consumption was not associated with protection against osteoporosis and fractures. Protein and vitamin D supplementation in older people, on the other hand, seems to prevent osteoporosis and fractures by increasing bone density [34], although RCTs do not confirm a reduction in the incidence of falls and fractures after vitamin D supplementation [35]. The PROVIDE study of 380 sarcopenic older adults highlights the important roles of leucine and vitamin D: the group with higher baseline values for both vitamin D 25(OH)D concentration and protein intake had the best outcome in muscle gain [36].
Although the importance of proper nutrition in older people has long been recognized, research evaluating the effects of dietary habits on muscle and bone mass is relatively recent. As age increases, there is a decline in energy intake, which can reach as high as 16-20% in elders >65 years [37]. Older people may eat more slowly, eat fewer and smaller meals and have a reduced appetite [38]. However, in addition to reduced food intake, the quality of the diet also plays an important role in muscle strength in elders [39]. Dietary patterns such as the Mediterranean diet, i.e., rich in vegetables, fruits, fish and good fats, enhance muscle strength and functionality in older people [39].
A recent meta-analysis by Tai et al. (2015) [40] showed that the BMD of the lumbar spine, total hip, femoral neck and total body was slightly increased (by up to 1.85%) by increasing dietary sources of calcium or taking calcium supplements. However, the BMD increases were small and non-progressive and did not reduce rates of BMD loss beyond one year. These results suggest that BMD was not beneficially affected by non-calcium dietary components. Similarly, a meta-analysis by Reid et al. (2014) [41] showed that vitamin D monotherapy did not affect BMD and thus was inappropriate for preventing osteoporosis in a population without vitamin D deficiency.
In contrast, another study investigated the anti-osteoporotic properties of vitamin K2, showing that MK-7 supplements can prevent bone loss at the lumbar spine and femoral neck in postmenopausal women and have a positive effect on bone strength [42]. Several micronutrients appear to be implicated in bone metabolism. Thus, in addition to calcium, vitamin D and vitamin K, zinc, copper, magnesium and manganese are presumed to be important for osteoporosis prevention, while intake of fluoride and strontium seems to be of critical importance in stimulating osteoblasts and inhibiting osteoclasts [43]. A recent literature review indicates that high protein intake may have a protective role in bone density at the lumbar spine, compared to low protein intake, in adults [44]. Furthermore, adequate intake of fruits and vegetables seems to have a positive effect on bone density [45] (Table 1). Table 1. Studies investigating the effect of diet and/or exercise on the prevention, onset, and progression of osteoporosis.
Sarcopenia Mechanism
Sarcopenia is a multifactorial syndrome; thus, the explanation of its cause is still under study, although it has been correlated with the appearance of many symptoms. Inflammation is one such mechanism, acting through the secretion of interleukins (IL-1, IL-6), CRP and tumor necrosis factor-α (TNF-α) [48]. The inflammatory response reduces satellite cell production, causing degradation of muscle tissue [48]. Furthermore, inflammatory upregulation, together with increases in E3 ubiquitin ligases such as MuRF-1, activates the ubiquitin-proteasome system (UPS), which is connected to the degradation of muscle tissue [48].
Another metabolic pathway through which the inflammatory response is upregulated and satellite cells are decreased is the increase of p38 MAPKs, which upregulate p16Ink4a [49]. Satellite cells seem to play a central part in sarcopenia, an effect reinforced by the fibroblast growth factor (FGF) mechanism: the increase of FGF2 and decrease of FGF6 induce a downregulation of satellite cells, again causing muscle tissue degradation [50]. Reactive oxygen species (ROS) are another central mechanism, whose upregulation negatively affects mitochondrial function, causing myosteatosis, a state in which adipose tissue deposits in skeletal muscle [51]. Furthermore, mitochondrial downregulation is connected with an increase in autophagy [52], a catabolic process which has been found to cause sarcopenia [48].
Exercise
The majority of studies have found that exercise improves muscle mass, strength and function, and thus may have a protective and beneficial role against sarcopenia through increases in muscle mass and strength and improvements in mobility, while less active individuals have an increased risk of developing sarcopenia or of its severity increasing [53][54][55][56][57][58][59].
Both muscle size and architecture change with advancing adulthood. A previous study reported a reduction in muscle mass followed by a 30% to 40% decline in the number of muscle fibers between the second and eighth decade [60]. The size of muscle fibers is also affected, but to a lower degree. Type II muscle fibers are 10-40% smaller in older than in younger people [61], whereas type I muscle fiber size is largely unaffected [48]. On the other hand, the increase of the cross-sectional area of type I and II muscle fibers, and lean body mass in elderly individuals, leads to the increase in muscle strength [62].
A multidimensional strength training program that simulates daily activities has the best effect on improving daily activities requiring fast and explosive muscle contractions, fast reaction, muscle coordination and balance [32]. It is well known that resistance training enhances the cross-sectional area and size of muscle fibers, particularly types IIa and IIx (fast-twitch fibers) rather than type I [63]. Beckwee et al. suggested a high-intensity resistance training program in order to achieve maximum strength gains, while a low-intensity resistance training program is adequate to cause an increase in strength [53]. On the other hand, aerobic exercise training increases mitochondrial biogenesis [64] and can enhance muscle hypertrophy and strength [65]. Furthermore, moderate-load eccentric exercises have been shown to be as effective as conventional strength training in increasing muscle volume and strength [66], and consequently in reducing the risk of falls and improving both mobility and quality of life [67].
Last but not least, balance exercises and specifically postural types of training on unstable and stable surfaces seem to contribute to the improvement of body balance [68]. Interventions which last more than 8 weeks and include static balance training and the strengthening of lower limbs act beneficially on the improvement of dynamic balance, resulting in a greater stability. This improvement positively affects walking ability and walking speed but reduces the single leg stance phase [68].
Balance ability depends on the types of muscle fibers. A high percentage of type II fibers contributes to a fast reaction but quick fatigue, whereas a high percentage of type I fibers is effective for standing abilities without early fatigue. This suggests that instability in elderly populations results from frailty and muscle mass degradation, factors which are affected by the age-related shift in muscle fiber composition. The choice of exercise type is therefore vital for patients with sarcopenia [69].
According to various studies reviewed by Marty et al. (2017), these types of physical activity improve muscle mass, strength and function [70]. The combination of exercise and proper nutrition induces mitochondrial biogenesis and function and increases the number and function of satellite cells, while inhibiting inflammatory cytokines, leading to increased protein synthesis and decreased protein degradation [71].
In addition, regular exercise can combat muscle dysfunction as well as neuromuscular damage caused by ageing. Non-mass-dependent muscle factors, such as muscle fiber length and tendon stiffness, were also increased by 10% and 64%, respectively, after exercise interventions in the elderly [72]. Various types of exercise can have a positive effect on an individual's health, with resistance exercises having the best results [17,21].
Nutrition
A balanced diet plays an important role in overall health and bone health, providing energy, macronutrients, vitamins and minerals. However, older people consume less energy and protein compared to younger people, even though their nutritional needs are often higher [73]. Both inadequate nutrient intake and physical inactivity increase the likelihood of falls and fractures or osteoporosis and sarcopenia [74]. Physical frailty as a result of a decline in multiple biological systems in functioning, together with stress factors, increases the risk of osteoporosis and sarcopenia [75].
Dietary patterns such as the Mediterranean diet probably have a positive effect on muscle maintenance, as they provide antioxidants that reduce oxidative stress, one of the main causes of sarcopenia [76,77]. Many RCTs studying the effect of protein or vitamin supplementation on muscle maintenance and the progression of sarcopenia confirm their essential role in preventing the disease. Proteins rich in leucine appear to play a particularly important role because of their anabolic properties [54,78,79]. Leucine supplementation leads to increases in protein synthesis rate, body mass and lean mass in the elderly [80]. HMB, a leucine metabolite, signals through the mTOR pathway, increasing protein synthesis, while simultaneously downregulating the ubiquitin pathway, decreasing protein degradation; through muscle cholesterol synthesis it also provides substrate for cell membrane repair [81].
ESPEN suggests a diet providing at least (a) 1.0-1.2 g protein/kg body weight/day for healthy elderly people, and (b) 1.2-1.5 g protein/kg body weight/day for elderly people with chronic or acute illness. However, many health professionals express concern that protein-rich diets will overwhelm and exacerbate already disturbed kidney function in the elderly. According to guidelines, for the elderly with healthy kidney function or mild dysfunction the aforementioned protein recommendation is safe. In patients with a moderately reduced glomerular filtration rate (GFR) or other forms of chronic kidney disease (CKD), health professionals should weigh the risks of immobility due to falls and death against the risk of developing end-stage kidney disease when applying the clinical guidelines. Notably, patients diagnosed with severe CKD are usually recommended a lower intake of 0.6-0.8 g protein/kg body weight/day [82].
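As a worked example of the ESPEN ranges above, the daily targets can be computed directly from body weight. The function and the status labels below are our own illustrative shorthand, not ESPEN terminology:

```python
# Illustrative calculation of daily protein targets from the ESPEN ranges
# quoted above. Status labels are our own shorthand, not ESPEN terminology.

PROTEIN_G_PER_KG = {
    "healthy_elderly": (1.0, 1.2),
    "chronic_or_acute_illness": (1.2, 1.5),
    "severe_ckd": (0.6, 0.8),
}


def daily_protein_range_g(weight_kg, status="healthy_elderly"):
    """Return the (low, high) daily protein target in grams, rounded to 0.1 g."""
    low, high = PROTEIN_G_PER_KG[status]
    return round(low * weight_kg, 1), round(high * weight_kg, 1)


# A healthy 70 kg older adult: 70-84 g protein per day.
print(daily_protein_range_g(70))                # (70.0, 84.0)
# The same person with severe CKD: 42-56 g per day.
print(daily_protein_range_g(70, "severe_ckd"))  # (42.0, 56.0)
```

The example shows how sharply the recommended range narrows for severe CKD, which is the trade-off the guidelines ask clinicians to weigh.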
Regarding vitamin D, there are many studies showing an association with sarcopenia. However, the level and frequency of dosage has not yet been clarified, nor the duration of treatment that may help improve muscle mass and function. Dietary interventions including whey protein, essential amino acids and vitamin D improve muscle mass and physical performance [83].
Although some studies have been carried out, the effectiveness of the combined action of exercise and diet in improving fitness and preventing disease in older people has not yet been established. The SPRINTT (Sarcopenia and Physical Frailty IN older people: multi-component Treatment) clinical trial is the largest and longest-running study designed to evaluate the effectiveness of complex non-drug therapeutic interventions to prevent motor difficulties in older sarcopenic patients [84]. It is to some extent a continuation of the concept of the LIFE study, which we have previously reported on, and will have a duration of 36 months trying to evaluate the effect of diet and exercise on patients' physical activity [85] (Table 2). Table 2. Studies investigating the effect of diet and/or exercise on the prevention, onset and progression of sarcopenia.
Osteosarcopenia
Over the last six years, researchers have studied the pathophysiological mechanisms of a newly termed condition called osteosarcopenia or sarco-osteopenia, in which symptoms of both osteoporosis and sarcopenia are observed [93]. Osteosarcopenia is a syndrome described by the co-existence of osteoporosis and sarcopenia, with shared clinical and biological features [94]. The relationship between osteoporosis and sarcopenia is reasonable in the context of the bone-muscle unit: both tissues are derived from a common mesenchymal progenitor stem cell [70]. Muscle cells secrete bone-regulating cytokines, while bone cells secrete IGF-1, which has potential muscle-stimulating properties [70]. Osteoporosis and sarcopenia share many similarities, including high prevalence, high socioeconomic costs, mechanisms of action and crucial effects on patients' quality of life [9]. In addition, both lead to losses in bone mass and muscle quality, respectively, which are age-related but exacerbated by the presence of these diseases [9]. Furthermore, sarcopenic obesity, which is observed in elders, may increase the risk of cardiometabolic diseases, disability and mortality and accelerate the decline of physical function, because of synergistic complications from both sarcopenia and obesity [95]. Obesity, sarcopenia and osteoporosis may coexist as an entity called "osteosarcopenic obesity", with patients experiencing health problems more severely than individuals with only one of these disorders [96] (Table 3). Table 3. Studies investigating the effect of diet and/or exercise on the prevention, onset and progression of both osteoporosis and sarcopenia (osteosarcopenia).
Conclusions
Osteoporosis and sarcopenia are major health problems that occur during ageing. Their prevention is particularly important as they are associated with an increased risk of fractures, loss of muscle mass and functional failure. In addition, the ever-increasing prevalence of these diseases is a major public health issue. There is a positive association between resistance/strengthening exercises and the prevention of osteoporosis and sarcopenia. In addition, protein and vitamin D supplements, as well as other vitamins and/or trace elements, seem to help in the better management of these diseases. However, due to the small number of samples available in most studies, it seems necessary to carry out randomized studies and meta-analyses of large population size and strictly defined criteria in order to draw valid conclusions on the effect of these interventions on osteoporosis and sarcopenia and to determine the length of time they should be applied in order to obtain long-term benefits in older people. There is also a need to further investigate the interaction between the bone tissue and the musculoskeletal system to enable the development of therapeutic regimens that target osteoporosis and sarcopenia simultaneously. In conclusion, the effects of exercise and nutrition on osteosarcopenia may suggest new prospects about the reduction of biomarkers which are secreted and act in both syndromes synergistically causing bone fractures and muscle degradation.
Author Contributions: Conceptualization and design of study, S.K.P. and D.P.; Methodology and data collection, K.P., G.V. and O.Z.; Interpretation and Analysis of results, S.K.P., G.V., E.T. and E.G.; Draft Manuscript preparation, S.K.P., E.G., E.T. and O.Z.; Supervision, S.K.P. and D.P. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
An Experimental Investigation on the Acoustic and Thermal Properties of Copper Reinforced Sustainable Foam
Organic foams with different proportions of copper powder are fabricated using a general mixing process. The physical and chemical properties and the sound absorption coefficient of these foams are investigated. The FTIR test results indicate the presence of different organic functional groups in the flexible organic sustainable foam. An acoustic shielding test compares the sound absorption coefficients. The Lee's disc method is used to determine the thermal shielding (thermal conductivity) of the fabricated samples. The sound absorption coefficient is found to increase with increasing copper percentage in the foam. The mechanical behavior of the reinforced foams is approximately the same across the different copper percentages and is similar to that of the unreinforced foam. The thermal conductivity is found to increase with the percentage of copper in the foam; thus the unreinforced foam has the better thermal shielding property. Keywords: Sustainable organic foam, Energy absorption, Sound absorption coefficient, Porosity, Copper Powder, Acoustic Interference (AMI) Shielding, Frequency, Thermal conductivity.
INTRODUCTION
In the modern world, noise is one of society's major problems; any unwanted sound is called noise. There are numerous cases where decreasing the noise level is of great significance. Loss of hearing is just one effect of continuous exposure to excessive noise levels. Noise can interfere with sleep and speech, and cause discomfort and other non-auditory issues [1][2][3]. Additionally, high levels of noise and vibration lead to structural failures as well as reduced life span in much industrial equipment. For instance, in control valves, the vibration caused by flow instability occasionally corrupts the feedback to the control system, resulting in extreme oscillations. The significance of noise issues can be understood by looking at the regulations that have been passed by governments to restrict noise production in society. Industrial machinery, air/surface transportation and construction activities are assumed to be the main contributors to noise production, the so-called "noise pollution" [4]. Therefore, thin, lightweight and low-cost composite materials that absorb sound waves over wider frequency regions are strongly desired. At present, the vast majority of acoustic protection materials are produced from synthetic fibers, which are hazardous to human health and the environment [5]. Thus a sustainable material is required, and organic foam can be that material. The organic foam synthesized here is tested for shielding of acoustic interference. The merits of using organic foam are that it is lightweight, flexible and eco-friendly. Four test samples are synthesized: one is a flexible organic foam without additives, and three are flexible foams with varying proportions of copper powder by weight. Various techniques are used to obtain the properties of the samples. An acoustic shielding test setup is used for testing the acoustic shielding capability of the material.
Global warming is, alongside noise pollution, another major issue [6]; there is therefore a need for a versatile material which has not only a good sound absorption coefficient but is also a bad conductor of heat, so that it protects a confined space both from noise and from outside heat. Sustainable PU foam is a bad conductor of heat [7], so both problems can be addressed by this material. Lee's method is performed to find the thermal conductivity of the samples, to ensure that there is no significant increase in thermal conductivity accompanying any increase in the sound absorption coefficient. Though PU foams have sufficient strength [8] for their current applications, their strength may change with the addition of different percentages of copper powder. This material can be used for many purposes, ranging from sound insulation for the walls of buildings or cars to the inner insulating material of airplanes and space shuttles. Despite being so versatile, the material is very cheap to manufacture, and fabrication is simple and takes little time. As it is sustainable, it is also less harmful to the environment, which makes it a very good material with a very wide range of applications.
EXPERIMENTAL PROCEDURE
A. Fabrication of Copper Foam
The flexible foams had a common mixture of isocyanates and polyol in the ratio of 2:3. First, organic foam with no additives was prepared by adding the two compounds into a cuboidal mold with one open face and letting the reaction take place. The reaction is exothermic and carbon dioxide gas is released. The mixture is allowed to expand, is then removed from the mold, and is cut to the required dimensions. Next, three different samples were prepared with different weight-by-weight percentages of copper powder, i.e., 3%, 5% and 10%, in a similar manner; with more than 10% copper powder the reaction could not complete and the mixture collapsed. The sample prepared for rigid foam containing copper powder was likewise unable to complete the reaction and collapsed [10].
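As a quick sanity check on the mixing proportions, the batch masses for a given copper loading can be computed directly. The sketch below assumes the copper percentage is taken w/w of the whole batch and that the 2:3 isocyanate:polyol ratio stated in this section is by weight; the 100 g batch size is illustrative, not from the paper.

```python
def batch_masses(total_g, cu_fraction):
    """Split a batch of `total_g` grams into copper powder (at `cu_fraction`
    w/w of the whole batch) and resin divided isocyanate:polyol = 2:3,
    as described for the flexible foams (ratio assumed to be by weight)."""
    cu = total_g * cu_fraction
    resin = total_g - cu
    iso = resin * 2 / 5       # isocyanate share of the 2:3 mixture
    polyol = resin * 3 / 5    # polyol share
    return cu, iso, polyol

# Illustrative 100 g batch at 10% copper loading:
cu, iso, polyol = batch_masses(100.0, 0.10)
print(cu, iso, polyol)   # copper ~10 g, isocyanate ~36 g, polyol ~54 g
```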
B. Acoustic shielding test Setup
The sound absorption coefficient is the fraction of the energy of a plane sound wave that is absorbed when it is incident on the sample material. The coefficient describes the ability of the material to absorb sound in a given frequency band. The sound absorption coefficient of the material is measured using an impedance tube apparatus, a system consisting of a solid brass tube with a speaker at one end and the material sample whose properties are to be measured at the other end. The system has a pair of microphones separated by a finite distance and connected to the brass tube with the help of microphone holders. These microphones are connected to a digital signal analyzer via signal conditioners (pre-amplifiers) and a data acquisition system. A function generator is used to power the speaker in the impedance tube. For the absorption coefficient measurement, a rigid backing is also used.
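With two microphone positions, the absorption coefficient follows from the measured transfer function H12 = p2/p1 between the microphones (the transfer-function method standardized in ISO 10534-2). The sketch below is a minimal illustration of that calculation, not the authors' processing code; the tube geometry, the speed of sound and the synthetic reflection coefficient are assumed values.

```python
import cmath
import math

def absorption_coefficient(H12, freq, s, x1, c=343.0):
    """Transfer-function (two-microphone) method, ISO 10534-2 style.

    H12 : complex transfer function p2/p1 (mic 2 is nearer the sample)
    s   : microphone spacing [m]
    x1  : distance from the sample face to the farther mic [m]
    """
    k = 2.0 * math.pi * freq / c                    # wavenumber
    R = ((H12 - cmath.exp(-1j * k * s)) /
         (cmath.exp(1j * k * s) - H12)) * cmath.exp(2j * k * x1)
    return 1.0 - abs(R) ** 2                        # absorbed energy fraction

# Synthetic check against a known reflection coefficient (assumed geometry).
freq, s, x1 = 1000.0, 0.05, 0.10
k = 2.0 * math.pi * freq / 343.0
R_true = 0.3 + 0.2j                                 # hypothetical sample
p = lambda x: cmath.exp(1j * k * x) + R_true * cmath.exp(-1j * k * x)
H12 = p(x1 - s) / p(x1)
alpha = absorption_coefficient(H12, freq, s, x1)
print(round(alpha, 3))   # recovers 1 - abs(R_true)**2 = 0.87
```

In a real measurement, H12 would be estimated from the cross- and auto-spectra of the two microphone signals, and the singularities at spacings equal to half a wavelength limit the usable frequency band.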
C. Thermal conductivity test setup
Lee's method is used to find the thermal conductivity of bad conductors. Its setup has two parts, as shown in Figure 11. The lower part, C, is a circular metal disc. The experimental specimen G is placed on it. The diameter of G is equal to that of C and its thickness is uniform throughout. A steam chamber is placed on C. The lower part of the steam chamber, B, is made of a thick metal plate of the same diameter as C. The upper part is a hollow chamber with two side tubes for the inflow and outflow of steam. Two thermometers, T1 and T2, are inserted into holes in C and B, respectively. Three hooks are attached to C, and the complete setup is suspended from a clamp stand by threads attached to these hooks.

RESULTS AND DISCUSSION
The presence of copper powder has a positive effect on the sound absorption coefficient. The sound absorption coefficient of the 10% Cu reinforced foam is the maximum among all the samples. The 10% Cu reinforced foam is more porous than the other samples; in fact, the porosity of the samples increases with the Cu percentage, increasing the inclusion of air gaps and thus the sound absorption coefficient. The sound absorption coefficient is also higher at higher frequencies: at frequencies approaching 6000 Hz the sound absorption coefficient tends to 1, while at frequencies near 200 Hz it is around 0.1. From Table 3, we can see that the thermal conductivity of the reinforced foams is greater than that of the non-reinforced foam. This can be accounted for by the fact that copper is a good conductor of heat, and its inclusion in the foam increases the net thermal conductivity of the sample. However, the increase in porosity with Cu percentage creates more air gaps in the 10% sample than in the 5% and 3% samples, so among the reinforced foams the thermal conductivity decreases with increasing Cu percentage [9].
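At steady state, Lee's method equates the heat conducted through the specimen with the heat lost by the lower disc C, giving k = m * c * (dT/dt) * d / (A * (T2 - T1)). The following sketch shows the arithmetic; the disc mass, specific heat, cooling rate and geometry are illustrative values, not measurements from this paper.

```python
import math

def lees_disc_k(m_disc, c_disc, cooling_rate, thickness, radius, T_hot, T_cold):
    """Lee's disc thermal conductivity of a poor conductor [W/(m*K)].

    m_disc, c_disc : mass [kg] and specific heat [J/(kg*K)] of disc C
    cooling_rate   : dT/dt of disc C at its steady-state temperature [K/s]
    thickness      : specimen thickness d [m]
    radius         : specimen radius [m]
    T_hot, T_cold  : steady temperatures of B (T2) and C (T1)
    """
    area = math.pi * radius ** 2
    return m_disc * c_disc * cooling_rate * thickness / (area * (T_hot - T_cold))

# Illustrative numbers only (brass disc, thin foam specimen):
k = lees_disc_k(m_disc=0.9, c_disc=380.0, cooling_rate=0.010,
                thickness=0.005, radius=0.05, T_hot=98.0, T_cold=78.0)
print(round(k, 3))   # ~0.109, the order of magnitude expected for a foam
```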
CONCLUSIONS
• Copper powder reinforced foams with three different proportions of copper were prepared. The flexible foams primarily contain polyol and isocyanates in 2:3 proportions. • The physical and chemical properties of the samples were characterized, and the sound absorption coefficient was measured using the acoustic performance test setup. The maximum sound absorption coefficient of the flexible foams with copper powder was observed to be around 0.9952 for the 10% Cu reinforced foam at 6300 Hz. The sound absorption coefficients were lower in the lower frequency range. • Though there was an increase in the thermal conductivity of the Cu powder reinforced foams, due to its greater porosity the thermal conductivity of the 10% Cu reinforced foam was lower than that of the 3% and 5% Cu reinforced foams.
• Finally, the 10% Cu reinforced foam has the best sound absorption coefficient as well as lower thermal conductivity, so it can be used for insulating building walls from sound and heat. It can also be used as insulation in the walls of vehicles or even spacecraft.
The Exploration on Cooperative Learning Mode in Mixed Classes of Ethnic and Han Students: Take Students of Hetian Normal College as an Example
As a beneficial supplement to the traditional learning mode, cooperative learning is more and more widely used in teaching practice and favored by researchers, teachers and students. In teaching activities, many front-line teachers also find that cooperative learning has brought immeasurable changes in improving students’ performance, interpersonal skills, emotional attitudes and the classroom teaching atmosphere.Taking Hetian Normal College as an example, this paper analyzes the class formation, influencing factors and function of cooperative learning in mixed classes of ethnic and han students, and concludes that the cooperative learning mode of mixed classes of ethnic and han students is an effective way to improve the quality of talent cultivation in ethnic areas.
Beizhou HE Hetian Normal College
One of the goals of the elementary education curriculum reform is to change the current situation in which curriculum implementation places too much emphasis on reception learning, rote learning and mechanical training; to advocate a learning style of active participation, happy exploration and diligent work; and to cultivate students' abilities of collecting and processing information, acquiring new knowledge, analyzing and solving problems, as well as communicating and cooperating. [1] However, in actual teaching practice, most teachers attach more importance to knowledge than to ability, and to results than to process. Students' learning methods have not been well cultivated, so it is very difficult for students to adapt to higher-level learning and lay a good foundation for lifelong learning. Cooperative learning is an important way to cultivate students' spirit of active exploration, cooperation and innovation. Therefore, it is particularly important to make good use of cooperative learning in classroom teaching.
Cooperative learning emerged in the United States in the mid-1970s. It has been praised as "the most important and successful teaching reform in the last decade" and has attracted more and more attention due to its remarkable achievements in improving the classroom atmosphere, broadly enhancing students' academic performance and promoting students' non-cognitive abilities. At present, an increasing number of scholars are studying cooperative learning. More researchers are also committed to applying cooperative learning to practical teaching, exploring operational methods of cooperative learning, and putting these methods into teaching practice. [2] Since 2011, some universities in Xinjiang have explored and practiced the integrated teaching of ethnic and Han students and acquired some experience; Hetian Normal College is among them. By the end of October 2020, there were 7,858 students from 29 different ethnic groups, of whom 5,493 were ethnic minority students, accounting for 69%, and 2,365 were Han students, accounting for 31%. In order to better promote intercommunication and mutual learning of excellent culture among students of all ethnic groups, Hetian Normal College began to implement the training mode of mixed classes and combined-class teaching for ethnic and Han students in 2015.
(1) Principles for the establishment of mixed classes of ethnic and han students
In principle, mixed-class teaching is carried out within the same grade and the same major. Students of all ethnic groups in mixed classes live in mixed dormitories and make friends in pairs, under a unified training standard, a unified training program and a unified teaching syllabus, with teaching in the national common language. Students of all ethnic groups are guaranteed equal access to teaching resources and equal opportunities to participate in activities. All students in the college are treated without discrimination in the reward and punishment system, graduation qualifications, etc.
(2) Student analysis
There are 35 students in total in the two experimental classes. According to the requirements of the pilot scheme for mixed classes, the author undertook the teaching of one of the mixed classes. Ethnic minority students of our college take a one-year Mandarin preparatory course, which improves their level in professional courses and their professional Chinese proficiency. The influence on the Chinese proficiency of science students is particularly noticeable compared with that of liberal arts students. According to Chinese proficiency, the Uygur students in the class fall into three categories. The first is students who took the entrance examination intended for Han students: they have studied in Chinese-language schools since childhood, and their Chinese proficiency is almost the same as that of students whose mother tongue is Chinese. The second is bilingual Uygur students, who in junior and senior high school took their Chinese courses in Uygur and Chinese and their other courses in Chinese. The third is students whose senior high school courses were all taught in Uygur; they have some difficulty following lessons in Chinese. More than half of the Han students in the class come from different prefectures in Xinjiang, while the rest come from inland areas.
(3) Construction of a cooperative learning talent training system
The teacher and the students should sort out and define their respective tasks and the goals to be achieved. They can communicate and interact with each other in various ways. Through a defined talent training process, they can finally achieve the four "together" goals required by the autonomous region: learning together, living together, acting together and growing together.
Figure 1 Cooperative teaching tasks and objectives
According to the students' ethnicity, gender, personality, family background and Chinese proficiency, the author divides the students into groups based on the principle of heterogeneity within a group and homogeneity between groups. In class, learning content is assigned to each group, and group members are arranged to prepare lessons, give lectures and answer questions. To ensure the classroom effect, give full play to students' central role and mobilize the participation enthusiasm of each student, it is sometimes necessary to make a clear internal division of labor in each group and give students one or several questions to speak on in turn, so as to enhance their team consciousness of mutual learning, mutual help and mutual improvement.
Analysis of Influencing Factors of Cooperative Learning
As for the influencing factors of cooperative learning, different scholars have different opinions; representative views include Slavin's three-factor theory, Kagan's four-factor theory and the Johnsons' five-factor theory. [3] To sum up, the following four aspects are the core elements that affect cooperative learning.
(1) Group Goal
Group goal can also be called positive interdependence. To achieve the learning goals of the group, all group members are required to pull together and work as one. The group goal becomes the internal driving force of cooperative learning groups. According to this factor, the grouping of members usually plays a very important role. To make the group collaborative, the teacher should fully consider the heterogeneity of members within a group and the homogeneity across groups when grouping. Group members have their own strengths and cooperate with each other, so that the class does not become a showcase for top students only. In this way, both students with good grades and students with poor grades can take an active part in cooperative learning. As time passes, all students can unconsciously develop a good habit of cooperative learning, fully involve themselves in it, and truly take charge of their own learning, rather than letting cooperative learning become a mere formality.
(2) Responsibility
Responsibility is what keeps cooperative learning going. A clear division of assignments gives each student a certain learning task to undertake, and the task implies responsibility. In order not to drag the group down, each student has to complete their own learning task; when other partners are in trouble, they should help them rather than stand by. The good completion of each learning task guarantees the achievement of the group's final goal. Only in this kind of relationship are the honor and disgrace of the group members interrelated, and only in such a learning atmosphere can students' learning interest be greatly improved; nobody is willing to lag behind.
(3) Social Skills
Among the theories of cooperative learning, only the Johnson brothers included social skills as a factor. The author believes social skills are an indispensable ability for cooperation; without them, cooperative learning would be fragmented and could not be carried out. Therefore, the teacher should teach students some communication skills before cooperative learning, so that students learn to trust and accept each other, clearly express their ideas, and gradually learn to study, live and play together with other students, especially students from different ethnic groups. All students should be humble, good at listening to others and able to resolve conflicts moderately; they can learn to "learn in cooperation, and cooperate in learning". In addition, in the learning process, ethnic and Han students can also gain friendship, which truly integrates them together, reaching the highest state of "heart to heart". As a result, the author believes that cooperation skills are reflected not only in the classroom, but also in after-class interpersonal communication. These social skills also help students get along well with each other, which is of great significance in a college with both ethnic and Han students.
(4) Degree of Fairness
In cooperative learning, it is necessary to ensure that every student can participate fairly in the teaching task and that every student has a fair chance of success. In the initial practice, the excellent students in some groups were always in charge of communicating and reporting, while the other group members did not take the initiative to participate in the discussion. To change this situation, the author asks everyone to be the "spokesperson" in turn, which gives each member the opportunity to communicate and report the results of the group discussion to the whole class. By participating in the communication and discussion, the teacher can guide or restrain some students in a timely manner. Meanwhile, the teacher's participation shortens the distance between teacher and students and helps the teacher better understand each group member.
(1) To improve learning efficiency and to better implement cooperation
When students join a cooperative group, they enter a unique small society. They must exert their individual initiative in the group, and develop and improve themselves to adapt to the small group with the group's help and by serving the collective. In cooperative learning, students should know, communicate with, and understand each other. Each student should put forward their views, opinions and grounds, and also analyze the viewpoints of other group members. By this means, they can explore and practice new knowledge as well as improve their learning efficiency. After class, the teacher can assign tasks or projects that the students are interested in. These assignments lead the students to practice, consult literature and communicate within the group, and the teacher gives guidance and comments on their performance. In addition to their cooperative value, these tasks can be distributed among group members, so that every member of the group participates jointly and has something to do. With a clear learning task, students can avoid the blindness of cooperative learning, fully experience the effectiveness of group cooperative learning, and enhance their cooperative awareness.
Journal of International Education and Development
Vol.4 No.8 2020
(2) To enhance students' self-confidence
Compared with Han students, ethnic minority students show the following characteristics in daily teaching: first, their oral Chinese expression is relatively poor; second, they lack theoretical knowledge of literature, art, history and psychology, making it difficult to understand the content and background of textbooks; third, different family education concepts lead to different ways of thinking, which makes their understanding and digestion of knowledge somewhat weak; fourth, their thinking is relatively simple and they are prone to emotional fluctuation; fifth, they are enthusiastic, optimistic, proactive and good at self-expression. In actual teaching, improper teaching methods may cause some students to study half-heartedly. They often lack ambition, study perfunctorily, cannot keep up with the course progress, gradually lose interest in learning, and give up on themselves. Some students fear difficulty and lack the spirit of assiduous study. The purpose of cooperative learning is to let every student take the initiative to participate in learning within a limited time, so that students can build confidence in independent learning, develop good learning habits, and form effective learning strategies. The greatest strength of cooperative learning lies in cultivating students' cooperative spirit and improving their problem-solving ability. Cooperation is a basic form of human interaction, and no one's development is independent of others. It is a teacher's bounden duty to cultivate students' ability to cooperate with others. For example, a chemistry experiment cannot be completed by the efforts of one person alone; only with cooperation and hard work can the experiment succeed.
During cooperation, everyone should exercise their wisdom to achieve win-win results. The significance of this cooperation goes far beyond learning itself. In cooperative learning there is a mutually beneficial relationship between students. For example, a "top student's" understanding of a problem may lie in a "weaker student's" zone of proximal development, and the words of a "top student" are easier for other students to understand and master than those of the teacher, because they share the same age characteristics. Through mutual help, students' logical thinking and language-organizing ability are greatly improved. [4]
(3) To reform the learning evaluation system and stimulate students' learning enthusiasm
Cooperative learning is rewarded and evaluated by the performance and achievement of a group. Different from previous individual evaluation criteria, this evaluation mechanism converts individual competition into group competition, promoting unity and mutual assistance among group members and enhancing individuals' intrinsic motivation to contribute to the group. The highest state pursued in cooperative learning is "not for everyone to succeed, but for everyone to make progress". This situation, in which everyone in the group participates and the groups compete with each other, also makes the relationship between students more harmonious. In practice, outstanding individuals and outstanding groups can be appropriately rewarded, which can also improve students' interest in learning, stimulate their enthusiasm and promote the next stage of cooperative learning. After a period of cooperative learning, groups can be adjusted appropriately to keep the balance of strength between groups, guarantee some students' opportunity to cooperate and exchange, and inject new strength into the groups to keep them full of vigor and vitality. At the same time, students' self-evaluation and mutual evaluation should be strengthened. Students should be evaluated from multiple perspectives, such as learning participation, task completion, cooperative performance and efforts made for the group. This changes the former situation of evaluation by the teacher alone: every student can become a "top student" in the eyes of teachers, everyone can participate and express their opinions, and a scene in which "a hundred schools of thought contend" is presented.
Summary and Suggestions
Admittedly, cooperative learning has many advantages. However, in practical application, teachers should choose appropriate teaching methods according to their own teaching experience, the students' situation, the teaching content, and local conditions. For a multi-ethnic university in Xinjiang, the purpose of cooperative learning between ethnic and Han students is not only to promote Chinese proficiency but, more importantly, to enable students of all ethnic groups to live, study, play and communicate together harmoniously, promoting the social stability and long-term peace of Xinjiang and truly achieving ethnic unity. This also requires the teacher to explore unceasingly, observe the students' situation and complete the student analysis. During the teaching process, teachers should pay attention to the use of teaching strategies and mobilize all positive factors to optimize teaching. Cultivating students' cooperative awareness is a long-term systematic project; the expected results cannot be achieved through the study of one course alone, and many teachers are needed to participate in the reform of teaching methods. Guided by advanced teaching ideas, teachers should first dare to try new teaching methods; only with practical proof can they have more say. Cooperative learning can better promote cooperation, exchange, mutual respect and mutual understanding between ethnic and Han students. This learning mode plays a positive role in
The Value of Scientific Knowledge Dissemination for Scientists: A Value Capture Perspective
Abstract: Scientific knowledge dissemination is necessary to collaboratively develop solutions to today's challenges among scientific, public, and commercial actors. Building on this, recent concepts (e.g., Third Mission) discuss the role and value of different dissemination mechanisms for increasing societal impact. However, the value individual scientists receive in exchange for disseminating knowledge differs across these mechanisms, which, consequently, affects their selection. So far, value capture mechanisms have mainly been described as appropriating monetary rewards in exchange for scientists' knowledge (e.g., patenting). However, most knowledge dissemination activities in science do not directly result in capturing monetary value (e.g., social engagement). By taking a value capture perspective, this article conceptualizes and explores how individual scientists capture value from disseminating their knowledge. Results from our qualitative study indicate that scientists' value capture consists of a measurable objective part (e.g., career promotion) and a still unconsidered subjective part (e.g., social recognition), which is perceived as valuable due to scientists' needs. By advancing our understanding of value capture in science, scientists' selection of dissemination mechanisms can be incentivized to increase both the value captured by themselves and society. Hence, policy makers and university managers can contribute to overcoming institutional and ecosystem barriers and foster scientists' engagement with society.
Introduction
Developing solutions to handle today's challenges such as the climate crisis, demographic changes, migration, or digitalization requires the recombination of knowledge from different public, scientific, and commercial stakeholders. To this end, knowledge dissemination is a necessary condition to make knowledge accessible to relevant stakeholders. However, there is a multitude of different dissemination mechanisms, which vary in their degrees of knowledge accessibility (i.e., the number of actors that can access the knowledge) and, consequently, in the value created and captured by the knowledge-using parties (use value) and knowledge-producing parties (exchange value). Stimulating (open) dissemination beyond the boundaries of academia and, thus, increasing the use value has become a central task for policy makers and scientific institutions (for example, as codified in universities' "Third Mission" or "Quadruple Helix" concepts) [1]. To achieve these organizational-level and ecosystem-level goals, it is crucial to recognize how individual scientists capture value from disseminating their knowledge, and why different outcomes are perceived as valuable for different reasons. Hence, our results add to the understanding of how and why scientists capture value from scientific knowledge dissemination. First, the subjective value comprises outcomes such as social recognition, reputation, or social acceptance. Second, these outcomes are considered valuable due to scientists' individual needs, such as the struggle for academic survival (e.g., position), ego-identity needs (e.g., social desirability), as well as the desire to make a societal impact. In contrast, objective value mainly comprises measurable outcomes (e.g., monetary rewards). In essence, the results show how scientists' needs are met by the objective and the subjective exchange value, explaining scientists' willingness to disseminate and their selection of dissemination mechanisms. Moreover, they contribute to understanding what drives scientists' engagement in the
scientific knowledge production in the first place.
These individual-level findings hold meaningful implications for both the organizational and the policy level. First, we contribute to improving our understanding of scientists' value capture processes from different scientific knowledge dissemination activities. We identify what scientists consider valuable beyond the monetary reward and, hence, add to an important and still under-researched aspect, i.e., the value of scientific knowledge dissemination from the scientists' perspective [14,15]. Second, our findings contribute to the discussion surrounding how policy makers, research funders, university managers, or institutions can incentivize scientists' engagement in Third Mission or Quadruple Helix activities, which aim to achieve high societal impact by fostering knowledge transfer between academia and society [1,16]. The opening up of knowledge dissemination transforms scientific knowledge into a commodity good [6,17,18], with substantial consequences for the value captured by society (i.e., the use value), due to an increased number of knowledge users facilitating new knowledge generation [2,3,6,15,19,20]. In other words, by disseminating scientific knowledge, other actors are able to use and recombine it and, thus, create additional knowledge (i.e., value) to address relevant societal challenges. Therefore, such (open) dissemination activities, which go beyond the current discipline-dependent evaluation schemes in academia, are able to increase both the value captured by the public and the value captured by the scientists. Third, we add to the small body of literature that studies individuals as the unit of analysis in the science context and how their cognitive and emotional behaviors play a role in the context of open scientific knowledge production and dissemination [3,21,22]. Not only do we underline the importance of paying attention to macro-level factors, but we also highlight the importance of considering the micro-foundations of scientific knowledge production, since individuals (e.g., scientists) are important decision makers. Understanding what scientists consider valuable allows policy makers and university managers to optimize incentive schemes to stimulate scientists' individual selections of dissemination activities that achieve higher societal impact.
Theoretical Background
This section starts by briefly reviewing the literature on value creation and value capture in the context of scientific knowledge production. In this vein, use value and exchange value are outlined. Following this, we describe the value creation and value capture processes in this context and identify inefficiencies that lead to our exploratory research question.
Creating and Capturing Value from Scientific Knowledge Production
Understanding value creation and value capture has received considerable attention in management research [2,3,7,23]. The underlying assumption is that innovations (i.e., new products, services, or processes) create value that is distributed among different stakeholders [24,25]. Thereby, value creation and value capture must be understood as interdependent processes [2,7]. To appropriate value from their innovations, firms need to apply value capture mechanisms (e.g., licensing, patenting, and sales) that allow them to realize innovation rents from a particular innovation and from subsequent ones [20,26].
Value can not only be created on an organizational level, but also on an individual, collective, and societal level [3,10]. Building on the understanding of the knowledge-based view, knowledge from individuals is considered an important resource in creating value [10]. Accordingly, scientific knowledge production has been considered a value creation process in previous work [5,10,18]. In the following, we consider scientific knowledge as the value created by an individual scientist [3,5]. Thus, we focus on value creation on an individual level, in the context of scientific knowledge production.
Capturing value from scientific knowledge production has received comparably less attention. One way to capture value from scientific research for society, the economy, and the scientists themselves [2,3,18] is its transformation into innovation. This transformation has been addressed by two major literature streams: first, university-industry collaboration, and, second, science-based entrepreneurship [4]. Whereas in the first case scientific knowledge (the value created) is exchanged with another economic actor (e.g., a firm), in the second case it might be transformed into an innovation by the scientist her/himself. Figure 1 graphically depicts the theoretical understanding of value creation and value capture, in general, and in the context of university-industry collaboration and science-based entrepreneurship in particular.
Value is captured by two types of actors when it is exchanged. While, in the case of an innovation, the exchange occurs between buyers of a new product or service and its producers [2], the buyers might be firms that decide to collaborate with a scientist in the context of scientific knowledge. However, describing value capture mechanisms and strategies first requires a definition of value. Building upon Teece's model [24,25] of the overall value captured by an innovation, Bowman and Ambrosini [2] differentiate between use value and exchange value. While the latter is the (monetary) price paid to obtain a good, the former is the buyer's surplus. The surplus describes the comparison buyers make between products, their needs, and the feasibility of other offerings, such as the comparisons that resource suppliers make between the deal with the firm and possible other deals. While these authors focus on the organizational level, Lepak, Smith, and Taylor [3] broaden this understanding by accounting for a societal and individual-level perspective. Value creation, thereby, "depends on the relative amount of value that is subjectively realized by a target user (or buyer) who is the focus of value creation-whether individual, organization, or society-and that this subjective value realization must at least translate into the user's willingness to exchange a monetary amount for the value received" [3]. Thereby, the value created must be a contribution that is perceived to be valuable by members of a target group [3,10]. Hence, the value created must exceed the perceived utility of any other alternative presented to the target group by either lowering the cost or creating a higher value. However, while the use value is considered to be subjectively perceived, the exchange value lacks such a subjective aspect [2].
The exchange of scientific knowledge happens through different knowledge dissemination activities. Typically, scientific knowledge is disseminated through scientific publications, conference presentations, book presentations, interviews, and so forth. Thereby, the selection of a dissemination activity is influenced by field-specific norms. Commercial value capture mechanisms that allow scientists to commercialize their knowledge, such as patenting, licensing, consulting, or academic entrepreneurship, are also applied across all fields [4,27]. In the context of university-industry collaboration and science-based entrepreneurship, this means that the right to use the knowledge is given (e.g., through licensing, consulting, and patenting) in exchange for money. Thereby, whatever utility the buyer perceives is the uniquely realized use value. This use value can differ for any actor who uses the scientific knowledge. The monetary value received by the scientists can be considered as a further realized exchange value. Hence, in the case of a publication, every reader realizes an individual use value, as does the publisher who normally owns the rights. In this case, the realized exchange value is the royalty fee the scientists receive from the publisher, based on the sales of the publication, if any. In the case of a licensing deal, the firm that licenses the scientific knowledge can create new innovations (use value in terms of future realized exchange value), while the scientists receive the licensing fees paid by the firm (exchange value). However, whether a scientist is willing to engage in the value creation process (i.e., knowledge production and dissemination) in the first place depends on the anticipated exchange value [3] (i.e., the anticipated value to be captured by the scientist), and not only on the pure ability to engage.
Value capture mechanisms, therefore, describe actions that allow scientists to capture exchange value from their scientific knowledge production and dissemination. These mechanisms can be structured according to their level of formalization [23]. Formal mechanisms include, but are not limited to, patenting, collaborative research, consultancy, or licensing, while informal mechanisms describe networking activities or ad-hoc advice for practitioners [4]. By applying these mechanisms, scientists are able to realize the exchange value from the disseminated scientific knowledge.
However, only a fraction of the dissemination mechanisms for scientific knowledge allow the scientist to capture such an exchange value. Scientific knowledge is a durable public good [18]. Its dissemination is a necessary condition for the exchange and recombination of information [18] and, hence, for the realization of use value and exchange value [2]. Considering that some dissemination activities are associated with a lower anticipated exchange value for the scientist, the question arises why such knowledge dissemination activities are used at all. This leads to the following dilemma: while disseminating scientific knowledge to more users would increase the use value (and, thus, the overall value captured), it does not necessarily increase the scientist's exchange value, which represents a low incentive to apply these mechanisms.
Theoretical Framework for Analyzing the Dilemma
Apart from commercialization through university-industry collaborations and science-based entrepreneurship, scientists most commonly disseminate their scientific knowledge through publications, conferences, or teaching. Understanding the exchange value as monetary rents received in exchange for the scientific knowledge leads to a paradoxical situation. Such dissemination strategies yield no or only a very limited exchange value. However, the anticipated exchange value needs to exceed a critical threshold for scientists to be willing to (further) create scientific knowledge in the first place and, thus, to create use value for other individuals, organizations, and society. Consequently, either scientists act irrationally, because they engage in a value creation process whose costs (i.e., the required effort) exceed the anticipated exchange value, which would be self-defeating; or the exchange value for scientific knowledge consists of more than monetary rewards. The value of scientific knowledge has received considerable attention [5,6,17,18]. Most authors have focused on the description of the (realized) use value of scientific knowledge. Hence, they argue what value applied vs. basic scientific knowledge has for society and organizations [17], or describe why economic actors invest in the creation of scientific knowledge [18]. One pioneering exception is Dedrick and Kraemer [5], who describe how the value created by science-based innovation is distributed among all stakeholders, including the national ecosystems and the scientists. They point out that awarded prizes and prestige can be considered rewarding for scientists.
However, knowledge about what is considered valuable by scientists remains scarce [5]. The largest body of research focuses on the challenging environment for young scientists (see, for example, a special issue of the journal Science in September 1999). It is no secret that the career decision to stay in academia is often related to several sacrifices, such as job insecurity due to short-term employment, limited work-life balance, lower average wages, above-average working hours, and, consequently, also a rather hostile environment for family development, especially for female scientists [28][29][30]. What, then, drives people to engage in scientific knowledge production? A few studies on scientists' work motivation [12,[31][32][33] provide initial answers. For example, Gibbs and Griffin [33] found that the main reason for staying in academia is the flexibility and freedom to research. Furthermore, the ability to engage in externally focused values (e.g., improving the societal status quo) was mentioned, as well as the influence on students. Combining these insights with the value capture perspective, we propose that the exchange value also consists of a non-monetary component.
In the following, we, therefore, want to explore what scientists consider as a desirable exchange value, i.e., what they want to receive in exchange for disseminating the scientific knowledge that they have produced, and what value capture mechanisms (i.e., dissemination activities) they apply for doing so. Identifying what is considered as a desirable exchange value also allows us to understand why the anticipated exchange value sufficiently drives the scientist's willingness to further engage in value creation processes (i.e., scientific knowledge production). Accordingly, our exploratory research aims to address the following research question: How do individual scientists capture value from their scientific knowledge production and dissemination activities?
Figure 2 depicts the process of value creation and value capture, with the red circles highlighting the foci of the study. We first want to explore what mechanisms (i.e., dissemination activities) scientists apply to capture value from their knowledge production; second, what value is captured by the scientists; and third, why scientists consider the realized exchange value valuable.
Methodology
Due to the exploratory nature of this study and the previously mentioned research question, a qualitative approach was applied to gather in-depth data and rich information on the phenomenon [34]. Qualitative approaches are known to be particularly useful for understanding the theory underlying the observed relationships in data [35]. In our case, exploratory research is considered to be the best option, since the inner content of value creation and value capture processes (what is happening in a real scientist's life) is an underexplored research domain that potentially shapes a new understanding of the phenomenon [36]. Therefore, given the "how" nature of the research question and the focus on the underlying factors associated with value creation and value capture in science, rather than studying them in isolation, a qualitative study with multiple phases of data collection is required [34]. Moreover, an inductive-deductive approach was chosen for this study. First, we inductively explore how scientists capture value from their knowledge production and dissemination activities. Second, we use value capture theory to make sense of the data and embed our findings. Using this mix enabled us to (1) make the best use of our empirical data (i.e., let the data speak for itself), (2) incorporate pre-existing theories that study this phenomenon, and (3) enrich pre-existing theory by adding novel explorations and interconnecting elements of other theories.
Data Collection and Research Context
Data was collected during 2017, using two inquiry techniques over three phases (see Figure 3).
Workshops: Two workshops aimed at providing participants with frameworks and tools to develop and implement mechanisms and processes for capturing value from their scientific knowledge. The workshops were meant to be an opportunity for participants to work on their own institutes' future approach to value capturing by developing knowledge, skills, and competencies.
In-depth interviews (11 scientists): The interviews aimed to better understand the process of value capture and to take a deep dive into different mechanisms, antecedents, and outcomes from a scientist's perspective.

Phase 1: The first workshop was designed to inform and educate participants about value capturing in science, and it helped the research team to get a better understanding of the phenomenon of value capture in science from the scientists' perspective. The information gathered in the first workshop helped to construct an interview guideline for the second phase, based on a deeper and more practical understanding of the phenomenon.
Phase 2: Building upon phase 1, we invited future participants of the second workshop to take part in an in-depth interview prior to their participation. Semi-structured interviews were conducted using an open-ended interview protocol. The semi-structured format allowed informants to offer their comments freely, which enabled us to collect in-depth and field-specific insights. In drafting the interview questions, we focused on mechanisms extracted from the literature, as well as on value creation and capture theory. However, as is common in explorative research, new factors and mechanisms started to reveal themselves during the interviews.
In total, we conducted 11 interviews with scientists, each taking between 1 and 1.5 hours (see Table 1 for interviewees' information). Interview participants were scientists from different fields, working in different institutes of a large research organization with a thematic focus on medicine, the life sciences, social sciences, and cultural studies. Participants originated from seven different countries: Austria, Belgium, Bosnia and Herzegovina, Hungary, Italy, Poland, and the UK. The research organization is one of the largest research institutions in Austria, with 30% of its budget being publicly funded.
Table 1. Interviewees' information.

Interviewee | Position | Research Field | Years after PhD
A | Administrative head | Health/life | 5
Prior to the interviews, interviewees were informed that the questions would mostly focus on their individual dissemination activities. However, while the questions related to rewards were brought up in accordance with each dissemination mechanism, the categorization of the type of reward was done in the analysis phase.
The interview guideline consisted of three sections. In the first part, the interviewees discussed in detail the value capture mechanisms they currently employ, before going into more detail on the antecedents of implementing such mechanisms and the expected outcomes. These open questions addressed, but were not limited to, (a) participants' unique careers (projects, research lines, perspectives, experiences, and technical details of their research); (b) identifying mechanisms and elaborating on each mentioned mechanism in detail (when, experiences, successes, and challenges before, during, and after implementing each mechanism); and (c) questions considering the different mechanisms mentioned earlier, such as "Why do you consider the [mechanism] to be valuable?", "What do you perceive as satisfactory about the [mechanism]?", "How do you think scientific and non-scientific communities evaluate the [mechanism]?", and "In your opinion, can you measure the value of the [mechanism] and, if yes, how?".
Phase 3: In the last phase, the research team discussed the results of the interviews with the participants and validated the main understanding of value capture processes in science. The topics discussed with the participants in this phase were: (a) capturing value from science; (b) Open Innovation search and collaboration approaches to creating and capturing value from science; (c) intellectual property (IP), IP rights, and strategies in Open Innovation and Open Science; (d) opportunities, risks, and contingency factors related to applying Open Innovation/Open Science as a scientist; (e) identification and selection of external partners for commercializing science; (f) opportunities and challenges involved in partnering with externals; (g) the role and value of tech-transfer offices in supporting the commercialization of science; (h) working with/using intermediaries and platforms for the commercialization of science (e.g., scientists as suppliers to platform challenges); and (i) good-practice examples and case studies related to external partnering in the commercialization of science.
The data from both workshops was collected and analyzed in a systematic way by means of observation (e.g., participants' presentations and flip charts presenting value capture strategies for their research). The workshop data was triangulated with the interview data. In total, we collected 60 h of workshop material and 15 h of in-depth interviews with scientists (approximately 300 pages of transcribed raw data).
Data Analysis Procedure
The transcribed data was then processed to provide a clean case for each participant. The initial analysis was mainly conducted by two researchers, based on triangulation of the data sources (workshops, documents, observations, and interviews) for each scientist. The first round of analysis was structured along the lines of the guideline used for data collection. In other words, we categorized the individual answers according to the thematic open-ended questions. This open and inductive coding approach contributed to obtaining a comprehensive and general picture of value capture mechanisms in science. All transcripts were reread multiple times with the following questions in mind: What mechanisms are used for capturing value from research? Why and how were these mechanisms used? This step was done with the least consideration given to predefined concepts and categories.
The second round of analysis involved a more detailed and analytical approach to each transcript. By iteratively analyzing data, literature, and concepts, different categories began to emerge [37]. This step helped us conceptually refine and connect each identified category to its relevant context.

We used descriptive codes to identify and cluster data related to each existing and emerging concept. We then drew on a set of theoretical concepts that reflect the interplay between the main emerging concepts, such as value, human behavior, motivation, and action.
Since the main purpose of our research was opening the black box of capturing value from scientific knowledge production and dissemination, we started to interpret what we considered to be valuable for scientists.Lastly, in the last analytical task, existing and emerging code concepts were categorized and shaped patterns related to different stages of value capture processes in science.
Validity and Reliability of the Data
Although our interviewees were selected from different fields and different nationality backgrounds, the issue of generalizability has always been present in our qualitative research. For example, while the scientists were from different fields, the sample comprises a smaller fraction of scientists from the social sciences and humanities (each n = 2). It is, therefore, worth mentioning that the main purpose of our study is to broaden theory and reflect on the phenomenon rather than to generalize from the sample to the population. In addition, pilot interviews were conducted to construct a secure and reliable basis in terms of content and duration for the formal data collection process. Lastly, one of the two data analysts did not participate in the data collection process and started the analysis from the raw transcripts. This provided a non-biased interpretation of the raw data, which was aligned with the first analyst's interpretations.
Analysis and Results
This section first highlights the design of value capture mechanisms in relation to their monetary and non-monetary outcomes. We then move beyond these mechanisms by shedding light on how they contribute to the value that scientists capture and "why" these mechanisms have been used by scientists as antecedents of value capture mechanisms. Our data indicates that dissemination mechanisms in science can be considered as value capture mechanisms. This is due to the characteristics of the exchange value that is realized by disseminating the scientific knowledge. Our data indicates that the realized exchange value is not only of a monetary but also of a non-monetary nature. The non-monetary exchange value is considered valuable by scientists due to the scientists' needs pyramid. Therefore, we open three black boxes of the scientists' value capture process: their dissemination mechanisms, their realized exchange value, and the underlying reasons why the value captured is considered valuable. However, since the nature of our analysis is explorative, some facets might appear more often than others during the open coding process. This does not necessarily imply a higher importance.
Mechanisms for Value Capture from Scientific Knowledge
Analyzing our dataset, we found formal and informal sets of value capture mechanisms (i.e., dissemination mechanisms) employed by scientists to capture value from their research. Formal sets of mechanisms can be identified as mechanisms that have a naturally formal structure. By formal structures, we mean the employment of dissemination mechanisms that bring monetary and non-monetary outcomes to individuals, institutions, and society. Our interviewees reported sets of formal mechanisms such as patents, publications, conferences, and teaching. These formal mechanisms were found to have both monetary and non-monetary outcomes for scientists. However, it is worth noting that formal mechanisms are discipline-dependent.
According to our interviews, it is institutional-level factors that influence the decision to choose a formal (vs. informal) mechanism in life and health science institutes. By contrast, in the humanities and social sciences, this decision seems to be more flexible and more influenced by individual-level factors.
"Towards the research community, they are disseminated almost exclusively in conference talks, article publications, book publications, and book reviews, so I guess literary studies produces literature. Then, to the non-research community, we do things like book presentations, or exhibitions in museums." (Interviewee I)

"[We disseminate] by publications, obviously. Then going to conferences. [...] We have the aim that everybody is able to go to the major conferences, and to present. [...] We always published consensus manuscript, as a result of this conference, like our recommendation how to classify a disease, what a stem cell is, or sometimes to recommend certain modifications of the standard treatment." (Interviewee A)

"[...] If there is a new method, which can be patented, and it can be used later by the scientific community, I think it can be interpreted as if it was directed to the scientific community." (Interviewee B)

Publication is the most frequently used formal mechanism to capture value from research for two reasons: first, because of the scientists' position in the scientific network and, second, because of the indirect monetary exchange value this mechanism creates for scientists. Indirect means that this mechanism brings about a better position for scientists in their scientific community, which promotes and enhances career opportunities.
In addition to formal sets of mechanisms, scientists reported that they disseminate their knowledge by employing various informal mechanisms. Informal mechanisms are identified as being driven by an informal structure or no structure. Informal structures are structures that are encouraged by the scientists' institutions, but they are not predetermined tasks of scientists. Mechanisms with no structure as their basis are solely driven by the scientists themselves, with no involvement from their institutions or their environment.

"Yesterday I got an invitation for this Science Slam in November. I should also present my project there; let's see if I can realize it." (Interviewee D)

While formal sets of mechanisms result in monetary and non-monetary outcomes, our data shows that informal mechanisms to capture value from scientific knowledge production have mostly non-monetary outcomes. Non-monetary outcomes are outcomes that have a mostly indirect impact on scientists' survival in academia.
Table 2 shows various formal and informal mechanisms that are used to capture value from scientific knowledge, together with their monetary and non-monetary outcomes for the producer of the scientific knowledge. For example, while the primary (**) outcome of patents is monetary (e.g., a license is purchased), patents also include a secondary (*) indirect non-monetary outcome (e.g., future career opportunities). To give another example, while the primary (**) outcome of media use is non-monetary (e.g., visibility by public organizations), it does not have a monetary outcome (-).
Exchange Value from the Scientist's Perspective
Our results indicate that the exchange value in science seems to be more complex than previously assumed in the literature on the commercialization of scientific knowledge. Although it mirrors the typical structure of value capture mechanisms in terms of monetary outcomes, new insights emerge into what scientists perceive as sufficient to engage in knowledge production (i.e., value creation) and dissemination. Besides the monetary outcomes, which we consider the objective exchange value, scientists recognize non-monetary rewards, which we perceive as the subjective exchange value (see Figure 4). Narratives of scientists' value capture mechanisms in their specific context are, therefore, considered to represent a dynamic structure that weaves a subjective exchange value together with an objective exchange value. These two elements combined indicate what individuals consider as a desirable exchange value when implementing any mechanism.
Our study indicates that subjective exchange values are a prevalent but unconscious part of the bigger picture of exchange value structures in science. This means that when scientists perceive value resulting from an action, they simultaneously acknowledge the subjective nature of that value. This expands the current understanding of value and its monetary nature in the science context. Hence, scientists' desire to capture value from their knowledge dissemination using formal and informal mechanisms can be considered as a recognition of the importance of satisfying their needs. These needs are ultimately fed by the subjective value rather than the objective value they receive.
Scientists' Objective Exchange Value
Based on our data, we identify a comparably small proportion of the exchange value that determines those outcomes that can be objectified into monetary rewards. Objective values are those related to the measurable output scientists receive from implementing value capture mechanisms, such as an increase in salary, career promotions, and research funding.

"[...] then, of course, it promotes the career, because you need publications in order to be able to apply for additional funding, or, in this case, for the prolongation of the cluster, for example, and then depending on the topic. Again, it is the contribution to the scientific field, so that you gain recognition, but you are also able to promote the work of others that can build up on your work." (Interviewee A)

"Whatever draws attention to your research helps you because these are the things that are quantified, citations [...]" (Interviewee C)

It is worth noting that, in line with the theoretical foundation outlined in value capture studies, our participants never mentioned basic salary as a monetary reward of their dissemination activities, but they did perceive salary increases or research funding as rewards.

"[...] if you get a grant, then you get an extra piece of that to support your salary. So, it translates directly. It is worth getting a grant because you are going to earn more [...] salary schemes that are in Austria are absolutely non-motivating, so I basically am motivated and propelled by my love of science and intellectual curiosity." (Interviewee I)

We found that individuals received less objective value than they had expected when capturing value from employing formal and informal mechanisms. However, there still might be a subjective judgement on what is perceived as an objective value for scientists.

"For example, if you are looking for a new job, or if you are writing proposals, the reviewers would see if you are really good. If you fit into this project, if you are the right person to work on this project. And then they look in the publications." (Interviewee E)
Scientists' Subjective Exchange Value
In the context of science, the subjective exchange value that individuals receive in exchange for disseminating their research is driven by cognitive and socio-psychological factors. The concept of subjective exchange value, however, is a broader term for non-monetary rewards resulting from a dissemination action in academia. Subjective value is, by nature, something positive (e.g., a feeling of satisfaction, confidence, or pride) [38]. In our study, we found that it is mostly the subjective exchange value, or the subjective judgement of an objective value, that satisfies scientists' cognitive and socio-psychological needs. Subjective value can be seen as the best available intuition about an objective action [38]. Therefore, it is not surprising to see that objective actions (formal and informal mechanisms) are first evaluated through a subjective exchange value. For example, one's willingness to appear on social media feeds the scientist's ego-identity status, which, in return, results in the feeling of satisfaction and of being recognized.

"With the general, how to say, environment, the feeling of the public towards your research, if this research is important for the well-being of the people, or if this research is just important for itself. Of course, it's always much better, if the people feel, and know that in the end, there will be something that affects their lives, or our lives in this case." (Interviewee A)

"I think that is very satisfactory... Then to get responses, and yeah, visibility I think is very satisfactory. It's a requirement. So, you are judged based on your publications. Whatever you have published is kind of yours, so to say. So, your publication list will always be your publication list. It is kind of like your output, your personal value. It is the way to sell yourself, of course, to people. This is how people are going to evaluate you, based on what you published." (Interviewee G)

Based on our data, we, therefore, argue that implementing dissemination mechanisms leads to capturing objective as well as subjective value. Moreover, scientists' perceived subjective value outweighs the objective value when implementing any dissemination mechanism. In the following, we explore why the previously mentioned subjective and objective exchange value is considered to be a sufficient driver for scientists to engage in value creation processes.
Scientists' Needs Pyramid
We go beyond describing research dissemination mechanisms and value capture in isolation and address why the realized exchange value drives individual scientists to engage in knowledge production and dissemination. We identified three categories of antecedents that drive mechanism selection, namely: survival in academia, ego-identity status validation, and societal impact. These needs form a pyramid in our results, labeled the "scientists' needs pyramid" (see Figure 5). In the pyramidal form, the different sizes are intended to express the different amounts of exchange value required to satisfy these needs.
Academic Survival
Our data show that the main driver for producing and disseminating knowledge is survival. We found that scientists' first and most important driver to use both formal and informal strategies is to survive in their academic life and continue research in their core area of interest. The desire to meet academic career goals serves as the basis of value capture from scientific knowledge (e.g., through research funds or promotion). For example, disseminating knowledge by means of attending conferences secures the individual's unique position in their network and, therefore, signals to competitors and enhances individual bargaining power when it comes to opportunities. In the context of science, considering individuals as separate entities explains how this strategy creates an isolation mechanism in the individuals' own network.
"Well, the conferences are something really, really big and we do not have articles despite maybe the first idea would be to connect it with exhibitions. [...] It is really an important moment for the life of a scholar. In a conference, my research is evaluated by my colleagues, and also, this is the situation, this is the moment where I can make relationships with other colleagues, other scholars. Maybe organize another conference in two years, or a collaboration, or a book together. Conferences are absolutely one of the most important aspects of our research, absolutely." (Interviewee J)

"Sometimes it is very valuable to have international partners. Also, in some cases, like we have also been able to found societies, for example, that are international societies, for example placenta stem cells. We kind of bring the community closer, so to say. There are few people working in this field worldwide, it is a way to bring them closer together. Then this is a nice way of meeting them again at conferences or making our own meetings and conferences." (Interviewee G)

Most of our interviewees use publications as their main knowledge dissemination mechanism. However, they do not primarily aim to share their research insights but rather use publications to make themselves identifiable to their peers and, thus, generate career opportunities.
"It is always good to be recognized by the simple fact that somebody considered your work valuable of publishing, and we, of course, look into it to publish either in top journals [...] then, of course, it promotes the career, because you need publications in order to be able to apply for additional funding, or, in this case, for the prolongation of the cluster, for example." (Interviewee G)

Interestingly, the story is similar for informal mechanisms. One interviewee reported that the ultimate driver for presenting a book to the public is not only about getting noticed by the larger community but rather about absorbing public funds to continue research.

"If we consider that our research is important, and we ask for money for that because we need money, less than positions, but we still need money, and we ask societies, companies, or governments to fund our research, we should demonstrate that this research is important. I know that it is important but not enough that I know, I have to demonstrate it. The only way to demonstrate that our research is important is to show an interest among the society." (Interviewee J)

From the resource-based view, the survival and performance of an entity strongly depends on its ability to leverage distinctive capabilities that lead to competitive advantages [39]. In the context of science, these capabilities are translated into research output for each individual and, thus, their survival in their scientific community. In our exploratory study, disseminating research through formal and informal mechanisms first and foremost serves to create an entry barrier for other individuals, which increases the likelihood of career survival of established scientists.
Shifting attention away from macro-level factors such as institutions and the economy, individuals develop their own survival mechanisms to sustain themselves in the science industry. Capturing value from science strongly influences individuals' survival. However, a big question arises here: should academic survival be the main motivator for utilizing value capture mechanisms?
Scientists' Ego-Identity Status Validation
The second category of underlying reasons for the perceived importance of subjective exchange value is scientists' validation of their own ego-identity status. We found that two types of ego-identity validation processes have a direct impact on the utilization of value capture mechanisms among scientists: personal ego-identity status and professional ego-identity status. Personal ego-identity refers to individuals' own definition of "who I am," while professional ego-identity status explains both one's awareness of being an employee doing a particular job and one's identification with one's own group and the social categories to which s/he relates by means of her/his job [40].
Recognition by the scientific community, and then by society, is found to be one of the main pillars of ego-identity status validation when it comes to using value capture mechanisms. Especially in employing informal mechanisms, scientists' work being recognized by the public was the main antecedent.
"They [book presentations] should help people to get to know my name, and they should get people to know my research, 'Oh, I did a book presentation,' to people." (Interviewee I)

"It is important that the public knows what research is doing or what it can actually do. Yeah, it is always tricky to find something that we could present there because many topics are not good to, yeah, as I said, grab the attention within a few seconds or minutes. But it is always very nice to go there and talk to mainly kids or teenagers." (Interviewee E)

Our data shows that appearing in public places, giving public talks, presenting research to the public, and appearing in the media satisfied individuals' desire for social approval. The reason behind these types of behavior is explained as the normative social behavior of human beings: "To the extent that injunctive norms are based on individuals' perceptions about social approval, an underlying assumption in the influence of injunctive norms is that behaviors are guided, in part, by a desire to do the appropriate thing" [41]. For example, an interviewee claimed that the reason why publishing a book was considered valuable is the "ego-boosting" effect of the action. Another important factor that is strongly associated with ego-identity status is self-efficacy. Self-efficacy is defined as a personal judgement of "how well one can execute courses of action required to deal with prospective situations" [42]. Moreover, according to Cervone [43], individuals actively evaluate the relation between their perceived skills and the demands of tasks when thinking about their capabilities for performance. We observe various elements of self-efficacy when scientists explain their motives for involvement in dissemination activities.

"But by putting words on the piece of paper, you have to make sure that you are certain about what you are writing. So I do a lot of research to make sure that what I am writing is waterproof... is watertight, is foolproof [...] And by doing this, I also educate myself a lot, because I read about things that I, otherwise, maybe would not read, just to be really sure, and this gives me a lot of satisfaction because I am constantly educating myself and I really enjoy writing my own paper, or correcting somebody else's paper, to make sure that the structure is really perfect, because I really enjoy [...]" (Interviewee C, formal mechanism)

"Probably, because it is just again a skill training for presentations but, finally, if you get to the audience, the testimony then you have to show them that you are the man for the project. That could be quite good." (Interviewee G, informal mechanism)

Hence, some scientists apply various types of dissemination mechanisms to perform a self-evaluation of their performance and their capabilities. In previous studies, self-efficacy was found to motivate scientists to perform research. Bandura [44] claims that research done by faculty needs noticeable creativity, and that scientists' motivation is built on a strong sense of efficacy that their efforts will be successful, which also depends on field-specific demands. In our case, self-efficacy was found to be highly relevant in disseminating research results.
Societal Impact
In the context of science, we tend to assume that having a societal impact is potentially an antecedent for implementing value capture mechanisms. However, in our data, we found limited evidence of this. The desire to make a societal impact was not a primary driver for our study participants when answering the question of "why do you use certain mechanisms?".

(Footnote 3: A habilitation is the highest qualification level issued by universities and is a requirement for full professorship in many European countries.)
Societal impact is mainly being driven by individuals' personal belief in research as a public good. In this vein, the outcome of research must directly benefit society. In our dataset, two scientists reported using media to create societal impact.

"[...] So, to bring awareness to the public. But the content, what we are actually doing. Well at the end, it is public money that we use. And yeah. Especially when there are elections like there have been now. There might be changes in how big the share that goes to research. And then if people have no clue what research is actually doing for them, they cannot understand why they should give us a share." (Interviewee E)
Conclusions and Implications
This section briefly summarizes our six major results and discusses emerging theoretical contributions as well as practical implications for scientists, university managers, policy makers, and research funders.
First, scientists use dissemination mechanisms to capture value from the scientific knowledge they have produced. By disseminating their scientific knowledge, scientists empower academic and non-academic actors (e.g., the general public, firms, and policy makers) to capture use value from their knowledge if users pick it up. Second, the realized exchange value consists not only of a monetary dimension, as conceptualized in previous studies [2], but also of a subjective dimension that includes social recognition, reputation, and the validation of ego-identity status. This finding is in line with prior research indicating that softer factors such as access to knowledge, reputation, or other non-monetary rewards might represent resources and, thus, value on their own [15]. Third, the realized subjective exchange value is considered valuable due to scientists' needs. Figure 6 illustrates this relationship. While the objective exchange value (direct or indirect monetary rewards) serves primarily to satisfy scientists' needs for academic survival, it also satisfies ego-identity needs. Receiving a meaningful grant can, for example, provide scientists with the desired funding to increase the chance of academic survival while, at the same time, it feeds scientists' self-efficacy needs. Furthermore, while the realized exchange value can be clearly differentiated into objective and subjective rewards, the rewards' effect on satisfying needs is subjective (indicated by the blurred line in the needs pyramid in Figure 6). Based on our data, we argue that scientists ascribe different utilities to different types of rewards (e.g., while some scientists might ascribe a high utility to social recognition or a salary increase, others might not). This perception might also be driven by disciplinary differences in what is considered in the evaluation scheme for scientists.
Fourth, based on the previously mentioned findings, not only traditional mechanisms (e.g., patenting and licensing) can be seen as value capture mechanisms; rather, all kinds of dissemination mechanisms are able to realize exchange value, both objective and subjective, to different degrees. Thus, the overall picture of value capture mechanisms and realized exchange value needs to be considered as being more complex than previously assumed. Fifth, based on our results, neither the subjective nor the objective exchange value triggered societal impact. Surprisingly, societal impact was not explicitly mentioned as an underlying need. This becomes especially critical regarding universities' Third Mission efforts. This might be because societal impact moves more into the background when compared to academic survival and ego-identity needs, so that the effects of the realized exchange value are not consciously observed. Sixth, the findings of this exploratory study emphasize the importance of considering individual-level factors when researching value creation and value capture processes, which is in line with Lepak, Smith, and Taylor [3]. Individuals, in this case scientists, are the ones deciding whether to further engage in value creation processes and which value capture mechanism to apply. Hence, their cognitive and emotional processes play an important role in the overall scientific system. Our findings underline the importance of not only paying attention to macro-level factors, but also considering the micro-foundations of scientific knowledge production, as individuals (e.g., scientists) are important decision makers.
These findings lead to three theoretical contributions. First, our findings contribute to the understanding of value capture in science while focusing on the dissemination mechanisms, the realized exchange value, and the circular relationship with engagement to create value. If value capture rationales are applied in the context of scientific research, realized exchange value cannot be considered in monetary terms only. This would lead to an insufficient driver to further engage in value creation (i.e., scientific knowledge production). For scientists, the largest fraction of realized value is subjective. Since the realized exchange value influences the anticipated exchange value [3], and this is a major driver to engage in knowledge production, it is critical to assess whether the realized exchange value is able to satisfy the underlying needs. Our findings uncover these needs (i.e., academic survival, ego-identity, and societal impact) that influence the individually perceived utility of the realized subjective exchange value. In turn, this individual utility is influenced by the personal needs structure and environmental factors such as disciplinary habits. Support for these findings can be found in the work motivation literature, pointing out the importance for scientists to meet basic needs before their strong need for self-actualization can be pursued [12,32]. In addition to this literature stream, taking the value capture rationale as a point of departure allows us to propose a categorization of value capture mechanisms in the context of science. This also adds to the literature on knowledge production processes by shedding light on underlying reasons for producing and disseminating scientific knowledge and, thus, overcoming the struggle of transferring scientific knowledge into practice [21]. This understanding helps design more beneficial negotiation opportunities, which leads to successful exchange and underlines the central role of universities in the knowledge production
process [45]. This, in turn, is essential for the realization of exchange value (captured by the knowledge-producing scientist), as well as the use value (captured by knowledge-using actors such as the general public). Please see Figure 7, which summarizes our contributions toward the understanding of value capture in science. The second theoretical contribution adds to the discussion about the scientist's social engagement and contribution to Open Science, the Third Mission, and the Triple and Quadruple Helix. Open Science, that is, open knowledge production (e.g., citizen science) and open knowledge dissemination (e.g., open access journals) [46], can be described by its degree of openness, based on the possibility to participate and on the disclosure of intermediate inputs [46]. There is an increasing amount of evidence that the dissemination of scientific knowledge in terms of Open Science, i.e., sharing, reusing, recombining, and accumulating knowledge, is more rewarding for individuals, institutes, a research field, and organizations compared to disclosed knowledge [47,48]. Discussing this reward against the background of differentiating use value and exchange value allows for a more precise incentivization for value creation and a better understanding of value distribution. Considering scientific knowledge as a commodity good allows us to disconnect the realized use value from the exchange value. If the knowledge is freely available, each additional user of the knowledge increases the accumulated use value without increasing the exchange value, if the exchange value were purely objective. But considering the subjective dimension of the exchange value opens opportunities to save scientists a part of the value, without decreasing the value captured by users. For example, the citation system recognizes this relationship. While citations have no costs for the user, the subjective exchange value for the scientists positively influences their external performance assessment. By
making the subjective value visible, it becomes a currency that scientists can use to meet their need for academic survival. This, in turn, leads to further engagement to create value that can be captured as use value for societies, organizations, and individual users. In line with the understanding of generative appropriability [20], users of scientific knowledge can then create value on their own (e.g., by contributing solutions to deal with today's grand challenges). In terms of open knowledge production, the depicted findings may be able to lower barriers to openly collaborate and share data or intermediate results, but also to involve the public in science processes to increase their scientific literacy. If scientists can be sure of capturing an appropriate piece of the value cake, their fear of not meeting their needs for academic survival and ego-identity can be decreased. Understanding subjective exchange value as a commodity allows for this inclusion of more actors in the value creation process, since they do not need to split the realized subjective exchange value. This is because the subjective exchange value depends on their own utility function. Simultaneously, sharing scientific knowledge among other scientists makes problem solving more likely and efficient [49]. Hence, it reduces the necessary effort and resources (e.g., time) to create use value (e.g., knowledge). Concluding, based on our findings, we argue that openly creating and disseminating knowledge increases not only the use value of scientific knowledge (which is a major aim of Third Mission activities), but also the realized exchange value. Nevertheless, appropriate ways of making this subjective value more tangible are required.
Third, our findings contribute to the recent discussion on the influence of monetary and non-monetary rewards for knowledge workers by highlighting the importance of non-monetary reward systems driven by intrinsic motivations [50]. Our results indicate that salary supplements in any form can be considered as monetary rewards for engaging in Third Mission activities. However, they distinguish between the influence of basic salary versus salary supplements, which is how Frey and Neckermann [51] differentiated money and awards in their work. According to these authors, while money may bring recognition and status, awards are more effective. This is due to the fact that monetary rewards are not publicized, and knowledge of differences in basic salary is restricted to few, if any, close colleagues. In line with this, and with the scientists' needs pyramid (see Figure 5), this provides a potential explanation why the basic salary might not function as a monetary reward driving scientists to engage in Third Mission activities, while the effect of awards (e.g., research funding) is undeniable. Likewise, dissemination activities that support universities' Third Mission, such as public engagement, do (for now) limit the captured exchange value as long as the scientists' needs pyramid remains disregarded.
Our findings also bear important practical implications. For scientists, the awareness of this subjective exchange value might help in recognizing the value they receive from their work. The awareness that subjective exchange value is dependent on the individual utility function implies more control and options to receive a value in exchange for the knowledge production process. Furthermore, it allows a more precise estimation of the anticipated exchange value. In other words, being consciously aware of the subjective exchange value increases the actual exchange value scientists receive in exchange for their knowledge. This might increase their willingness to engage in value creation processes in the first place and affects their perceptions of competition arising from Open Science policies. Since the size of the use value is not directly related to the scientist's captured exchange value, new negotiation potential can be exploited.
For policy makers, research funders and university managers, these findings highlight a responsibility and a chance to change the current practices of scientific knowledge production and dissemination. First, the indication that scientists' willingness to create value is mainly driven by their need for academic survival is alarming. Although we do not want to draw any conclusions on scientists' performance, the current incentive system evokes pictures of gladiatorial combat. While many scientists drop out of the system due to a lack of objective exchange value (i.e., money, a job), the subjective exchange value is also hardly considered valuable by outsiders. Therefore, new metrics that account for the subjective value and make it visible to outsiders are needed. Whether objective or subjective exchange value is realized seems to strongly depend on the dissemination strategy. However, the type of exchange value is currently not related to the quality of the scientist's work or the created use value. Policy makers should, therefore, be aware of the relationship between value creation, value capture, and the scientists' underlying needs. Second, scientists who contribute to creating substantial use value might currently drop out of academia due to a lack of sufficient objective exchange value, which is needed to survive in their academic career. Consequently, universities with a strong focus on Third Mission and Triple/Quadruple Helix efforts should particularly pay attention to avoiding such drop-outs. From a value capture perspective, scientists may need to be considered as entrepreneurs engaging in scientific knowledge production rather than employees. Third, the argument that salary is not directly linked to knowledge production and, hence, value capture, opens negotiation potential for appropriate payment that reduces the need to struggle for academic survival. Policy makers can increase the anticipated subjective exchange value to trigger dissemination strategies that
yield higher value for society, which accomplishes the underlying aim of the Third Mission. Lastly, we highlight the need for developing and implementing novel capability building activities to raise scientists' awareness about different value capture strategies, their consequences, and relevant boundary conditions. We believe this is particularly important in training junior scientists and recommend integrating related discussions into the design of PhD programs. Building on recent insights on innovators' preferences for long-term engagement with scientists to collaboratively develop solutions for future, yet unknown problems [52], such capability building activities may need to pay particular attention to value capture strategies in the context of science-based innovation. Ultimately, this can increase the share of knowledge being picked up for innovations and, thus, create a sustainable societal impact.
Limitations and Future Research
This study has limitations that will hopefully motivate future research efforts. First, we apply an exploratory qualitative approach. As a next step, a large-scale validation study is required to test strategic patterns and contingencies influencing the scientists' strategic selection of appropriate value capture mechanisms. Identifying direct and indirect effects of different dissemination mechanisms on use and exchange value might provide deeper insights for scientists and policy makers. The simultaneous assessment of the consequences of certain mechanisms for the use value and the exchange value can provide meaningful insights for the creation of future incentive structures to foster Open Science and the Third Mission. Second, while this study provides a first overview of a formal and an informal dissemination mechanism, future research would provide additional insights by further differentiating these mechanisms. For example, publications might vary regarding their degree of accessibility to the public (closed vs.
open access), which leads to different degrees of use value, or they might vary in terms of recognition by the research community (e.g., a publication in a top-tier journal might yield a higher exchange value). Third, this study's sample covers a large variety of different nationalities and scientists at different career stages. However, with the exception of some participants from the humanities and social sciences, most participants come from fields related to the biomedical scientific disciplines. This disciplinary concentration was suitable to observe heterogeneity in the applied value capture strategies. However, the observed differences between scientists from this field compared to scientists from the humanities and social sciences require further studies focusing on the contingencies resulting from research field related differences. For example, evaluation schemes for tenure positions might vary and, consequently, affect the value scientists capture from (un-)recognized dissemination activities. Diversity in terms of the nationalities of the participants is considered as less limiting (compared to their disciplines) due to the high levels of mobility among scientists and an increasing homogeneity regarding dissemination strategies across the world (e.g., publications in the same publishing houses). Fourth, our sample of study participants mainly consisted of scientists without a permanent position. Considering the importance of academic survival expressed by the scientists in our sample, we urge future studies to consider scientists who already have a permanent position. It would be highly interesting to investigate how scientists' needs and, consequently, what they consider as valuable, change with this event. It is very likely that there is also an effect on the selection patterns for value capture mechanisms and different valorizations of objective and subjective exchange values. Fifth, the explorative nature of the research and the semi-structured interviews limit
researchers in discovering factors if they do not appear during the course of the interview. Future research needs to address other relevant behavioral, institutional, and field-specific factors that might influence knowledge dissemination mechanisms by scientists. Sixth, in line with this, we call for future research
Figure 1 .
Figure 1. Graphical summary of the process of value creation and value capture in general and from scientific knowledge. Source: own illustration adapted from Bowman and Ambrosini (2000) [2].
2
In relation to the first footnote, a tenure position and salary are considered additional values (indirectly) captured through classical dissemination activities.
Figure 2 .
Figure 2. Theoretical conceptualization including the foci of this exploratory study. Source: own illustration adapted from Bowman and Ambrosini [2].
Figure 3 .
Figure 3. Three phases of data collection. Source: own illustration.
Figure 4 .
Figure 4. The objective and subjective part of the realized exchange value in science. Source: own illustration. Please note: the different sizes of the two fields illustrate the result from our study that scientists tend to receive a higher subjective than objective exchange value from disseminating their research. The fraction of the subjective versus objective value, however, does not indicate a specific ratio.
"[ . . .] And then I thought, okay maybe for my habilitation 3 I can really start writing a book, and I've already a few, very big book chapters.[ . . .] this is like an ego booster for me, to know that... yeah, I can honestly say this, [ . . .]... because it's an intellectual challenge."(Interviewee C)
Figure 6 .
Figure 6. How the subjective and objective value satisfy scientists' needs. Source: own illustration. Please note that the fraction of the subjective versus objective value does not indicate a specific ratio.
Figure 7 .
Figure 7. Conceptual model of value capture in science.Source: own illustration.
Table 2 .
Value capture mechanisms in science. Note: Asterisks indicate the strength of the relationship. ** Primary influence, * secondary influence. The types of outcome presented in the table are not exhaustive.
"I think that is very satisfactory… Then to get responses, and yeah, visibility I think is very satisfactory. It's a requirement. So, you are judged based on your publications. Whatever you have published is kind of yours, so to say. So, your publication list will always be your publication list. It is kind of like your output, your personal out of your personal value. It is the way to sell yourself, of course, to people. This is how people are going to evaluate you, based on what you published." (Interviewee G) | 2019-08-15T22:24:47.410Z | 2019-07-24T00:00:00.000 | {
"year": 2019,
"sha1": "3920f9a2e0eea9b4d9885445b094e1ff1a8215aa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-6775/7/3/54/pdf?version=1563951270",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7589ad9331fe1d64c92c9784288adad5686bbca8",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science",
"Business"
]
} |
58940030 | pes2o/s2orc | v3-fos-license | Study of Word Prediction for Utterance Support System
We studied a new utterance support system running on an information terminal for people who have speech handicaps. To make the system faster and more effective, we implemented word prediction and added two functions: a fixed phrases dictionary containing phrases often used in daily life conversations, and a class 2-gram built from the co-occurrence frequencies of parts of speech. We then carried out evaluation experiments against other input methods.
Introduction
Speech is very important for communicating with others in our daily lives. However, because of speech handicaps, some people find this difficult. The main alternatives for such people are writing and sign language, but both are difficult to master, and writing requires carrying pens and paper at all times (1). Nowadays, many new utterance support methods are being studied. For example, "GlovalVoice voice support" is a communication tool that works on a pad (2): the user inputs the desired words, and the pad outputs them as a voice using voice synthesis.
Our main purpose is to study a new utterance support system using an information terminal (e.g., smartphone, pad). To make this system faster and more effective, we studied a word input method and voice synthesis, focusing in particular on word prediction. Word prediction is a word conversion method that predicts the user's desired words from the small part of the word entered by the user (3)(4). To make the word prediction faster and more effective, we used 1) a fixed phrases dictionary and 2) a class 2-gram as additional functions. The fixed phrases dictionary includes phrases often used in daily life conversations, and the class 2-gram is a 2-gram built from the co-occurrence frequencies of parts of speech (POS) in Japanese. We carried out evaluation experiments to measure the efficiency of both the fixed phrases dictionary and the class 2-gram.
Whole block diagram of the proposed utterance system
This system consists of several parts, shown in the whole block diagram (Figure 1).
The input part processes the Roman character codes generated by the user's keyboard input and passes them to the word prediction part. Information terminals such as smartphones have their own GUIs that users generally use to input words or phrases, but we used Linux's line input method, which may be more difficult than other input methods.
The word prediction part predicts the words or phrases the user desires by using the Roman character input processed in the input part. The details of this part are given in Chapter 3. The voice synthesis part takes the words or phrases selected by the user in the word prediction part and outputs them as voice. Several kinds of voices will be available, and the user can choose the one he or she likes. However, we did not evaluate this part in this paper.
Detail of the word prediction part
In this chapter, we describe the details of the word prediction part. Word prediction is a word conversion method that predicts the desired words or phrases from the small part of the word entered by the user. Nowadays this method is used in many information terminals that have a small number of keys. Figure 2 shows the details of this part.
The user's character input is used to search for words and phrases in the dictionaries. The dictionaries contain not only words and phrases but also the POS and frequency of each entry, which are used to sort and predict candidate words and phrases. The sorting part sorts the candidates according to word and phrase frequency and shows them to the user for selection. The class 2-gram is used to change frequencies and to generate the next candidates. The selector part shows the sorted candidates and the generated next candidates, and the user chooses the desired word from them. We describe the details of 1) the dictionaries, 2) the word and phrase frequencies and 3) the class 2-gram in the following sections.
Figure 2. Word prediction part
3.1 Dictionaries
There are two dictionaries in this word prediction (single phrase and fixed phrases), and the user's input directly searches both of them to show candidate words and phrases. We describe both dictionaries below.
(a) Single phrase dictionary. We mainly used the SKK dictionary (M size) (5) to build this dictionary. The SKK dictionary has many nouns but relatively few entries for other POS, so we applied morphological analysis to the Japanese example sentences of an English conversation learning site using Chasen (6). Morphological analysis parses sentences into words and identifies the POS of each one. We added these words and their POS to the dictionary, bringing the total number of words to 13,798.
The single phrase dictionary consists of 1) number, 2) pronunciation, 3) result word and 4) POS. Table 1 shows a concrete example of this dictionary. The pronunciation is used to search for words, and all entries are written in Roman characters. Roman character input can show candidates after fewer keystrokes and makes it easier to correct misspellings than Japanese input. The pronunciations were generated with kakasi (7) and written in Kunrei-shiki. Result words are shown to the user as search results. The POS is used to change word frequencies in the dictionary and to show the next candidate words. (b) Fixed phrases dictionary. The fixed phrases dictionary consists of daily life conversation sentences; Table 2 shows an example. We made this dictionary to decrease the number of user inputs.
There are 550 phrases in this dictionary, taken from Japanese example sentences on Japanese learning sites for non-Japanese speakers. The structure of this dictionary is the same as that of the single phrase dictionary.
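As a rough illustration of how a pronunciation-keyed lookup over such a dictionary could work, the sketch below searches toy entries of (pronunciation, word, POS, frequency) by Roman-character prefix. The entries and function names are hypothetical, not taken from the SKK dictionary or the actual implementation.

```python
# Minimal sketch of a prefix search over a word-prediction dictionary.
# Entries are (Kunrei-shiki pronunciation, surface word, POS, frequency);
# the sample data below is illustrative only.
DICTIONARY = [
    ("arigatou", "ありがとう", "interjection", 12),
    ("aruku", "歩く", "verb", 3),
    ("asa", "朝", "noun", 7),
    ("asita", "明日", "noun", 9),
]

def search_candidates(prefix):
    """Return entries whose pronunciation starts with the user's input,
    sorted by frequency (most frequent first)."""
    hits = [e for e in DICTIONARY if e[0].startswith(prefix)]
    return sorted(hits, key=lambda e: e[3], reverse=True)

print([word for _, word, _, _ in search_candidates("as")])  # ['明日', '朝']
```

Searching both the single phrase and fixed phrases dictionaries would simply run the same lookup over two such entry lists.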
Word and phrase frequency
The word and phrase frequencies change their values to learn the user's habits. Table 3 shows an example. Every word and phrase in the dictionaries has its own frequency. When a word or phrase is selected by the user, the frequency of that entry increases by one. If the desired word is not among the sorted candidates, the user chooses nothing, and the frequencies of all words and phrases in the candidate list are halved. After this operation, even if the user enters the same characters as in the latest entry, the candidates are refreshed and different words and phrases appear.
Thus the probability that the user's desired word or phrase appears is increased. The frequencies are also changed by the additional class 2-gram function.
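The update rule described above (increment the chosen entry; halve every candidate that was shown but rejected) can be sketched as follows. The function name and sample frequencies are illustrative, not the actual implementation.

```python
def update_frequencies(freqs, shown, chosen=None):
    """Adapt word/phrase frequencies to the user's habits.

    freqs  : dict mapping entry -> frequency
    shown  : entries that were displayed as candidates
    chosen : the entry the user selected, or None if nothing matched
    """
    if chosen is not None:
        freqs[chosen] += 1        # reinforce the selected entry
    else:
        for entry in shown:       # nothing matched: halve every shown entry
            freqs[entry] //= 2    # so that fresh candidates surface next time
    return freqs

freqs = {"hello": 4, "help": 6, "held": 2}
update_frequencies(freqs, ["hello", "help", "held"], chosen="help")
print(freqs)  # {'hello': 4, 'help': 7, 'held': 2}
update_frequencies(freqs, ["hello", "help", "held"])  # user rejected all
print(freqs)  # {'hello': 2, 'help': 3, 'held': 1}
```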
Class 2-gram
(a) Class 2-gram. A class 2-gram is a kind of 2-gram built from POS co-occurrence frequencies (8). We decided to use a class 2-gram in this word prediction because it has less data than a word 2-gram. Using a word 2-gram needs strong machine power because of the large amount of data: if there are 100 words in the dictionary, it needs 100*100 entries. A class 2-gram, in contrast, needs only 9*9 entries regardless of the number of words in the dictionary.
There are ten POS in Japanese, shown in Table 4. However, since na-adjectives are not universally recognized as a distinct POS, we used nine POS to make the class 2-gram. Figure 3 shows how the class 2-gram was made: we used 4000 Japanese example conversation texts, extracted the POS and analyzed their co-occurrences. Table 5 is an example of the class 2-gram. Using this table, two functions are added that help the word prediction predict words and phrases more effectively. (b) Next candidates. The next candidates are automatically predicted by the class 2-gram: words and phrases whose POS are the two most probable in the class 2-gram are automatically selected from the dictionaries, sorted according to their frequencies and shown to the user as the next candidates. For example, if the user chooses a noun, the next candidates are particles or nouns.
Figure 3. Way of making class 2-gram
(c) Frequency change. Using the latest result of the word prediction, the word and phrase frequencies are changed by multiplying them by the probabilities in the class 2-gram. For example, if the latest result was a noun, the frequencies of particles are multiplied by "0.6", and likewise for the other POS. After this operation, all POS except particles and nouns become unlikely to appear.
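This reweighting step could be sketched as follows. Apart from the 0.6 noun-to-particle example mentioned above, the transition probabilities are placeholders rather than the values measured in the paper, and the names are my own.

```python
# Hypothetical row of a class 2-gram: P(next POS | previous POS = "noun").
CLASS_2GRAM = {
    "noun": {"particle": 0.6, "noun": 0.2, "verb": 0.1, "adjective": 0.05},
}

def reweight_by_pos(candidates, prev_pos):
    """Scale each candidate's frequency by the class 2-gram probability of
    its POS following prev_pos, so unlikely POS sequences drop down the list."""
    row = CLASS_2GRAM.get(prev_pos, {})
    scored = [(word, pos, freq * row.get(pos, 0.0))
              for word, pos, freq in candidates]
    return sorted(scored, key=lambda c: c[2], reverse=True)

candidates = [("走る", "verb", 20), ("は", "particle", 10), ("犬", "noun", 8)]
print(reweight_by_pos(candidates, "noun"))
# [('は', 'particle', 6.0), ('走る', 'verb', 2.0), ('犬', 'noun', 1.6)]
```

Even though the verb had the highest raw frequency, the particle rises to the top because particles most often follow nouns in the class 2-gram.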
Experiments
We carried out experiments to evaluate the performance of our word prediction with the additional functions.
Methods
In this paper, we carried out two experiments, measuring the necessary number of inputs (per 100 characters) and the input speed (per 1 min); the results are shown in the following sections. We paid particular attention to the efficiency of the class 2-gram. The sentences used in these experiments are from the 「86th Examination in Japanese Word Processing」, and we compared our word prediction with Microsoft Office IME 2010 (MS IME) and the iPhone. We used a Wired Keyboard 600 to input characters into our word prediction and MS IME. Table 6 shows the number of subjects in each experiment.
Results
(a) Necessary number of inputs. Table 7 shows the experimental result for the necessary number of inputs. There are two results for our system: one for the base system and one with the class 2-gram. According to Table 7, our word prediction needed fewer inputs than the other methods, and adding the class 2-gram made it more effective, giving it a 40% lead over the iPhone. (b) Input speed. Table 8 shows the result for input speed. The values indicate how many characters the subjects could enter; faster methods have larger numbers. The results show that using our word prediction is slower than the other methods. Even after adding the class 2-gram, the iPhone was still twice as fast as ours, and some results became slower.
Examination
The necessary number of inputs was lower with our word prediction than with the other methods. We conjecture that using two different dictionaries and the class 2-gram had a good effect. According to the results, using the next candidates and the frequency change in particular made our system 10% faster. From this, adding a class 2-gram to word prediction is effective.
The input speed results were worse than for the other methods. The questionnaires we sent out after the experiments showed that the Linux line input makes it hard to see the candidates in our word prediction. In fact, only the author's result was better than the others, owing to familiarity with the system. It is conceivable that the Kunrei-shiki Roman character input, which is a trait of our word prediction, made input difficult. Indeed, the questionnaires indicate that Kunrei-shiki is unfamiliar and that it was difficult to confirm the entered words.
Conclusion
We studied a new utterance support system working on an information terminal for people who have speech handicaps. We used word prediction and added a class 2-gram to it. The experimental results show that our system needs fewer inputs but is slower than the other methods. Hereafter, we intend to create more effective dictionaries, improve the class 2-gram and develop a new GUI to make our system more effective and faster. | 2018-12-15T04:59:43.884Z | 2013-09-24T00:00:00.000 | {
"year": 2013,
"sha1": "322b3d7179f06afb95d804438009267a11c2ec88",
"oa_license": "CCBY",
"oa_url": "https://www2.ia-engineers.org/conference/index.php/icisip/icisip2013/paper/download/192/147",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "322b3d7179f06afb95d804438009267a11c2ec88",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
237324839 | pes2o/s2orc | v3-fos-license | Recovery of Bioactive Compounds from Strawberry (Fragaria × ananassa) Pomace by Conventional and Pressurized Liquid Extraction and Assessment of Their Bioactivity in Human Cell Cultures
Pressing strawberries for juice generates large amounts of pomace, containing valuable nutrients and therefore requiring more systematic studies for their valorization. This study compared conventional solid-liquid (SLE) and pressurized liquid (PLE) extractions with ethanol (EtOH) and H2O for the recovery of bioactive compounds from strawberry pomace. The composition and bioactivities of the products obtained were evaluated. Among 15 identified compounds, quercetin-3-glucuronide, kaempferol-3-glucuronide, tiliroside, ellagic, malic, succinic, citric and p-coumaric acids were the most abundant constituents in strawberry pomace extracts. SLE-EtOH and PLE-H2O extracts possessed strong antioxidant capacity in DPPH• and ABTS•+ scavenging and oxygen radical absorbance capacity (ORAC) assays. Cytotoxicity, antiproliferative and cellular antioxidant activities in human cells of PLE-EtOH and PLE-H2O extracts were also evaluated. PLE-EtOH and PLE-H2O extracts possessed strong antioxidant activity, protecting Caco-2 cells upon stress stimuli, while PLE-EtOH extract showed higher antiproliferative activity with no cytotoxicity associated. In general, the results obtained revealed that properly selected biorefining schemes enable obtaining from strawberry pomace high nutritional value functional ingredients for foods and nutraceuticals.
Introduction
Strawberries are one of the most popular berries in the world, with a global annual production of 9.22 MT. The formation of the cultivated strawberry (Fragaria × ananassa Duchesne) started in the eighteenth century, when strawberry culture became increasingly limited to the clones of this hybrid species. The genus Fragaria belongs to the Rosaceae, one of the most economically important plant families. Fragaria is a member of the subfamily Rosoideae and consists of approximately 20 diploid, tetraploid, hexaploid and octoploid species [1].
Freshly harvested strawberries are highly perishable fruits due to fast post-harvest decay, a high respiration rate and environmental stress. The shelf life of fresh strawberries is approx. 2-3 days at room temperature [2]. Therefore, a large fraction of harvested strawberries is processed into numerous products such as jams, purees, wine, juice and others. Processing of strawberries in some cases generates by-products; for instance, the residues (pomace) in juice production constitute approximately 4-11% of fruit weight. Currently, a large part of such by-products is used very inefficiently, e.g., for composting or animal feeding. It is well documented that strawberry pomace, consisting of seeds, stalks and pulp, contains valuable nutrients such as phenolic compounds (anthocyanins, proanthocyanidins, ellagic and other phenolic acids, ellagitannins), minerals, dietary fiber and others [3,4]. For instance, the amount of hydrolysable ellagitannins in strawberries, depending on the origin and ripeness of the fruits, may reach 637 mg/kg fresh weight on average [5].
Due to the presence of a significant amount of biologically active substances and their high nutritional value, strawberry pomace has great potential as a source of health-beneficial ingredients for functional foods, nutraceuticals, cosmeceuticals and other healthy natural products [6,7]. Several studies reported antiproliferative activities of strawberry extracts, while pomace products have also been considered as a possible preventive means against various diseases, such as cardiovascular disorders, cancer and atherosclerosis [8][9][10]. For instance, McDougal et al. [11] suggested that polyphenol-rich strawberry extract might be useful for mitigating diabetes via inhibition of the α-glucosidase enzyme, reducing the postprandial absorption of glucose, which is produced in the small intestine by the breakdown of starch and disaccharides. Kosmala et al. [12] determined that aqueous and aqueous/alcoholic extracts of strawberry pomace showed similar effects on enzymatic activity in the gastrointestinal tract to those obtained by substituting dietary cellulose with more easily fermentable fructooligosaccharides (FOS). Some studies reported in vivo effects of strawberry pomace products as well. Polyphenol-rich and polyphenol-depleted strawberry pomace added to the diet of rats had similar positive effects on gastrointestinal, blood and tissue biomarkers of the experimental animals, reducing metabolic complications [13]. Later, Juskiewicz et al. [5] reported that an acetone extract of strawberry pomace lowered lipaemia and glycemia indicators in Wistar rats. However, data on the bioactivities of strawberry pomace extracts are rather scarce, particularly using human cells and physiologically important enzymes.
More systematic and comprehensive studies are also required for valorizing strawberry pomace as a source of various functional ingredients. Biorefining of strawberry pomace into several fractions containing various classes of nutrients (including bioactive compounds) and possessing different activities and physical properties is a promising approach. The advantages of such an approach have been recently demonstrated for multistep processing of chokeberry [14,15], cranberry [16], guelder-rose berry [17] and raspberry pomace [18]. Several nutritionally valuable fractions were obtained from each type of pomace, achieving a sustainable 'zero waste' processing task. Moreover, promising results were obtained in terms of shifting to green chemistry-based high-pressure extraction/fractionation systems using supercritical carbon dioxide (SFE-CO2), pressurized ethanol and water (PLE). In general, high-pressure-based extractions have been proved to be more efficient than conventional processes in terms of extract yield, extraction time, solvent-free extracts (in SFE-CO2) and reduced solvent consumption (in PLE). For instance, PLE, which uses solvents in their subcritical state, improves the solubility of phytochemicals and their transfer from the solid matrix in a shorter time, both with organic solvents and with water [19]. However, the composition and distribution of phenolic compounds and other constituents in pomace-derived products highly depend on plant origin; therefore, extraction schemes and conditions should be properly designed and investigated individually for each type of pomace.
The aim of this study was to compare conventional and pressurized liquid extraction methods in developing integrated biorefining schemes for the recovery of bioactive compounds and other valuable nutrients from strawberry pomace, and to evaluate the extracts obtained by antioxidant, cytotoxicity and antiproliferative activity assays with human HT29 and Caco-2 cells, as well as by screening their phytochemical composition by chromatographic and spectroscopic methods. It is expected that these results will provide essential data for valorizing strawberry pomace in the development of various functional ingredients for foods and nutraceuticals.
Preparation of Berry Pomace and Determination of Its Proximate Composition
Strawberry pomace (Fragaria × ananassa) was kindly donated by the company "Anykščių vynas" (Anykščiai, Lithuania) in 2017 immediately after juice pressing. The pomace, containing pulp, stalks and seeds, was air-dried in a SENCOR convection dryer at 40 °C for 48 h. After drying, the material was ground in a ZM 200 ultra-centrifugal mill using a 1 mm sieve (Retsch, Haan, Germany) at 12,000 rpm. Ground material was stored in tightly closed, dry glass jars in a dark, well-ventilated place. The dried sample of strawberry pomace used in this study is stored at 4 °C in the Department of Food Science and Technology (no. SP-2017) and is available upon request.
Moisture content was determined by drying at 104 °C to constant mass. The content of ash was determined after incineration in a muffle furnace at 550 °C for 3 h. The crude protein content was measured by the Kjeldahl procedure using a nitrogen conversion factor of 5.3 (AOAC, 950.09) [20]; the crude lipid content was determined by the Soxhlet method (AOAC, 963.15) [20] using hexane. All analyses were performed in triplicate, and the results were expressed as grams per 100 g of dry matter of strawberry pomace.
Extraction of Polar Constituents from Strawberry Pomace
Pressurized liquid extraction (PLE). PLE was performed with ethanol (EtOH) and water (H2O) in an accelerated solvent extractor ASE 350 (Dionex, Sunnyvale, CA, USA) equipped with a solvent-controlling unit. Extractions were performed in 65 mL cells at 10.3 MPa and at 90 °C and 110 °C for EtOH and H2O, respectively. Ground strawberry pomace (5 ± 0.001 g) was loaded into the cell with 3 ± 0.001 g of diatomaceous earth above and below the sample to avoid any void spaces, and with two cellulose filters at both ends to prevent particle leakage into the system. Then, the cell was placed into the carrousel to start an automatic extraction sequence; it was heated for 5 min to the pre-set extraction temperature and pressurized for 15 min (3 cycles × 15 min). The total volume of solvent was 120 mL. The EtOH was removed in a Rotavapor R-114 rotary evaporator (Büchi, Flavil, Switzerland) under vacuum (0.06 MPa) at 40 °C; the aqueous solution was freeze-dried (Maxi Dry Lyo, Jonan Allerod, Denmark). All obtained extracts were stored at −20 °C until analysis.
Solid-liquid extraction (maceration) (SLE). SLE was performed for comparing it with PLE. Ground pomace (30 g) was extracted with 150 mL of EtOH or H 2 O for 24 h at room temperature under orbital shaking at 250 rpm. Afterwards, the contents were centrifuged at 6000 rpm for 10 min and filtered. The solvents were removed as in PLE.
Evaluation of Antioxidant Properties of Extracts and Solid Materials
Total phenolic content (TPC). The Folin-Ciocalteu method was applied with minor modifications [21]. Briefly, 150 µL of extract (0.5-2.5 mg/mL), or the corresponding solvent methanol (MeOH) or H2O as a blank, was mixed with 750 µL of Folin-Ciocalteu's reagent (2 M), previously diluted with distilled water (1:9, v/v), and after 3 min 600 µL of 7.5% (w/v) Na2CO3 was added. The mixture was kept at 25 °C for 2 h in the dark, and the absorbance was measured in 1 cm path length disposable cuvettes (Greiner Labortech, Alphen a/d Rijn, The Netherlands) at 760 nm in a Genesys 8 UV spectrophotometer (Thermo Spectronic, Rochester, NY, USA). TPC was determined from a gallic acid (GA) calibration curve (0.025-0.5 mg/mL) and expressed as mg of GA equivalents (GAE) per g of extract.
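The absorbance-to-GAE conversion above is a linear-calibration calculation; a minimal Python sketch (the calibration absorbances below are hypothetical, not data from this study) could look like:

```python
# Hypothetical GA calibration points (mg/mL -> absorbance at 760 nm);
# illustrative values only, not measurements from this study.
cal = [(0.025, 0.05), (0.1, 0.21), (0.25, 0.52), (0.5, 1.04)]

def linear_fit(points):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def tpc_mg_gae_per_g(absorbance, extract_mg_per_ml, slope, intercept):
    """mg GA equivalents per g of extract from one Folin-Ciocalteu reading."""
    ga_mg_per_ml = (absorbance - intercept) / slope  # GA conc. in the assay
    return ga_mg_per_ml / extract_mg_per_ml * 1000   # normalize to 1 g extract

slope, intercept = linear_fit(cal)
print(round(tpc_mg_gae_per_g(0.52, 2.5, slope, intercept), 1))
```

The same linear-calibration pattern applies to the TEAC and DPPH• Trolox curves described below, with different units.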
The ABTS•+ scavenging capacity. The assay was performed according to the method of Re et al. [22]. The ABTS•+ was produced by reacting 75 mM ABTS in PBS (pH 7.4) with 200 µL of K2S2O8 (70 mmol/L) and allowing the mixture to stand in the dark at room temperature for 15-16 h for color development. The working solution of ABTS•+ was prepared daily by diluting the stock solution in PBS to reach an absorbance value of 0.70 ± 0.20 at 734 nm. Strawberry pomace extracts were dissolved in EtOH and H2O, and diluted in PBS to 0.5-2.5 mg/mL concentration. Aliquots of 25 µL of each extract were added to 1500 µL of ABTS•+ solution and the absorbance was read after 2 h at 734 nm in a Genesys 8 UV spectrophotometer. PBS was used as a blank. Trolox Equivalent Antioxidant Capacity (TEAC) values were determined from a Trolox calibration curve built using 80-1500 µM solutions, and the results were expressed as mg of Trolox equivalents (TE) per g of extract (mg TE/g) and further recalculated to the material dry weight (mg TE/g DW).
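The final recalculation from mg TE per g of extract to mg TE per g of pomace DW simply scales by the extraction yield; a one-function sketch with illustrative, hypothetical numbers:

```python
# Recalculate TEAC from a per-gram-of-extract basis to a per-gram-of-pomace
# (dry weight) basis using the extraction yield; illustrative numbers only.
def teac_per_g_dw(teac_mg_te_per_g_extract, yield_g_per_100g_dw):
    """mg TE per g pomace DW = (mg TE / g extract) * (g extract / g DW)."""
    return teac_mg_te_per_g_extract * yield_g_per_100g_dw / 100.0

# e.g. a hypothetical extract with 150 mg TE/g and a 28.6 g/100 g DW yield:
print(teac_per_g_dw(150, 28.6))  # -> 42.9 (mg TE/g DW)
```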
DPPH• scavenging capacity. The assay was performed by the method of Brand-Williams et al. [23]. One thousand µL of freshly prepared MeOH solution containing 250 µM DPPH• was added to 500 µL of extracts diluted to 0.5-2.5 mg/mL. The absorbance was measured after 2 h of incubation in the dark at room temperature in a Genesys 8 UV spectrophotometer at 517 nm. MeOH was used as a blank. The values were determined from a calibration curve prepared with 3-100 µM solutions of Trolox and expressed as mg TE/g extract.
Oxygen Radical Absorbance Capacity (ORAC). The assay was performed as described by Prior et al. [24] with minor modifications. A multiple-detection microplate FLUOstar Omega reader (BMG Labtech, Offenburg, Germany) with fluorescence filters (excitation wavelength, 485 nm; emission wavelength, 520 nm) was used. Twenty-five µL of extract diluted to 0.15-0.75 mg/mL, or pure MeOH or H2O (used as blanks), was mixed with 150 µL of the fluorescence probe fluorescein solution (14 µmol/L), preincubated for 15 min at 37 °C, followed by the rapid addition of 25 µL of the peroxyl radical generator AAPH (240 mmol/L). The fluorescence was recorded every cycle (1 min × 1.1), 120 cycles in total. Trolox was used as a reference antioxidant. Final results were calculated on the basis of the difference in the area under the fluorescein decay curve between the blank and each sample. The ORAC values were determined from the ability of each sample to protect the fluorescence of the indicator in the presence of peroxyl radicals.
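The ORAC calculation described above reduces to a net area-under-the-curve comparison between sample and blank; a minimal sketch (hypothetical decay curves, trapezoidal integration assumed) is:

```python
def auc(fluorescence, dt_min=1.0):
    """Trapezoidal area under a fluorescence decay curve; readings are
    normalized to the initial value (f0 = 1) before integration."""
    norm = [f / fluorescence[0] for f in fluorescence]
    return sum((norm[i] + norm[i + 1]) / 2 * dt_min
               for i in range(len(norm) - 1))

# Hypothetical decay curves (arbitrary units), not instrument data:
blank  = [100, 60, 30, 10, 3, 1]     # fast decay, no antioxidant present
sample = [100, 95, 85, 70, 50, 30]   # antioxidant protection delays decay

net_auc = auc(sample) - auc(blank)
# The ORAC value follows by relating net_auc to the net AUC of Trolox
# standards via their calibration curve (not shown here).
print(net_auc)
```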
Antioxidant activity assessment of solid material by the QUENCHER method. TPC and ABTS•+ scavenging of the raw plant material and of the solid residues after extractions were determined using the QUENCHER procedure [25]. Solid dilutions were performed by mixing 10 mg of the sample with microcrystalline cellulose, which was also used as a blank. In the ABTS•+ assay, the diluted material was mixed with 2 mL of ABTS•+ (prepared as reported for the extracts); the mixture was vortexed for 120 min to facilitate the surface reaction, centrifuged at 4500 rpm for 5 min, and the absorbance of the supernatant was measured at 734 nm. In the TPC assay, 10 mg of the sample were mixed with microcrystalline cellulose and with 1.5 mL of Folin-Ciocalteu's reagent solution (1:9). The reagents were mixed, kept for 10 min, neutralized with 1.2 mL of 7.5% sodium carbonate, vortexed for 120 min and centrifuged at 14,000 rpm for 5 min. The absorbance was measured at 765 nm. TPC and TEAC values were expressed as mg GAE/g DW and mg TE/g DW of pomace, respectively. All experiments were replicated four times.
Preparation of Cell Culture and Cellular Assays
Caco-2 cells were cultivated in RPMI-1640 medium supplemented with 10% heat-inactivated fetal bovine serum and 1% penicillin-streptomycin at 37 °C with 5% CO2 in a humidified incubator, and routinely grown as a monolayer in 75 cm^2 culture flasks. Strawberry pomace extracts were dissolved in DMSO and EtOH to a final concentration of 100 mg/mL. The prepared samples were stored at −20 °C in the dark. Cell-based assays were performed using a maximum solvent concentration of 1% and 5% for DMSO and EtOH, respectively.
Cytotoxicity assay in Caco-2 cell monolayer. Cytotoxicity was assessed as previously described by Silva et al. [26]. Caco-2 cells in growth medium were placed in 96-well plates at a density of 2 × 10^4 cells/well. After 7 days (during this period the medium was renewed every 48 h), the growth medium was removed and replaced with media containing different concentrations of strawberry pomace extracts. Control wells contained growth medium with no extract. After 24 h of incubation at 37 °C, the cells were washed twice with PBS and cell viability was determined using the MTS reagent according to the manufacturer's instructions. Absorbance was measured at 490 nm using a Spark® 10M Multimode Microplate Reader (Tecan Trading AG, Männedorf, Switzerland) and cell viability was expressed as the percentage of living cells relative to the control. Experiments were performed in triplicate.
Antiproliferative assay in HT29 cell monolayer. Antiproliferative effect of extracts and standard compounds was evaluated in HT29 cells, as described elsewhere [27]. Briefly, the cells were placed in each well of a 96-well plate at a density of 1 × 10 4 cells/well. After 24 h the cells were incubated with different concentrations of the samples diluted in culture medium. Control wells contained growth medium with no extract. Cell proliferation was measured after 24 h using MTS reagent, as explained above. The results were expressed in terms of percentage of living cells relative to the control. A minimum of three replicates for each sample was used to determine the antiproliferative activity.
Cellular antioxidant activity (CAA) assay. The CAA of strawberry pomace extracts was assessed using the method of Wolfe and Liu [28]. Caco-2 cells were seeded in growth medium at a density of 2 × 10^4 cells/well in a 96-well microplate. After 6 days, the medium was removed and the cells were washed twice with PBS pre-warmed to 37 °C. Afterwards, the cells were treated with 50 µL of PBS/sample/standard (quercetin, 2.5-20 µM) solution, 50 µL of DCFH-DA solution (50 µM) were added, and the plate was incubated for 1 h at 37 °C and 5% CO2. Next, 100 µL of AAPH (12 mM) solution were added to each well containing PBS/quercetin standards/samples, while 100 µL of PBS were added to the blank wells. Finally, the 96-well microplate was placed into a Microplate Fluorimeter FLx800 (Biotek Instruments, VT, USA). The emission at 540 nm was measured after excitation at 485 nm every 5 min for 1 h. CAA values were expressed as µM of quercetin equivalents per g of extract (µM QE/g). Independent experiments were performed in triplicate.
Identification of Bioactive Components Using the Ultra-Performance Liquid Chromatography-Mass Spectrometry (UPLC-MS)
Non-targeted analysis by UHR-Q-TOF-MS. Phytochemicals were identified by non-targeted screening based on high-accuracy mass spectra. An Acquity UPLC system (Waters, Milford, USA) was coupled with a Bruker maXis quadrupole time-of-flight mass spectrometer (UHR-Q-TOF-MS) (Bruker Daltonics, Bremen, Germany). An Acquity BEH C18 column (1.7 µm, 100 × 2.1 mm i.d.; Waters, Milford, USA) was used for separation; the column temperature was maintained at 40 °C. The gradient elution programmed for mobile phases A (1% formic acid) and B (acetonitrile) was as follows: 0 min, 95% A; 1-3 min, 95-85% A; 3-7 min, 85-50% A; 7-10 min, 50-0% A; 10-12 min, 0% A; 12-14 min, 95% A. The flow rate was 0.4 mL/min and the injection volume 2 µL. An electrospray ionization (ESI) source was used; the spectra were recorded in the mass range of m/z 100-1500 in the negative mode, and the capillary voltage was adjusted to +4000 V. The nebulizer pressure was 2.0 bar and the nitrogen flow rate was 10 L/min. For the fragmentation study, a data-dependent scan was performed by deploying collision-induced dissociation (CID) using nitrogen as a collision gas at 30 eV. Full scan and auto MS/MS were set for acquiring data at 2 Hz acquisition speed. Data acquisition, handling and instrument control were performed using Compass 1.3 (HyStar 3.2 SR2) software. The phytochemicals were identified by searching the ChemSpider database based on molecular formulas calculated from accurate mass-to-charge ratio matching of the MS data.
Anthocyanins were quantified using cyanidin-3-glucoside as the external standard [15]. Standard stock solution was prepared in MeOH and subsequently diluted to working concentrations. The amounts of individual compounds were expressed as mg/100 g of strawberry pomace DW.
Quantitative analysis of organic acids by UHR-Q-TOF-MS. Quantitative analysis of organic acids was performed using the same UPLC-MS system and method as described above. Malic, citric, quinic and succinic acids were quantified by integrating extracted ion chromatograms of the m/z values, corresponding to each acid, namely m/z 133.0131 for malic acid, m/z 191.0186 for citric acid, m/z 191.0550 for quinic acid and m/z 117.0182 for succinic acid. The m/z values were extracted with an accuracy of 0.02. Standard compounds were chromatographed in the concentration range from 1 to 50 µg/mL for obtaining calibration curves.
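The extracted-ion-chromatogram (EIC) quantification described above amounts to an m/z window filter over the scan data; a minimal sketch follows (the peak list is hypothetical, not real instrument data). Note that the ±0.02 window is narrow enough to separate the nominally isobaric citric (m/z 191.0186) and quinic (m/z 191.0550) acids:

```python
# Target [M-H]- m/z values for the four quantified organic acids:
TARGETS = {"malic": 133.0131, "citric": 191.0186,
           "quinic": 191.0550, "succinic": 117.0182}

def eic_area(peaks, target_mz, tol=0.02):
    """Sum intensities of all peaks whose m/z lies within +/-tol of target.
    peaks: iterable of (mz, intensity) pairs."""
    return sum(i for mz, i in peaks if abs(mz - target_mz) <= tol)

# Hypothetical peak list (m/z, intensity), for illustration only:
peaks = [(133.014, 5000), (133.030, 200), (191.020, 8000),
         (191.056, 7000), (117.019, 1200)]

print(eic_area(peaks, TARGETS["malic"]))  # both 133.x peaks fall in window
```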
Quantitative analysis of phenolics by TQ-S. Quantitative analyses of phenolics were carried out on a Waters ACQUITY UPLC system equipped with a Waters TQ-S triple-quadrupole mass detector (Waters Corp., Milford, MA, USA). The equipment consisted of a quaternary solvent manager, sample manager and column heater, interfaced with a mass spectrometer equipped with an ESI source operating in negative mode. Instrument control and data processing were performed using MassLynx™ software. Gradient conditions, column parameters and temperature, flow rate and injection volume were the same as described in the section on non-targeted analysis. Nitrogen was used both as drying and nebulizing gas, at 1000 and 150 L/h flow for desolvation and cone gas, respectively. The desolvation temperature was set at 500 °C. Capillary and cone voltages were 1.8 kV and 25 eV, respectively. The quantification was performed using external standards (tiliroside, ellagic and p-coumaric acids, kaempferol-3-glucuronide, quercetin-3-glucuronide, and catechin). Standard stock solutions were prepared in MeOH and subsequently diluted to working concentrations. The amounts of the individually identified compounds were expressed as mg/100 g of DW of strawberry pomace.
Statistical Analysis
MS Excel 2016 was used for calculations of mean values and standard deviations. All data are expressed as mean ± standard deviation (SD). Significant differences among the means were determined by ANOVA using GraphPad Prism 6.01 software (2012).
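A one-way ANOVA of this kind compares between-group to within-group variance via the F statistic; a pure-Python sketch with made-up triplicate readings (not study data):

```python
# One-way ANOVA F statistic for comparing extract means; pure-Python
# sketch with hypothetical triplicate TPC readings (mg GAE/g).
def anova_f(groups):
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [[46.5, 46.8, 47.1],   # e.g. hypothetical SLE-EtOH replicates
          [21.3, 21.5, 21.7],
          [30.0, 30.4, 30.8]]
print(anova_f(groups))  # a large F suggests the group means differ
```

In practice the F value is compared against the F distribution (as GraphPad Prism does internally) to obtain a p-value.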
Composition of the Strawberry Pomace
Dried pomace (all values in g/100 g) contained 5.6 ± 0.2 H2O, 12.03 ± 0.4 fat, 5.3 ± 0.1 ash and 13.3 ± 0.3 proteins. Other macrocomponents were not determined; however, they should consist mainly of various carbohydrates. A similar amount of fat (11.6 g/100 g DW of pomace) was reported by Pieszka et al. [29], while Górnaś et al. [30] determined only 3.4 g/100 g DW. The content of fat in pomace largely depends on the seed fraction; Sójka et al. [31] reported that 40% of strawberry pomace consisted of seeds, while the content of ash, depending on the harvesting season, was in the range of 4.0-7.6 g/100 g DW.
Pomace contamination with sand at different seasons may also have significant effect on the content of ash. In general, chemical composition and quality of dried fruit pomace depend on the raw material, which may vary due to the differences in growing and climatic conditions, fertilization and other agronomic treatments, as well as drying method and its parameters [32].
Characterization of Polar Phytochemicals of Strawberry Pomace Extracts
Phenolic compounds exert protective effects due to their antimutagenic, antioxidant, antimicrobial, anticarcinogenic and antiinflammatory activities [33]. The results obtained in our study support the bioactive potential of strawberry pomace extracts. The compounds are listed in Table 1 with the relevant identification data, namely retention time, experimental mass m/z, molecular formula, and the Q-TOF-MS/MS fragmentation patterns. All compounds were characterized by interpreting their mass spectra recorded by Q-TOF-MS and comparing them with the data available in the literature and open databases (ChemSpider, MetFusion). When available, the identification was supported by authentic standards. In total, 15 phenolic metabolites were identified, including organic/phenolic acids. Compounds 1, 2, 3 and 4, belonging to organic acids, were detected in all the analyzed extracts; these compounds co-eluted together and were identified as malic, citric, succinic and quinic acids, which agrees with previously reported data [30] and open access databases (quantitative analysis is shown in Figure 1). Compounds 5, 6, 10, 11, 13 and 15 were identified as catechin, p-coumaric acid, ellagic acid, quercetin-3-glucuronide, kaempferol-3-glucuronide and tiliroside; their structures were confirmed by authentic standards and fragmentation patterns (Table 1).

Qualitative analysis of organic acids. Accumulation of organic acids in the cells of berries is one of the major factors that play an important role in strawberry quality. It was reported that the degree of ripeness and its timing vary greatly among species and varieties and are highly susceptible to the climatic conditions [36]. Sugars and organic acids are important contributors to strawberry taste and flavor [37]. In our study, citric and quinic acids were the main organic acids determined in the strawberry pomace extracts (Figure 2).
The concentration of citric acid in all the analyzed extracts was from 911.4 to 2585.3 mg/100 g DW, while the concentration of other organic acids such as malic and succinic acids was lower; for instance, malic and succinic acids in SLE-EtOH/PLE-EtOH extracts were present at 269.3/407.3 and 23.1/58.7 mg/100 g DW, respectively. The amount of quinic acid in SLE and PLE extracts was from 720.7 to 2417.4 mg/100 g DW. To the best of our knowledge, quinic acid was not previously quantified in strawberry pomace. In general, the concentrations of organic acids were higher in the aqueous extracts, both isolated by PLE and by SLE. Górnaś et al. [30] reported that the content of citric, malic and succinic acids in dry strawberry pomace was 47, 27, and 13 g/kg DW, respectively. According to Reißner et al. [38], the amount of organic acids in pomace highly depends both on berry composition and on the effectiveness of juice extraction, which influences the transfer of acids into the juice.

Individual content of phenolic metabolites. In order to quantify the phenolic metabolites in all the obtained extracts, tandem MS/MS was used due to its specificity, sensitivity and selectivity. The metabolites were determined in multiple reaction monitoring (MRM) mode, and the available standards were used for quantification. The concentrations of the quantified phenolic compounds are presented in Figure 1. Ellagic acid was the main component in all the analyzed samples, varying between 8.7 and 64.1 mg/100 g DW. In general, our results agree with the data of Šaponjac et al. [3], who reported approx. 2.7 mg/100 g fresh weight (FW) of ellagic acid in strawberries. Aaby et al. [4] reported that the content of ellagic acid varied from 0.2 to 87.3 mg/100 g FW in the extracts of strawberry flesh and achenes obtained with acetone and H2O.
The other predominant compound was quercetin-3-glucuronide, which was found in the highest amounts in PLE-EtOH and PLE-H2O extracts, 38.9 and 36.5 mg/100 g DW, respectively. The amounts of kaempferol-3-glucuronide and tiliroside were in the range of 0.3-35.1 and 1.9-10.3 mg/100 g DW, respectively. The SLE-H2O extract had the lowest content of the identified compounds; consequently, their solubility could have been influenced by the lower extraction temperature. Previous results on individual phenolic compounds quantified in strawberry pomace extracts showed that quercetin and kaempferol derivatives were the main components, constituting 18.4-37.9 and 10.2-39.7 mg/100 g DW, respectively [31]. The contents of some other reported phytochemicals recovered by the traditional and pressurized extraction methods were lower; e.g., the contents of p-coumaric acid and catechin in the SLE-H2O extract were only 1.9 and 0.1 mg/100 g DW, respectively.
Šaponjac et al. [3] determined a higher amount of catechin, which, depending on strawberry species and cultivar, varied from 19.6 to 135.2 mg/100 g FW. This indicates that the variation of polyphenolic composition in strawberry pomace depends on numerous factors, such as genetic characteristics of the fruits (cultivar/genotype), environmental peculiarities (geographical cultivation site, climatic conditions), ripeness and processing aspects.
Quantitative and qualitative analysis of anthocyanins. Anthocyanins are water-soluble pigments, which have been commercially used as antioxidants, nutraceuticals, and red natural colorants in different foodstuffs, cosmetics and medicines. The anthocyanins are responsible for the so-called cyanic colors of numerous plant species and their fruits, ranging from pink to red and from violet to dark blue.
The anthocyanins are especially abundant in some dark-colored berries such as bilberries, black currants, and others [38][39][40]. The identification data of anthocyanins are presented in Table 2. Acylated anthocyanins such as pelargonidin 3-(6″-malonyl)-glucoside and cyanidin 3-(6″-coumaroyl)-glucoside were identified using the results reported by Šaponjac et al. [3] and Aaby et al. [4] (Table 2).
Antioxidant Characteristics Measured by the Chemical Methods
PLE was selected as an advanced extraction technique, which can quickly and comparatively selectively recover phenolic compounds using food- and environmentally safe (GRAS) solvents such as EtOH and H2O. The extraction yields, as well as the total amounts of phenolic compounds of the extracts obtained by SLE and PLE, are presented in Table 3. The highest yields were obtained by PLE with EtOH and H2O, 28.6 ± 1.4 and 24.9 ± 0.6 g/100 g DW of pomace, respectively. SLE-H2O and SLE-EtOH yielded 19.5 ± 1.8 and 14.8 ± 1.3 g/100 g DW, respectively. PLE extracts also contained higher amounts of phenolic compounds (Figure 1). The extract yield can be influenced by several factors, namely solvent properties (polarity, density), extraction method and time, solvent/solid ratio, temperature, and particle size of the plant material [41]. Furthermore, the viscosity and surface tension of the solvents decrease at higher temperatures, which may enhance penetration of the solvent into the matrix and speed up dissolution. Moreover, the mass transfer rate is increased, thus resulting in higher yields in PLE [42][43][44][45]. The distinguishing advantage of PLE over SLE is the ability to combine elevated solvent temperature and pressure for achieving fast and efficient extraction within a wide range of compound polarities. In addition, PLE uses a 2-4-fold smaller amount of solvent than SLE [41,46,47]. Regarding TPC and antioxidant potential, the efficiency of the extraction methods may be evaluated taking into account two important indicators: (i) the concentration of the target compounds (or their groups) in the extract, which may be considered a final or intermediate product for further application; and (ii) the level of recovery of such compounds (e.g., antioxidants) from the dry plant material.
The TPC in the extracts varied from 21.5 to 46.8 mg GAE/g; it was significantly higher in the EtOH extract obtained by SLE, while in the H2O extracts it was similar (Table 3). This may be explained by the remarkably higher yields in PLE-EtOH; in this case, Folin-Ciocalteu-reactive substances are diluted with compounds that are neutral in this reaction, e.g., pectins or other carbohydrates. It may also be noted that no clear relationship exists between the TPC values and the amounts of chromatographically quantified flavonoids and phenolic acids. It should be noted that some other substances, such as reducing sugars, which are not taken into account in this assay, may interfere in the reaction together with polyphenolics. Aaby et al. [4] reported that the TPC in the H2O extracts of strawberry industrial waste was in the range of 21-120 mg GAE/100 g FW.
The solid material remaining after the various extraction steps may still contain bound, unrecovered phytochemicals. Therefore, the antioxidant properties of the non-soluble pomace fractions (Table 3) were monitored after each extraction step by employing the so-called QUENCHER approach, which has been adapted to the common antioxidant assays [25]. The QUENCHER (QUick, Easy, New, CHEap and Reproducible) approach was proposed for the direct assessment of the antioxidant capacity of solid material, which is assayed without an extraction step. It is considered that the reaction between free radicals and the antioxidants present in the solid matrix can occur at the interface when they are in contact, regardless of the hydrophobicity of the compound of interest [48]. Considering that the TPC in the raw pomace (starting material) was 18.8 mg/g DW, it may be assumed that only a comparatively small part of the antioxidants was recovered from the strawberry pomace by SLE with EtOH and H2O: the residual TPC values were 10.2 and 14.3 mg GAE/g DW, respectively. PLE-EtOH and PLE-H2O recovered remarkably higher amounts of phenolics; the residual values after extraction were 6.5 and 3.0 mg GAE/g DW, respectively. The high TPC of the strawberry pomace suggests that it might be a good source of phytochemicals for value-added products. DPPH•, ABTS•+ scavenging and ORAC assays were used to evaluate the antioxidant potential of the strawberry pomace extracts (Table 1). DPPH• scavenging values were remarkably lower than ABTS•+ scavenging values; although the basic principle of these two methods is similar, the differences may be explained by the peculiarities of the reaction mechanisms and the different polarity of the reaction media.
Among the analyzed extracts, the PLE-EtOH extract demonstrated the highest activity, followed by the PLE-H2O and SLE-EtOH extracts: their antioxidant capacity values in the ABTS•+, DPPH• and ORAC assays were 148.5-391.9, 28.3-117.2 and 95.4-308.9 mg TE/g DW of strawberry pomace, respectively. Aaby and co-authors [4] reported significantly lower antioxidant values for acetone and water SLE extracts of strawberries; e.g., ORAC was 4.1-174 µmol TE/g FW, which is equivalent to 1.02-43.5 mg of Trolox.
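The Trolox comparison above is a straightforward unit conversion from micromoles of Trolox equivalents to milligrams of Trolox via the molar mass of Trolox (≈250.29 g/mol); a minimal sketch:

```python
# Convert antioxidant capacity from micromoles of Trolox equivalents (TE)
# to milligrams of Trolox, as in the comparison with Aaby et al. above.
TROLOX_MW = 250.29  # g/mol, molar mass of Trolox

def umol_te_to_mg(umol_te):
    """µmol TE -> mg Trolox: µmol * (µg/µmol molar mass) / 1000 = mg."""
    return umol_te * TROLOX_MW / 1000.0

low, high = umol_te_to_mg(4.1), umol_te_to_mg(174)
print(f"{low:.2f}-{high:.1f} mg Trolox")  # ~1.03-43.6 mg (paper rounds to 1.02-43.5)
```

The small discrepancy with the cited 1.02 mg reflects rounding in the original report rather than a different molar mass.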
Strong positive linear correlations were observed between DPPH• scavenging values and the amounts of catechin, p-coumaric acid and quercetin-3-glucuronide, which suggests that these compounds may have an important impact on the results of this antioxidant capacity assay. ABTS•+ scavenging values correlated strongly with the amounts of all chromatographically determined phenolic compounds except tiliroside. On the other hand, the content of tiliroside correlated with the ORAC values, which also strongly correlated with the amounts of phenolic acids, total flavonoids and total phenolic acids. It may be noted that the ORAC assay is based on a different mechanism (scavenging of peroxyl radicals), which is more relevant to the oxidation processes taking place in biological systems [24].
It should also be noted that the evaluation of the antioxidant capacity of extracts is usually less accurate when the extracted material contains amino acids, fibers and/or uronic acids [25]. It is evident that, regardless of the properties of the extraction solvent, the solid extraction residues still contain insoluble compounds that may exhibit antioxidant properties. This was also demonstrated in the ABTS•+ scavenging assay by applying the QUENCHER method (Table 1) to the strawberry pomace residues after SLE and PLE. The ABTS•+ values determined for the solid residues decreased approximately 2.9-6.4-fold after extraction. From this point of view, PLE was more efficient, as a significantly lower amount of bioactive compounds remained in the solid fraction after extraction. To the best of our knowledge, the QUENCHER method has not previously been applied to strawberry solids at various processing steps.
Evaluation of Cellular Antioxidant Activity (CAA)
CAA was assessed in order to extend the information about the antioxidant potential of strawberry pomace extracts. The CAA assay is more biologically relevant than the popular chemical antioxidant capacity assays because it accounts for factors such as uptake, metabolism, and localization of antioxidant compounds within cells [28].
The results on phytochemical composition and antioxidant capacity of the extracts presented in Sections 3.1 and 3.3.1 demonstrated that both depend on the extraction method and solvent polarity. PLE extracts demonstrated the strongest antioxidant activities in the DPPH•, ABTS•+ scavenging and ORAC assays (only TPC was higher in the SLE-EtOH extract). Since the concentrations of individual compounds such as phenolic and organic acids and flavonoids were also remarkably higher in the PLE extracts, they were selected for further evaluation. Both PLE extracts showed strong antioxidant activity, with CAA values of 11.17 ± 1.88 and 5.9 ± 2.9 µmol QE/mg of dry weight for the H2O and EtOH extracts, respectively, while the EC50 values were 0.24 ± 0.01 and 0.50 ± 0.3 mg/mL, respectively (data not shown; the values were obtained together with the quercetin EC50 and used to calculate the CAA value). Wolfe et al. [49] reported a CAA of 42.2 ± 3.3 µmol QE/100 g of fresh strawberries (using HepG2 cells and a PBS wash), while the EC50 value of the acetone extract diluted with methanol was 11.8 ± 0.9 mg/mL. To the best of our knowledge, CAA has not been reported previously for strawberry pomace extracts, while the antioxidant potential of strawberry and strawberry pomace extracts has been studied previously using various chemical methods, mainly ABTS•+, DPPH•, ORAC, and FRAP [3,29,50,51].
Antiproliferative Effects and Cytotoxicity of Strawberry Pomace Extracts
In order to evaluate antiproliferative activity, HT29 cells were used in the exponential growth phase, while cytotoxicity was assessed using the Caco-2 cell line as the best-accepted intestinal model. The extracts were assessed at the maximum allowable solvent concentration; the EtOH extract was capable of inhibiting cancer cell growth with an EC50 value of 5.1 ± 0.2 mg/mL (Figure 3A) without compromising normal epithelium, as no cytotoxicity was observed in Caco-2 cells (Figure 3B). The water extract, tested at its maximum allowable concentration, showed no cytotoxicity or antiproliferative effect. Berry extracts have been widely explored due to their high polyphenol content, which provides strong antioxidant capacity. Strawberries in particular have been described as a good source of anthocyanins, flavonols, tannins and hydroxycinnamic acids, which are known to have strong bioactivities such as antioxidant, antiproliferative and cardiovascular potential [52][53][54][55]. The high concentrations of individual phytochemicals and strong antioxidant capacity may be responsible for the inhibitory effects of the PLE-EtOH and PLE-H2O extracts against cancer cell growth (Figure 1, Table 3). For instance, the well-known anticancer compounds ellagic acid [56] and quercetin-3-glucuronide, which were abundant in the EtOH extract, may play an important role in the antiproliferative activity. However, due to the compositional complexity of secondary plant metabolites, antioxidant capacity values measured by in vitro methods do not always correlate with activities in vivo. Amatori et al. [55] reported an antiproliferative effect of a methanolic polyphenol-rich strawberry extract on breast cancer using MCF-7 and A17 cell lines: the inhibition of cancer cell growth was time- and dose-dependent. Moreover, the EC50 for the A17 cell line determined after 24 h of incubation (1.14 ± 0.29 mg/mL) was non-cytotoxic in normal breast and fibroblast cells (WI38 and NIH-3T3).
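An EC50 such as the 5.1 mg/mL value above is read off a dose-response curve. The sketch below estimates it by linear interpolation between the two doses bracketing 50% inhibition; the concentrations and inhibition values are hypothetical illustrations, not measured data from this study, and a full logistic fit would normally be preferred.

```python
def ec50_interpolate(concs, inhibition):
    """concs (mg/mL, ascending) and % growth inhibition; returns estimated EC50."""
    points = list(zip(concs, inhibition))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 < 50 <= i2:
            # linear interpolation between the two bracketing doses
            return c1 + (50 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the tested range")

concs = [1.25, 2.5, 5.0, 10.0]          # hypothetical extract doses, mg/mL
inhibition = [12.0, 31.0, 49.0, 78.0]   # hypothetical % inhibition of growth
print(f"EC50 ~ {ec50_interpolate(concs, inhibition):.2f} mg/mL")
```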
The antiproliferative effect on the HT29 cell line has also been tested, and it was shown that strawberry extracts inhibit cell growth in a dose-dependent manner [54].
These results are in accordance with the results obtained in our study for the EtOH extract of strawberry pomace: it showed antiproliferative activity against HT29 cells at the highest concentration. The H2O extract was not active, most likely due to the higher content of secondary metabolites with anticarcinogenic properties in the EtOH extract. In addition, the extraction temperature for PLE-EtOH was remarkably lower (90 °C) than for PLE-H2O (110 °C), and this factor may affect compound stability, especially of anthocyanins, which at higher temperature may lose the sugar moiety that provides a protective effect for the unstable anthocyanidins. Anthocyanins, ellagic and coumaric acids, and flavonoids (catechin, kaempferol-3-glucuronide and tiliroside) were the major constituents of the EtOH extract (Section 3.2). As mentioned before, these compounds have been reported to have strong antiproliferative activities against various cancer cells and are therefore considered promising agents for cancer prevention.
Conclusions
Strawberry pomace was valorized as a source of phenolic compounds, organic acids, flavonoids, and anthocyanins by comparing conventional solid-liquid (SLE) and pressurized liquid (PLE) extractions with ethanol (EtOH) and H2O. The cytotoxicity, antiproliferative and cellular antioxidant activities of strawberry pomace extracts isolated by PLE are reported for the first time. The PLE-EtOH extract showed the highest antiproliferative activity, with no associated cytotoxicity. Both PLE extracts demonstrated strong antioxidant potential and protected Caco-2 cells upon stress stimuli. It may be assumed that the biological activities of the extracts are due to the flavonoids (anthocyanins, catechin) and phenolic acids (ellagic, coumaric) identified and quantified in this study. Based on these findings, PLE extracts may be regarded as promising antiproliferative and antioxidant substances for disease prevention and can find wide application in the development of new functional foods and nutraceuticals with health-beneficial properties. Future studies should focus on the use of strawberry pomace extracts in selected food products and the evaluation of their effects on quality, while studies on their health benefits should be continued with in vivo assays.
Response Monitoring in De Novo Patients with Parkinson's Disease
Background Parkinson's disease (PD) is accompanied by dysfunctions in a variety of cognitive processes. One of these is error processing, which depends upon phasic decreases of medial prefrontal dopaminergic activity. Until now, there has been no study evaluating these processes in newly diagnosed, untreated patients with PD ("de novo PD"). Methodology/Principal Findings Here we report large changes in performance monitoring processes using event-related potentials (ERPs) in de novo PD patients. The results suggest that increases in medial frontal dopaminergic activity after an error (Ne) are decreased relative to age-matched controls. In contrast, neurophysiological processes reflecting general motor response monitoring (Nc) are enhanced in de novo patients. Conclusions/Significance It may be hypothesized that the Nc increase comes at the cost of dopaminergic activity after an error; on a functional level, errors may not always be detected and correct responses may sometimes be misinterpreted as errors. This pattern differs from studies examining patients with a longer history of PD and may reflect compensatory processes, which frequently occur in pre-manifest stages of PD. From a clinical point of view, the clearly attenuated Ne in de novo PD patients may prove a useful additional tool for the early diagnosis of basal ganglia dysfunction in PD.
Introduction
When subjects commit an error in speeded reaction time tasks, a large phasic negative wave with a fronto-central midline maximum, called the "error negativity" (Ne) [1] or "error-related negativity" (ERN) [2], is seen in the electroencephalogram (EEG); it is likely generated in the anterior cingulate cortex (ACC). A recent theory assumes that if an event is worse than expected (i.e., an error), the DA system sends a signal to the ACC, which in turn elicits the Ne [3]. DA influx to the prefrontal cortex (PFC) may serve as a gating signal that instructs the network when to maintain a given activity state [4]. Its neuromodulatory effects may strengthen current representations, protecting them against interference from disruption by irrelevant distracting information [4,5]. In accordance with the dependence of the Ne on the DA system, the Ne has been shown to be decreased in basal ganglia disorders like Parkinson's disease (PD) and Huntington's disease (HD) [6][7][8]. Regarding PD, another group [9] found no such reduction in similarly affected PD patients, which has been attributed to possible medication effects. However, it has been shown that medication is unlikely to affect the modulation of the Ne [10], though the question remains whether long-term L-dopa medication causes a Ne reduction. Recently, Stemmer et al. [11] found no difference between an early-stage PD group and patients with a long history of medication, also arguing against medication effects on the Ne. Hence the most straightforward approach is to measure the Ne in newly diagnosed patients who are drug-naive, so-called "de novo" patients. Analyses in existing studies were restricted to error-related processes and did not address general response monitoring. Here a component occurring after correct responses ("CRN" [12] or "Nc" [13]) is of importance.
The Nc has been related to response monitoring [14] or to conflict between the actual response and a response program [15]. Allain et al. [16] have shown that the Nc is reduced in a correct trial preceding an error trial. This supports the monitoring hypothesis and suggests that the Nc is necessary for the maintenance of the proper stimulus-response mapping. Another recent study [13] further suggested that processes reflected by the Nc are generally evident after reactions (reflecting motor response monitoring), and that errors add specific processes on top of these, constituting the Ne [13,17]. The Nc has occasionally been found to be enhanced in healthy elderly [18], while the Ne has been reported to be reduced in the elderly [19]. Similarly, abnormally large Ncs have been observed in patients with PFC lesions [18] and patients with schizophrenia, a disorder known to be associated with PFC dysfunction [20,21]. According to Coles et al. [22], damage to the prefrontal cortex, or to the pathway from the prefrontal cortex to the basal ganglia, leads to disturbed representations of the correct response and hence to abnormally large Ncs on correct trials. The prefrontal cortex has been found to be dysfunctional in PD [23][24][25], which may well affect the Nc and hence lead to abnormally large Ncs in PD.
In the light of decreased dopaminergic function in older compared to younger subjects, as well as in PD patients compared to healthy controls [26,27], this may suggest that an error-specific activity (i.e., the Ne) protecting a task-relevant representation is reduced in its function, while a more general activity, which is evident in correct (Nc) and error trials [13], is enhanced, possibly reflecting a compensatory mechanism. Such a compensatory pattern may be particularly present in newly diagnosed PD patients [28].
In summary, the study specifically examines differences in tonic and phasic post-response monitoring processes between de novo PD patients and healthy controls. Our objective was to test the following hypotheses: first, based on the assumption that the amplitude of the Ne depends on the DA system, we expect that the amplitude of the Ne will be reduced in drug-naive PD patients. Second, the Nc amplitude should be enhanced in de novo PD patients, reflecting increased overall response monitoring [29] or an impairment of the correct response representation [22] because of the prefrontal cortex dysfunctions frequently observed in PD [23][24][25].
Subjects
Fourteen newly diagnosed drug-naïve patients with idiopathic PD (7 women) were recruited via the PD outpatient unit of the Neurological Clinic, St. Josefs-Hospital, Ruhr-University of Bochum (RUB) and the Neurological Clinic, Klinikum Dortmund. The mean age of the patients was 59.6 years. Parkinson's disease was diagnosed by means of clinical assessment by the co-authors T.M. and M.S. Subsequent to the initial clinical diagnosis, all patients were immediately enrolled in the study (between 1 and 3 days after clinical diagnosis). Treatment was postponed until the study protocol (ERP examination) was completed. To each patient a healthy control subject (N = 14) was matched by age, sex, and educational background. The mean age of the controls was also 59.6 years. None of the control subjects had any history of other neurological or psychiatric disorders, or was taking any drugs affecting the central nervous system. All participants gave signed informed consent after they were informed about the purpose of the study and the protocol was explained to them. The entire study was approved by the ethics committee of the University of Münster. The sociodemographic data of the subjects are given in Table 1. All subjects were tested with a battery of standard intelligence and neuropsychological tests in a separate session before the main EEG session. The Multiple Choice Intelligence Test (MWT-B) [30] is a test of crystallized intelligence routinely used in Germany. As a neuropsychological test of executive functioning, the Wisconsin Card Sorting Test (WCST) [31] was used. In order to control for depression, the German version of the Beck Depression Inventory (BDI) was administered [32]. Clinical testing was conducted with the Unified Parkinson's Disease Rating Scale (UPDRS) [33]. The neuropsychological data are also given in Table 1. In the neuropsychological tests there was no significant difference between the patients and the controls.
The overall depression score was relatively low and well below the threshold for depression. However, it was higher in the patients (8.3) than in the controls (2.5) (t = 4.3, p < .0001).
All participants including PD-patients were free of any medication.
Modified flanker task
The task was originally designed by Kopp et al. [34] and slightly modified for our study. The stimuli consisted of vertical arrays of arrowheads or circles (see Figure S1). The central part of the stimulus was defined as target. When the target was an arrowhead the subjects had to press a button on the side the target pointed to; when the target was a circle, no response had to be given (Nogo trials). Above and below each target a flanker was presented which pointed either to the same side (congruent trials) or to the opposite side (incongruent trials) of the target. Nogo and incongruent trials had a probability of 20% each, congruent trials had a probability of 60%. By making the incongruent stimuli relatively rare we aimed at increasing interference and hence the error rate in the incongruent condition [35]. Right and left pointing flankers were equiprobable. The flankers preceded the targets by 100 ms (Stimulus Onset Asynchrony, SOA = 100 ms) to further strengthen their influence and consequently further increase the error rate in incongruent trials [36]. Flankers and targets were switched off 100 ms after target onset. The next flanker was presented 800 to 1200 ms (interval randomized) after the response of the subjects, or 1900 to 2300 ms after a Nogo target. Altogether 420 stimuli were presented in four blocks of 105 stimuli each, which were interrupted by short breaks. The subjects were asked to react as fast as possible to the arrowhead targets. A response was given with one of two joystick-like vertical bars. Pressure-sensitive buttons were mounted at the top of the bars and had to be operated with the right and left thumb. Time pressure was administered by an individual deadline and was determined using the error rates in the training session as indicator. A feedback tone (1000 Hz) was presented 500 ms after the response, if the RT was slower than the deadline RT.
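The trial proportions described above (60% congruent, 20% incongruent, 20% Nogo; 420 stimuli in four blocks of 105) can be sketched as a simple sequence generator. The within-block randomization scheme (fixed proportions per block, shuffled) is an assumption for illustration, not a detail stated in the paper.

```python
import random

def make_blocks(n_blocks=4, block_len=105, seed=0):
    """Randomized trial-type sequences with fixed per-block proportions."""
    rng = random.Random(seed)
    blocks = []
    for _ in range(n_blocks):
        n_con = block_len * 3 // 5          # 60% congruent -> 63 per block
        n_inc = block_len // 5              # 20% incongruent -> 21 per block
        trials = (["congruent"] * n_con
                  + ["incongruent"] * n_inc
                  + ["nogo"] * (block_len - n_con - n_inc))
        rng.shuffle(trials)
        blocks.append(trials)
    return blocks

all_trials = [t for block in make_blocks() for t in block]
print(len(all_trials), all_trials.count("congruent"))  # 420 252
```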
EEG recording and analysis
During task performance the electroencephalogram (EEG) was recorded from 26 electrodes: Fp1, Fpz, Fp2; F7, F3, Fz, F4, F8; FC5, FC3, FCz, FC4, FC6; C3, Cz, C4; P7, P3, Pz, P4, P8; M1, M2; O1, Oz, O2. The vertical EOG was recorded from 4 electrodes above and below both eyes, and the horizontal EOG from 2 electrodes at the outer canthi of the eyes. The amplifier EPA-5 (Sensorium Inc.) was used. The forehead was used as ground. The primary reference was Cz. In addition to EEG and EOG, the response forces of both hands were measured, as outlined above. EEG, EOG and force data were sampled at 500 Hz (Acquire, Neuroscan Inc.) and stored continuously on a PC hard disk together with stimulus and response markers. The data were analyzed off-line using Vision Analyzer (Brain Products, Munich). The EEG was filtered off-line with a filter bandwidth of 0.5-16 Hz. EEG segments beginning 200 ms before and ending 400 ms after the response were cut out and averaged separately for correct and error responses. The ERP data were re-referenced to the average reference to make them independent of any specific reference such as the mastoid. Only the data of the incongruent trials were used for ERP analysis. The Ne in the error trials, and the Nc in the correct trials, were measured as the largest negative peak at FCz within a window of 20 to 120 ms after the response, relative to the baseline.
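The peak measure described above (largest negative deflection at FCz in the 20-120 ms post-response window, relative to baseline) can be sketched on a single response-locked epoch. The toy epoch is hypothetical, and using the mean of the 200 ms pre-response interval as the baseline is an assumption for illustration.

```python
FS = 500  # Hz, sampling rate used in the recording

def ne_amplitude(epoch, fs=FS, pre_ms=200, win_ms=(20, 120)):
    """Largest negative peak within win_ms after the response, baseline-corrected.

    epoch: voltages (µV) sampled from -pre_ms to +400 ms around the response.
    """
    r0 = pre_ms * fs // 1000              # sample index of the response marker
    baseline = sum(epoch[:r0]) / r0       # mean of the pre-response interval
    lo = r0 + win_ms[0] * fs // 1000      # 20 ms -> 10 samples after the response
    hi = r0 + win_ms[1] * fs // 1000      # 120 ms -> 60 samples after the response
    return min(epoch[lo:hi + 1]) - baseline

# toy epoch: flat signal with a single negative deflection 60 ms post-response
epoch = [0.0] * 300                       # 600 ms at 500 Hz
epoch[130] = -6.5                         # sample 130 = +60 ms
print(ne_amplitude(epoch))                # -6.5
```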
Ethics
Parkinson's disease patients were recruited from local clinics, the neurological department of the University of Bochum and the municipal hospital Dortmund. Healthy controls were recruited by newspaper announcements. All participants gave written informed consent. For the Parkinson's disease individuals, a family member was aware of the recruitment for the study and was involved in the consent procedure. The study was approved by the ethics committee of the University of Münster.
Statistical methods
Data from fourteen de novo PD patients (N = 14) and fourteen healthy controls (N = 14) were analyzed. There were no drop-outs. Reaction times (RTs) and error rates were analyzed as behavioural measures. Neurophysiological processes on correct and erroneous trials were analyzed in a repeated-measures ANOVA using the within-subject factor "correctness" (correct vs. error) and the between-subject factor "group" (controls, de novo PD). The degrees of freedom were adjusted using the Greenhouse-Geisser correction when appropriate. Due to their higher test power, one-sided tests were performed. The mean and standard error of the mean (± SEM) are given. Post-hoc tests were calculated using the Bonferroni correction. For statistical analysis, SPSS 15.0 was used.
Behavioral data
Reaction times (RTs) on error and correct trials were subjected to a repeated-measures ANOVA with the within-subject factor "correctness" and the between-subject factor "group". RTs differed between error and correct trials, being significantly longer on correct (512.1 ± 13.4 ms) than on error trials (345.07 ± 12.1 ms) (F(1,26) = 342.78; p < .001). Additionally, there was a main effect of "group" (F(1,26) = 6.01; p = .021), showing RTs to be longer in the de novo group (457.9 ± 16.9 ms) than in the control group (399.2 ± 16.9 ms). There was no interaction "correctness × group" (F(1,26) = 0.2; p > .8), indicating that RTs were always longer for correct than for error trials, regardless of group.
For the error rates, a repeated-measures ANOVA was also calculated using the within-subject factor "trial type" (congruent, incongruent, Nogo) and the between-subject factor "group". A significant main effect of trial type was obtained (F(2,52) = 50.96; p < .001; η² = .66), showing that error rates were lowest on congruent trials (0.52 ± 0.…). Figure S2 shows the response-locked ERPs after correct and incorrect responses for de novo patients and controls at FCz.
Neurophysiological data
A clear Ne is seen for error trials, while the correct trials exhibit a smaller negativity with shorter latency, the Nc. The Ne appears smaller, and the Nc larger, in the patients vs. the controls. The difference between Ne and Nc appears very small in the patients. Neurophysiological data were analyzed in a repeated-measures ANOVA using the within-subject factor "correctness" and the between-subject factor "group". The amplitudes of ERPs after error and correct responses differed from each other (−4.33 ± 0.82). For the Nc the pattern was reversed, as the Nc was larger in the de novo group (−3.16 ± 0.61) compared to controls (−1.19 ± 0.61) (F(1,26) = 5.23; p = .031). For the de novo PD patients, the Ne differed from the Nc (F(1,13) = 5.43; p = .037; η² = .29). Yet, in the control group this difference was larger (F(1,13) = 34.92; p < .001; η² = .73). Values for the Ne and Nc for each individual patient-control pair are given in Figure S3.
The amplitude of the Ne in the de novo group was unrelated to their RTs in error trials, even though these were prolonged compared to controls (r < .1; p > .4). For the correct trials, only a trend towards a relation was obtained (r = −.404; p = .066). Correlating the Ne and Nc amplitudes with the BDI score revealed significant correlations only in the de novo PD group, but not in the controls (Ne: r = −.627; p = .008; Nc: r = −.501; p = .034). The correlations show that a higher BDI score was related to higher Ne or Nc amplitudes. Yet, in no de novo PD patient was the BDI above the critical cut-off value.
It may be suspected that the Nc waveform is contaminated by residual stimulus-related ERPs. To rule out this possibility, we computed stimulus-locked waveforms on correct trials. As can be seen in Figure S4, amplitudes were not higher in the PD compared to the control group (F(1,26) = 2.22; p > .15). Hence, even if the Nc is affected by these stimulus-locked ERPs, it should have been modulated in the opposite direction, i.e., there should be a reduction of the Nc in de novo PD patients, which was not the case. Thus, the ERP waveforms obtained for the Nc are unlikely to be biased by differences in stimulus processing.
Discussion
In the current study we assessed post-response processing functions in recently diagnosed PD patients compared to healthy controls. While the Ne was reduced even in the de novo patients relative to healthy controls, the Nc was enhanced in the patients. This pattern cannot be attributed to different performance levels, as the groups did not differ in error rates.
Furthermore, it can be ruled out that the Nc waveform is contaminated by residual stimulus-related ERPs, since the stimulus-locked ERP amplitudes in correct trials were not higher in the PD compared to the control group. However, the RTs were generally prolonged in the de novo PD group, which is likely due to the pathogenic mechanisms. In an earlier study [10], the (well-medicated) patients had no prolonged RTs in comparison to matched controls in the same flanker task. This suggests that L-DOPA medication speeds up RTs [37]. The prolongation of RTs seems to be unimportant for the modulation of the Ne, as no correlation was found between these parameters. As basic neuropsychological scores did not differ between the groups, the results show a clear advantage of neurophysiological measures over standard neuropsychological tests for detecting early cognitive changes in PD.
The reduction of the Ne in the patient group is in line with the reinforcement-learning hypothesis [3]. In light of this, it is interesting that phasic DA signals in medial frontal areas, as reflected in the Ne, are decreased, while neurophysiological processes reflected by the Nc are enhanced. As hypothesized in the introduction, Nc and Ne may both depend on the activity of the DA system serving as a gating signal that instructs the network when to maintain a given activity state [4]. This may strengthen current representations, protecting them against interference by irrelevant distracting information [4,5]. Given this, the results suggest that de novo PD patients show increased overall motor response monitoring (Nc) [13] and hence a strengthening of motor response representations. It may be hypothesized that this alteration in medial frontal activity comes at the cost of error-specific dopaminergic increases: the system controlling motor response monitoring is more demanded in de novo patients than in controls. If this system is controlled by the DA system, dopaminergic prefrontal neuron assemblies may not be able to alter their firing sufficiently to meet the additional demands that error monitoring processes add [13]. In healthy subjects, where dopaminergic neuron assemblies are less strained during motor response monitoring, this alteration in firing is possible to a larger extent. Together, these processes may result in a pattern of an increased Nc and a reduced Ne. In our previous studies [8,10] the Nc was not found to be significantly altered in long-term medicated patients. The pattern of results in de novo PD patients may therefore be an expression of compensatory processes, which are likely mediated via dopaminergic neurons in PD [38]. However, other studies have not confirmed the importance of this system [28].
Even though these are predominantly manifest in presymptomatic stages of PD [38] they may persist with reduced efficacy in very early stages of PD (i.e. de novo PD). This pattern of reduced Ne and enhanced Nc resembles what is often seen in healthy elderly vs. young subjects, or in frontal brain patients vs. elderly subjects [18]. Hence a pattern of compensatory enhancement of small DA signals at the cost of strong DA signals after errors appears to exist in normal aging and some CNS diseases. Critical for this interpretation in terms of a general mechanism, may be the finding that the scalp topographies of the Nc and Ne were only similar in the de novo PD group, but not for the controls.
A more probable explanation is therefore that the results in the PD group are due to an impairment of the correct response representation, possibly caused by prefrontal dysfunctions in PD [23][24][25]. As a consequence, errors are not always detected and correct responses may sometimes be misinterpreted as errors. Such an alternative interpretation has been put forward by Coles et al. [22]. It is also supported by the current data, since the topographies were similar for correct and error trials, but only in the de novo PD group and not in controls. As RTs were prolonged in the de novo group, likely due to a general slowing of motor functions, it may also be hypothesized that this slowing reflects such an increased overall response monitoring in an early stage of PD. It is possible that the higher BDI score in the de novo patients led to altered response monitoring (Nc), although the Ne amplitude was significantly reduced in the de novo patients compared to controls.
From a clinical point of view it is highly relevant that the Ne reduction is fairly large in newly diagnosed patients who exhibit only subtle signs of manifest PD (UPDRS part III of 12.7). Hence the Ne may have potential as an additional diagnostic tool in the early diagnosis of PD. Future clinical studies will have to show whether the observed modulations of the Ne are specific to particular neurodegenerative diseases (e.g. PD, Huntington's disease, supranuclear palsy), to other neurological diseases (e.g. multiple sclerosis), and to stages within a disease.
Future longitudinal studies may further examine whether the Ne becomes larger and the Nc smaller with treatment over the course of the disease, which would suggest that the Ne and Nc may also be useful markers of treatment success. Figure S1 Stimulus arrays of the modified flanker task. Depicted are the stimuli for the congruent, incongruent (right-hand response) and Nogo (no response) conditions. | 2014-10-01T00:00:00.000Z | 2009-03-27T00:00:00.000 | {
"year": 2009,
"sha1": "164797aa3edcfa2dae07baff175bfefe9c80b6d8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0004898&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "89042e30027af5f48ce80ad79f1173c26759049d",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
54738827 | pes2o/s2orc | v3-fos-license | Isolation of Lactic Acid Bacteria That Produce Protease and Bacteriocin-Like Substance From Mud Crab (Scylla sp.) Digestive Tract
The digestive tract is a complex environment consisting of a large number of bacterial species. Fish intestinal bacteria are aerobic or facultatively anaerobic bacteria that can produce antibacterial substances and enzymes. The objectives of this research were to isolate lactic acid bacteria that produce bacteriocin-like substances and protease from the mud crab digestive tract. Isolation and characterization of isolates were conducted on MRS medium. Neutralized cell-free supernatants of the isolates were tested by disc diffusion agar against pathogenic and spoilage bacteria to identify bacteriocin-like-substance-producing lactic acid bacteria. Protease-producing isolates were identified using the disc diffusion method on casein agar. Among a hundred isolates, 96 showed clear zones on MRS+CaCO3, were catalase negative, and were Gram positive. Thirty-four isolates produced protease and only four isolates (i.e. IKP29, IKP30, IKP52, and IKP94) showed strong inhibition against pathogenic and spoilage bacteria. There were three patterns of inhibition among the four isolates against Bacillus subtilis, Staphylococcus aureus, Escherichia coli, and Salmonella sp. All four isolates show potential for use as starter cultures for fishery product fermentation. This is the first report of the isolation of lactic acid bacteria producing protease and bacteriocin-like substances from the digestive tract of mud crab.
Protease is among the most important enzymes for industrial and fermentation purposes. Microbial proteases are a group of enzymes with applications in numerous industries, pharmaceutical processes, the food industry, oil extraction, etc. (Borla et al., 2010). Proteolytic bacteria play an important role as culture starters for fermentation. The function of bacterial peptidases in meat fermentation is to break down oligopeptides into amino acids (Sanz et al., 1999). For these reasons, the objectives of this research were to isolate and characterize LAB that produce a bacteriocin-like substance and protease from the digestive tract as candidate culture starters for fishery product fermentation.
Materials and Methods
Twelve mud crabs were purchased from fishermen of the Gunung Sari mangrove area, Surabaya, in December 2014. The mud crabs were transported to the laboratory in a cool box with ice (±5 °C).
Segregation of digestive tract
The digestive tract was taken from each mud crab specimen according to Talpur et al. (2012). Briefly, the mud crab specimen was sprayed with 70% alcohol prior to segregation. Segregation was conducted with a scalpel from the abdominal part to the buccal cavity. The digestive tract of the mud crab was removed directly using sterile tweezers.
Enrichment procedures
Each digestive tract of the 12 mud crabs was placed separately into an Erlenmeyer flask containing sterile MRS broth and incubated overnight at 37 °C prior to the isolation process.
Isolation of lactic acid bacteria
One mL of each dilution (10^-1, 10^-2, 10^-3) of the enriched digestive tract samples was spread-plated onto de Man, Rogosa, and Sharpe (MRS, Merck) medium prepared in sea water (30 ppt) and supplemented with 0.5% CaCO3, then incubated at 37 °C for 24 h. Colonies that showed a clear zone were aseptically picked, streaked onto MRS medium, and incubated at 37 °C. Pure cultures from the plates were subjected to the catalase test and Gram staining. Isolates that were catalase negative and Gram positive were mixed with 50% glycerol and stored in a freezer (-10 °C) until used for further tests.
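Colony counts from a serial dilution series like the one above are conventionally converted to a viable count. A minimal sketch of that standard calculation follows; the colony counts are invented placeholder values, since none are reported in the text.

```python
# Standard viable-count calculation for serial dilution plating:
# CFU/mL = colonies / (volume plated x dilution factor).
def cfu_per_ml(colonies, dilution, volume_plated_ml):
    """Return the estimated colony-forming units per mL of original sample."""
    return colonies / (volume_plated_ml * dilution)

# e.g. 48 colonies on the 10^-3 plate after plating 1 mL (hypothetical count):
print(round(cfu_per_ml(48, 1e-3, 1.0)))  # -> 48000
```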
Screening of protease-producing bacteria
LAB isolates were thawed and 100 μL of the master culture was grown in one mL of MRS broth for 18 h. Ten μL of culture was dropped onto a sterile paper disc placed on the surface of casein agar (1% skim milk, 1.5% agar, pH 6.5) and incubated for 2-3 days. Protease-producing isolates exhibited a clear zone around the paper disc.
Antimicrobial activity
Isolates that exhibited proteolytic activity were tested for antimicrobial activity using the disc diffusion agar method against six indicator bacteria (Table 1). Ten μL of a 24 h culture was dropped onto a sterile paper disc placed on the surface of tryptone soya agar overlaid with the indicator bacteria. Plates were incubated at 37 °C for 24 h. Positive antimicrobial activity was shown by a clear zone surrounding the paper disc containing the lactic acid bacteria isolate.
Bacteriocin-like substance activity
Isolates that showed antimicrobial activity were cultured in 1.5 mL microtubes for 18 h and centrifuged at 5000 rpm for 15 minutes. The supernatant was transferred into a sterile microtube and the pH was adjusted to 6.5 before heating at 100 °C for 3 min to inactivate enzymes and bacteria; this is hereafter called neutralized cell-free supernatant (NCFS). Ten μL of NCFS was transferred onto a sterile paper disc placed on the surface of MRS agar overlaid with L. plantarum. Positive bacteriocin-like substance activity was shown by a clear zone surrounding the paper disc containing the NCFS of the LAB isolate.
Isolation of lactic acid bacteria
A hundred isolates that showed clear zones on MRS+CaCO3 medium were isolated. Among these 100 isolates, 96 had the characteristics of being catalase negative and Gram positive. All isolates had a spherical shape with a milky white or white colour. These results are similar to those of Nursyirwani et al. (2011), who obtained 21 isolates of lactic acid bacteria from the digestive tract of tiger grouper, round in shape with a diameter of 0.25-2 mm and a white or beige colour. Jokovic et al. (2008) compared direct and enrichment methods for the isolation of lactic acid bacteria from a traditional Serbian milk product (kajmak) and showed that isolation by the enrichment method was more effective, because the lactic acid bacteria were present in the product in only small amounts.
Screening of protease production
Screening of the 96 LAB isolates for protease production showed that 34 isolates (35.41%) were able to produce protease, as shown by clear zones on casein agar medium. Udomsil et al. (2010) isolated 64 protease-producing lactic acid bacteria from fish sauce. Bacterial proteases are generally used to break down oligopeptides into amino acids (Sanz et al., 1999), and this is one of the important criteria for a LAB isolate to be used as a starter in the production of fermented fish. Similarly, Zheng et al. (2014) state that protease production by lactic acid bacteria is one of the important criteria for producing aroma in fermented fish products.
Antimicrobial activity
The antimicrobial activity tests showed that, of the 34 protease-producing isolates, 24 produced inhibition zones against the indicator bacteria (Table 2). Among these 24 isolates, four showed high antimicrobial activity against several indicator bacteria, i.e. IKP29, IKP30, IKP52, and IKP94. There were three patterns of inhibition among the four isolates.
Lactic acid bacteria are able to produce organic acids, H2O2, reuterin, and bacteriocins that inhibit the growth of other bacteria in the ecosystem (Cotter et al., 2005). The organic acids produced by lactic acid bacteria acidify the environment, so that bacteria unable to grow at low pH cannot maintain cell homeostasis, resulting in cell death. The ability of lactic acid bacteria obtained from the digestive tract of mud crabs to inhibit the growth of both Gram-positive and Gram-negative bacteria is an important characteristic for the application of these bacteria as biopreservative agents.
The four LAB isolates inhibited the growth of pathogenic bacteria that commonly occur in food products, S. aureus and Salmonella sp. This is in accordance with the findings of Hor and Liong (2014), who showed that lactic acid bacteria can inhibit the growth of S. aureus. Thelma et al. (2014) stated that lactic acid bacteria inhibited E. coli, Salmonella paratyphii, and Listeria monocytogenes. The ability of the four LAB isolates to inhibit pathogenic bacteria indicates their potential for application in biopreservation. Given their ability to also produce proteases, the four isolates could be used as starters for fermented fish products while helping to maintain the safety of the fermented food.
Production of Bacteriocin-like substance
The neutralized cell-free supernatants of the four isolates were tested for activity against the lactic acid bacterium L. plantarum. Similar tests have been performed using LAB indicators such as Lactobacillus sakei subsp. sakei JCM 1157 (Hwanhlem et al., 2014) and Lactobacillus brevis AP83, L. brevis H56, L. plantarum AP76, and L. plantarum H12 (Ghanbari et al., 2013). Based on this research, L. plantarum can be used as an indicator to test the bacteriocin-producing activity of bacteria, because this LAB is not inhibited by organic acids and H2O2 but is inhibited by bacteriocin. The NCFS of isolates IKP29, IKP30, IKP53, and IKP94 inhibited the growth of L. plantarum with zones ranging from 3 to 4 mm.
These results indicate that the inhibition of bacteria by all four isolates was carried out by bacteriocin, while the isolates also produce organic acids and hydrogen peroxide. Aslim et al. (2005) stated that microbial inhibition can be effected by organic acids, hydrogen peroxide, bacteriocin, or a combination of all three of these substances. All four LAB isolates produced a bacteriocin-like substance and protease and can therefore be used as culture starters for fishery fermentation production. This paper is the first report of the isolation of lactic acid bacteria from the digestive tract of mud crab that are able to produce protease and a bacteriocin-like substance.
Conclusion
The isolates obtained from the digestive tract of mud crab can produce protease and a bacteriocin-like substance. Four such isolates, namely IKP29, IKP30, IKP53, and IKP94, were obtained and have the potential to be developed as starter cultures for the fermentation of fishery products such as fish sauce and oyster sauce.
Table 1 .
Indicator bacteria used for the antimicrobial activity test | 2018-12-15T05:56:13.808Z | 2015-03-01T00:00:00.000 | {
"year": 2015,
"sha1": "a0860c3d76abe74f002808b861952ef5b3e981de",
"oa_license": "CCBYSA",
"oa_url": "https://ejournal.undip.ac.id/index.php/ijms/article/download/8826/7143",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a0860c3d76abe74f002808b861952ef5b3e981de",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Biology"
]
} |
214810490 | pes2o/s2orc | v3-fos-license | Colorectal Cancer (CRC) treatment and associated costs in the public sector compared to the private sector in Johannesburg, South Africa
Background South Africa's divided healthcare system is believed to be inequitable: the populations served by each sector and the treatment received differ, while annual healthcare expenditure is similar. The appropriateness of treatment received, and in particular the cost of the same treatment between the sectors, remains debatable and raises concerns about equitable healthcare. Colorectal cancer places considerable pressure on funders, yet treatment utilization data and the associated costs of non-communicable diseases, in particular colorectal cancer, are limited for South Africa. Resources need to be appropriately managed while ensuring equitable healthcare is provided regardless of where the patient receives treatment. The aim of this study was therefore to determine the cost of colorectal cancer treatment in a privately insured patient population in order to compare the costs and utilization with a previously published public sector patient cohort. Methods Private sector costs were determined using de-identified claims-based data for all newly diagnosed CRC patients between 2012 and 2014. The costs obtained from this patient cohort were compared to previously published public sector data for the same period. The costs compared were those incurred by the relevant sector funder and did not include out-of-pocket costs. Results The comparison shows that private sector patients gain access to more of the approved regimens (12 vs. 4), but the same regimens are more costly; for example, CAPOX costs approximately €150 more per cycle. The cost difference between 5FU and capecitabine monotherapy is less than €30 per cycle; however, irinotecan is cheaper than oxaliplatin in the private sector (FOLFOX approx. €500 vs. FOLFIRI approx. €460). Administrative costs account for up to 45% of total costs, compared to the previously published figure of < 15% of the full treatment cost in South Africa's public healthcare system.
Conclusion This comparison highlights the disparities between sectors while illustrating the need for further research to improve resource management to attain equitable healthcare.
Background
Currently the South African healthcare system is divided into two sectors, public and private. While the majority of the South African population (85%) makes use of public healthcare, only 15% subscribe to private medical insurance, i.e. medical aid schemes, which must provide a prescribed minimum benefits (PMB) package similar to the care received in the public healthcare sector [1,2]. The public healthcare system is funded from annual national income tax collection, and resource allocation is overseen by the National Department of Health (NDoH) via the individual Provincial Health Departments [2]. Part of this resource allocation includes medicine selection and access through the Essential Drugs Program (EDP), comprising the Essential Medicines List (EML) and Standard Treatment Guidelines (STGs). These are used as a guideline for the PMBs as set out by the Medical Schemes Act [3,4].
The private healthcare sector is therefore aimed at middle- and high-income earners, better covering their healthcare needs through increased access to medicines and healthcare professionals within the country [5]. Although medical services and medicines are covered by the medical insurance schemes, co-payments are frequently paid by the beneficiaries [1]. In addition, medicine selection is based on individual scheme formularies and benefit designs, with regulated medicine pricing implemented by the NDoH to ensure cost transparency within the sector [6]. Although pricing regulation and annual price adjustments ensure transparency, they do not govern the initial price of medicines as set internationally by pharmaceutical companies. Cost differences within the private sector are therefore common within a medicine class for a disease area, in particular cancer [7]. Overall total expenditure remains similar between the two funders despite the difference in the size of the population benefiting [1,2]. This substantiates the belief that the South African healthcare system is inequitable, especially for diseases that receive less attention, such as cancer.
A competitive bidding process in the public healthcare sector allows the best possible price to be obtained for the medicines prescribed [8,9]. Although this lowers the cost per medicine class, the range of choice between individual medicines within a class is not provided as it is in the private healthcare sector.
Despite South Africa being classified as an Upper-Middle Income Country (UMIC) according to the 2016 World Bank statistics, the treatment currently available for colorectal cancer (CRC) in South Africa differs between the two healthcare sectors (Table 1) [10,11]. Additionally, private healthcare sector patients have access to many of the medicines available in High Income Countries (HICs) [12] such as the USA and EU [13][14][15][16][17][18][19]. This difference influences not only the number of chemotherapy treatment regimens oncologists are able to prescribe in each sector but also the costs associated with CRC treatment. While practices such as clinical treatment pathway implementation have been employed to curb the rising cost of cancer treatment in HICs, the limited published information indicates that such clinical treatment pathways are not adequately in use in either healthcare sector within South Africa. EMLs and formularies do, however, direct medicine prescribing in the two healthcare sectors (Table 1) [20][21][22][23].
Recent research on a database similar to the one used in this study focused on surgical procedures and outcomes for CRC in a privately insured patient cohort. While valuable information was obtained, that research does not address issues around cost and concerns only one of South Africa's health sectors [24]. Apart from this analysis, published literature on the costs of CRC chemotherapy in the private healthcare sector, and the differences between the sectors, is lacking. The determination of these costs is an important and much-needed contribution as South Africa moves towards the implementation of Universal Health Coverage.
Thus the aim of this study was to compare a previously published South African public healthcare sector patient cohort's medicine utilisation and the associated costs by the same authors (Herbst et al., 2018) to a private South African medical aid scheme's claims (costs of chemotherapy submitted for payment) data for the same period.

Table 1 footnotes: Subsequent to this study, panitumumab and regorafenib have been approved for use by SAHPRA (South African Health Products Regulatory Authority) but were previously available through a Section 21 "named patient" use application. c Not yet registered for use in South Africa but available through a Section 21 named patient application.
Patient cohort database
The cohort inclusion criteria, such as "newly diagnosed", outpatient treatment setting and type of cancer treatment, along with the study period, were based on a previously published cohort study performed in South Africa's public healthcare sector, allowing a comparison of costs between the sectors [25]. Three de-identified claims-based data sets were obtained from a private medical scheme and manually sorted to include all newly diagnosed CRC patients between 2012 and 2014. The claims data allowed for at least 12 months of follow-up, so costs up to the end of 2015 were requested. Only chemotherapy and related medicine treatment was included. The data sets received were named as follows for the purpose of this study: A1 - medical claims for chemotherapy and related medicines; A2 - non-medical claims, i.e. administrative costs for outpatient services; and A3 - demographic and disease-related data. Patients were excluded if diagnosis was prior to 2012 or after 2014, or if no demographic or non-medical data was received for a patient included in data set A1.
All patient identifiers in each data set were coded and known only to the researchers for the duration of the study. The final complete data set comprised two smaller data sets (Fig. 1: Flow diagram showing the process of obtaining the final patient cohort included in the study).
Patient cohort demographics and treatment pathways
Demographic data included age, gender, diagnosis and surgery. Patient diagnosis was simplified into early CRC (no evidence of metastasis found) [26] and late CRC (evidence of metastasis found) [27]. The initial diagnosis was established as per the data received from the medical scheme but was changed to late CRC if subsequent evaluation of the treatment pathway indicated metastasis. The per-patient treatment pathways were manually derived from the final merged data set, which was remodeled to include additional classifications such as Anatomical Therapeutic Chemical (ATC) codes [28] for medicines, allowing each claim to be sorted into groups including administration medicine, chemotherapy, diagnostic/radiation medicine, pain management, supportive medicine or secondary supportive medicine. A two-dimensional pivot table was constructed in Excel for Mac (2011) to summarise each patient's treatment and subsequently develop each patient's treatment pathway according to sequential claim dates. The criteria applied to obtain the final treatment pathways and diagnoses per patient are shown in Table 2.
Patient cohort demographics were analyzed, including patient numbers per diagnosis as well as the mean, median and range of age for each diagnostic subgroup. The number of patients who underwent surgery was also calculated.
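The descriptive summary just described (mean, median and range of age per subgroup) can be sketched as follows; the ages shown are invented placeholder values, not cohort data.

```python
from statistics import mean, median

def summarise_ages(ages):
    """Return the mean, median, and range (min, max) for a list of patient ages."""
    return {
        "mean": round(mean(ages), 1),
        "median": median(ages),
        "range": (min(ages), max(ages)),
    }

# Hypothetical ages for illustration only, not the study cohort.
early_crc_ages = [55, 61, 63, 70, 48]
print(summarise_ages(early_crc_ages))  # -> {'mean': 59.4, 'median': 61, 'range': (48, 70)}
```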
Per patient treatment costs
Using two-dimensional pivot tables, all medical (claims for chemotherapy and related medicines) and non-medical (administrative costs for outpatient services) costs per patient, claimed through the private medical scheme, were collated. Claims data up to the end of 2015 was used to include at least 12 months of follow-up for patients enrolled in late 2014.
All claimed costs were adjusted to the last claimed cost in 2014 for each respective medicine or non-medical description in order to allow comparison with the public sector cohort results, which only published 2014 costing data [25]. All costs were converted to Euros using the average annual exchange rate of €1 = ZAR 14.40 (September 2018) [29].
If the quantities claimed did not match the cost claimed, the quantities were adjusted to reflect the claimed costs.
Medicines for which no claim could be found in 2014 were priced from the August 2014 private sector medicines price database (http://www.mpr.gov.za/PublishedDocuments.aspx). In instances where a 2014 price was unavailable, the final adjusted price was calculated using the annual medicine price increases [30][31][32]. Medicines obtained via Section 21 "named patient" approval and claims classified as "ethical nonspecific" (e.g. haemodialysis concentrate) were adjusted by the annual average Consumer Price Index (CPI) increase for 2014 as per the Inflation.eu website (https://www.inflation.eu) [33]. Using the adjusted cost data, all average costs per patient were calculated.
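The two adjustments described above can be sketched as a short calculation: compounding annual CPI increases up to 2014, then converting ZAR to EUR at the study's stated average rate of €1 = ZAR 14.40. The CPI rates and claim value below are illustrative assumptions, not the published figures.

```python
ZAR_PER_EUR = 14.40  # average annual exchange rate used in the study

def adjust_by_cpi(cost_zar, annual_cpi_rates):
    """Uplift a claimed ZAR cost by compounding one CPI rate per year."""
    for rate in annual_cpi_rates:
        cost_zar *= 1 + rate
    return cost_zar

def to_euro(cost_zar):
    """Convert a ZAR cost to EUR at the average annual exchange rate."""
    return cost_zar / ZAR_PER_EUR

claim_2012 = 1000.0  # hypothetical ZAR cost claimed in 2012
adjusted_2014 = adjust_by_cpi(claim_2012, [0.057, 0.061])  # two years of illustrative CPI
print(round(to_euro(adjusted_2014), 2))  # -> 77.88
```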
The total cost per cycle for each regimen observed in the cohort's treatment pathways was filtered by CRC stage, and the average claimed cost per medicine was determined in order to calculate the average cost per regimen (formulae in Supplement). The average cost per cycle for each regimen was determined using the treatment pathways developed for the cohort (formulae in Supplement). The average dose of each chemotherapy medicine was calculated to allow dosage comparisons based on the average cost per medicine as well as the cost per vial or tablet using the lowest-cost generic. This cost was selected to allow comparison between this cohort and the published public sector cohort data [25].
The non-medical costs were obtained by calculating the average administrative cost per regimen. The administrative costs included a global fee (the fee charged for the management and services delivered during the treatment day) and a facility fee. For simplification, the global and facility fees were averaged for an oral and an intravenous regimen. The costs per cycle and the total adjusted costs were calculated based on the average number of cycles per regimen from the medical data. It must be noted that administrative costs are independent of patient diagnosis but depend on how the chemotherapy is administered. Consultation fees were excluded, as various medical specialties, including medical oncologists, radiation oncologists and general practitioners, submit differing claims, although a consultation fee claim for every chemotherapy cycle was noted for every patient. Lastly, the total average cost per treatment regimen was calculated (formulae in Supplement).
Comparison of cohort to previously published public sector data [25]
The demographics and average costs calculated for our cohort were compared to previously published public sector research, conducted by the same authors of this study, in order to establish whether differences in CRC treatment, cost and access occur [25]. The comparison was from the funder's perspective, i.e. the Medical Scheme (private sector) and Government (public sector). Descriptive statistics were used to obtain the averages, means, medians and ranges for the data. Inferential statistics were not used in this costing study.

Table 2 Criteria applied to obtain final patient treatment pathways and diagnosis.
1. Treatment lines were determined by grouping chemotherapy medicines claimed together over a single 3-month period.
2. Diagnosis was finalised based on the data captured and the classification made by the medical scheme, and was changed to late CRC if a biological medicine was used in either 1st or 2nd line treatment or more than two lines of therapy were followed by a biological medicine.
3. Each treatment line was colour-coded within the pathway for each patient and the treatment criteria were applied to finalize the number of treatment lines. A change in treatment line occurred if:
• Oxaliplatin was switched to irinotecan or vice versa.
• A biological medicine was included or changed to another biological medicine.
No change in treatment line occurred if:
• A medicine was not prescribed for a certain number of cycles.
• 5-FU was switched to capecitabine or vice versa.
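The Table 2 line-change rules can be sketched as a small function. This is a simplified illustration only, not the study's actual implementation: the data shape, drug names and the reduction of each cycle to a "line key" are assumptions, and the sketch ignores the rule that temporarily omitting a medicine does not change the line.

```python
# A new treatment line starts when oxaliplatin is switched to irinotecan (or
# vice versa), or when a biological medicine is introduced or changed; a
# 5-FU <-> capecitabine swap does not start a new line.
BIOLOGICALS = {"bevacizumab", "cetuximab"}

def line_key(cycle):
    """Reduce one cycle's drug list to the features that define a line."""
    drugs = set(cycle)
    backbone = frozenset(drugs & {"oxaliplatin", "irinotecan"})
    biological = frozenset(drugs & BIOLOGICALS)
    return backbone, biological  # fluoropyrimidines are deliberately ignored

def count_treatment_lines(cycles):
    """cycles: chronologically ordered drug lists, one per chemotherapy cycle."""
    lines, previous = 0, None
    for cycle in cycles:
        key = line_key(cycle)
        if previous is None or key != previous:
            lines += 1
        previous = key
    return lines

# FOLFOX, then CAPOX (same line: only the fluoropyrimidine changed),
# then FOLFIRI + bevacizumab (new line: backbone and biological changed).
history = [["5-FU", "oxaliplatin"], ["capecitabine", "oxaliplatin"],
           ["5-FU", "irinotecan", "bevacizumab"]]
print(count_treatment_lines(history))  # -> 2
```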
Patient cohort demographics and treatment pathways
The private sector patient cohort comprised 729 males (56%) and 567 females (44%), with a mean age, regardless of CRC stage, of 63 years (range 23-91 years). More patients were diagnosed, according to the final classification, with early CRC (65%) than late CRC (35%). 84% of the cohort underwent primary surgery (~67% of whom had an early CRC diagnosis). Left-sided vs. right-sided CRC classification data was unavailable at the time, so analysis based on the origin of the cancer could not be performed. Based on the criteria used to determine a change in chemotherapy treatment line in each patient's treatment pathway, up to 6 lines of chemotherapy treatment (including adjuvant chemotherapy) were found, although few patients received more than 2 lines of chemotherapy (approx. 7%) regardless of stage. According to the initial treatment regimen received, more patients diagnosed with early CRC started on a capecitabine-containing regimen, whereas patients diagnosed with late-stage CRC more often received a 5-FU-containing regimen (60% vs. 40%).
Per patient treatment costs
The largest cost component of the observed regimens, for either subgroup, was the cost of the chemotherapy, particularly for regimens comprising multiple chemotherapy agents. In addition, administrative fees made a meaningful contribution to the overall cost per cycle (Fig. 2 and Fig. 3). The most expensive regimen differed per subgroup; one example was the use of FOLFOX + capecitabine for early CRC, which is unconventional and increases treatment costs, as the choice of regimen should be either FOLFOX or CAPOX.
Regorafenib was the most expensive regimen in the late CRC subgroup, although it was only used in multirefractory patients. Bevacizumab was cheaper than cetuximab, approx. €1000 vs. €1500 respectively, due to the difference in cost of the two monoclonal antibodies (Fig. 2: Early CRC regimen cycle cost for each claimed component as per the constructed treatment pathways; Fig. 3: Late CRC regimen cycle cost for each claimed component as per the constructed treatment pathways (A: chemotherapy alone; B: chemotherapy plus bevacizumab; C: chemotherapy plus cetuximab; D: single agents for refractory patients)). An unexpected result was the similar cost per cycle of 5-FU and capecitabine regimens in both the early and late subgroups, approx. €290 vs. €310 and €300 vs. €310 respectively. The cost per cycle of irinotecan monotherapy was lower than that of oxaliplatin monotherapy despite the increased administration cost of irinotecan-containing regimens, for either subgroup, approx. €350 vs. €410 and €370 vs. €420 respectively.
Comparison of cohort to previously published public sector data
The comparison between this study cohort and the previously published study highlights more differences than similarities. Apart from a similar gender split in the two cohorts (56% males: 44% females in this study vs. 55% males: 45% females), other demographic data such as age differ considerably (63 yrs. in this study vs. 57 yrs. in the public sector cohort). Patients in the private sector cohort were diagnosed at an earlier stage than in the public sector cohort (35% late CRC vs. 63%), which contributes to patients receiving more lines of treatment and therefore to the higher total treatment costs observed.
Discussion
The patient cohort included in this study is likely to be fairly representative of the private healthcare sector within South Africa, as the medical aid scheme's population is one of the largest in the country. In comparison, the public sector cohort was from only one public sector facility, albeit one servicing a large area within Johannesburg, the largest city in the country [25].
In comparison to the published public sector patient cohort [25], the gender proportions of the private sector cohort were similar. This trend follows the risk data seen in the SEER (Surveillance, Epidemiology, and End Results Program) statistics [34]. Interestingly, slightly more males than females are diagnosed within South Africa despite CRC being a non-gender-specific disease [35].
Additionally, the lower number of females affected in our cohort could be a result of their socioeconomic status, which influences differences in lifestyle. A prospective study conducted in Denmark found that patients who adhere to health recommendations reduce their risk considerably [36]. The average age at diagnosis in the public sector cohort was younger than in our private sector cohort, with the private sector cohort following similar global trends [37][38][39].
The stage at which the patients were diagnosed yields an interesting comparison. The number of patients with late CRC is greater in the previously studied public sector cohort [25]. Initially it was expected that, given the larger number of patients in our private sector cohort, there might be more patients diagnosed with late CRC; however, taking into account the socioeconomic status of the patients, healthcare resources and the asymptomatic timespan of the cancer, it is not unexpected to find more late-presenting CRC patients in the public sector.
Assessing the number of treatment lines between the two patient cohorts illustrates the difference in access to treatment. As expected, a higher percentage of the metastatic subgroup received at least one line of chemotherapy in comparison to the non-metastatic subgroup (99% vs. 89%). More patients in the private sector cohort received 2nd-line (13% vs. 19%) and 3rd-line treatments (0% vs. 5%); this is largely due to the absence of 3rd-line treatments in South Africa's public sector (Table 1), a consequence of the limited number of medicines available there.
Reasons for the use of unconventional chemotherapy in the private sector cannot be ascertained from the data; however, it is suspected to be either off-label use or to indicate the presence of a secondary cancer that was not captured in the claims database. Apart from the clinical inappropriateness, this adds an unnecessary contribution to each patient's total cost of treatment. One such example was the recorded use of carboplatin + paclitaxel, which increased the total cost by approximately €1600 (data not shown).
Comparing the treatment pathways developed in this study to international guidelines shows that many treatments available elsewhere globally were also available to private sector patients in South Africa. Standard therapies including bevacizumab and cetuximab were thus available to these patients, unlike patients accessing public healthcare [13-15, 18, 19, 22, 40-44]. This gives some indication that chemotherapy treatment for CRC in South Africa does follow international trends. At the time of the study, medicines such as aflibercept and panitumumab were absent, although aflibercept was available via a named-patient regulatory approval process. Regorafenib was prescribed for a few patients, as it was also available on a named-patient basis at the time of the study, and will most likely be prescribed more widely since recent local regulatory approval, although availability will also depend on funding for reimbursement.
Looking at the first-line treatments received in our patient cohort, capecitabine-containing regimens are favoured for early CRC patients (approx. 60%), whereas late CRC treatment pathways indicate a higher use of 5-FU-containing regimens (approx. 64%), even though capecitabine has proven non-inferior to 5-FU for any stage of CRC [45]. It was unexpected to see a greater use of 5-FU for late CRC disease in the private sector, but the majority of these regimens contain additional intravenous medicines, so there may be a preference to receive all treatment at once. Many patients diagnosed with late CRC disease in our cohort have access to newer biological agents that can only be administered intravenously, which may further contribute to a preference for 5-FU when used in combination with conventional regimens. Studies have indicated patient preference for capecitabine due to less toxicity and ease of administration; this raises an important issue in the private sector as to what the drivers are for choice of treatment [46,47].
In the previously published study, costs associated with late CRC treatment were higher than those for early CRC treatment [25], and the expectation was that our study would replicate this trend; however, this proved incorrect, with the average cost per cycle being similar between the stages for the same regimens. This is essentially due to similar dosages, which resulted from the assumption that the claimed vials were the prescribed doses. In clinical practice, however, the dosage may be lower, because vial wastage occurs in order to accommodate dosing by BMI (body mass index) or body weight. Wastage cost cannot be calculated from a claims database, but these factors should be considered, as Bach and colleagues (2016) found that single-dose vials can lead to overspending because vial sizes do not match the prescribed doses for many medicines. In addition, vial sharing may occur in larger practices [48,49]. While vial sharing limits the wastage of viable medicines and potentially curbs overall treatment costs, the funder is billed for the entire vial, so clinical practice data on dose and cost do not necessarily match. Vial sharing is one method suggested to curb costs; however, it is not recommended for all intravenous medicines [48,49].
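The vial-wastage effect described above can be made concrete with a small sketch. All numbers here are hypothetical illustrations (an assumed 153 mg prescribed dose and 50/100 mg vial sizes, not values from the claims data): whole single-dose vials are billed until the weight- or BSA-based dose is covered, and the difference between the billed amount and the prescribed dose is drug that is paid for but not administered.

```python
def vials_billed(dose_mg, vial_sizes_mg):
    """Cover a prescribed dose with whole single-dose vials (greedy, largest first)."""
    remaining = dose_mg
    used = []
    for size in sorted(vial_sizes_mg, reverse=True):
        while remaining >= size:
            used.append(size)
            remaining -= size
    if remaining > 0:  # top up with the smallest vial that covers the remainder
        used.append(min(s for s in vial_sizes_mg if s >= remaining))
    return used

# Hypothetical example: 85 mg/m2 x 1.8 m2 body surface area = 153 mg prescribed,
# with only 50 mg and 100 mg single-dose vials available.
billed = vials_billed(153, [50, 100])
wasted_mg = sum(billed) - 153  # drug billed to the funder but not administered
```

With these assumed numbers, 200 mg is billed for a 153 mg dose, so the funder pays for 47 mg of wastage; vial sharing between patients reduces the wastage itself, although, as noted above, the funder may still be billed for full vials.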
This observation is seen in both CRC subgroups for conventional regimens such as FOLFOX and FOLFIRI, where the difference is less than €50 per cycle (FOLFOX: approx. €480 vs. €500 and FOLFIRI: approx. €420 vs. €460), as seen in Figs. 2 and 3. When comparing the costs for these regimens to the previously published study, a few differences are noted [25]. Firstly, no comparison is possible between the two early CRC subgroups, as patients in the South African public sector cohort did not have access to regimens such as FOLFOX or FOLFIRI; however, comparing the non-inferior regimen CAPOX for either stage shows that the cost per cycle is much higher in our private sector patient cohort. The CAPOX regimen is in the range of €300 to €450 per cycle in the public sector [25] but costs more than €600 in our cohort. This illustrates that the cost to the funder is higher in the private sector. Similar to the findings from the previously published public sector cohort, irinotecan-containing regimens (FOLFIRI and CAPIRI) cost less per cycle than regimens containing oxaliplatin. On average these regimens are €55 cheaper depending on the fluoropyrimidine prescribed, as seen in Figs. 2 and 3 (FOLFOX: approx. €490 vs. €500 and FOLFIRI: approx. €420 vs. €460; CAPOX: approx. €660 for either and CAPIRI: approx. €590 vs. €610).
Moreover, the cost difference between 5-FU and capecitabine monotherapy is smaller per cycle, regardless of stage, than the cost difference noted between the two treatments in the previously published study [25]. From Figs. 2 and 3, the difference in cost is less than €30 per cycle (5-FU: approx. €290 vs. €300 and capecitabine: approx. €310 for early vs. late subgroups respectively), whereas the difference in the previously published study is three times higher for 5-FU [25]. Therefore, based on cost, our results do not indicate a prescribing preference for the use of capecitabine, despite its proven oral availability, which is not consistent with previous research [45,[50][51][52][53]. This indicates that multiple factors contribute to treatment decisions made by oncologists and patients. A literature review by Tariman and colleagues (2012) illustrated the complex nature of treatment decisions in older cancer patients. Apart from the many decision-making models that may be employed in the healthcare setting, factors including the oncologist's medical expertise and practice type, a patient's health-related experience and perception of making a decision, together with a patient's family preference, burden and financial situation, can all influence treatment choices [54].
Cost comparisons for newer therapies including bevacizumab and cetuximab could not be done, as these options are unavailable to patients in the South African public sector. However, treatment costs increase substantially when a monoclonal antibody is added to conventional treatment; for example, adding bevacizumab increases the cost per cycle by €811.31 and cetuximab by €1342.73. This result is in line with previous studies that show a lower cost of first- and second-line treatment with bevacizumab-containing regimens in comparison to cetuximab-containing regimens despite similar efficacy [55][56][57][58]. This cost difference has been shown to be more than $2000 per month per patient and alludes to a better value offering for funders [55][56][57][58].
The last notable comparison is the cost constituents of each regimen. Similar to the previously published study, chemotherapy cost makes a large contribution to overall cost per cycle regardless of the stage or regimen (Figs. 2 and 3) [25]. However, administrative costs are a major cost driver in our cohort, which differs from the public sector cohort [25]. The administrative costs included a global and facility fee as set out by the medical scheme tariff. In comparison to the previously published study, the administrative costs are much higher in our cohort and have a contributing effect on the total costs, as seen in Figs. 2 and 3. On average the cost contribution is between 10% and 45% of the total cost depending on the chemotherapy regimen. This is in line with previous research but is below the 70% threshold found by Aitini and colleagues (2012) in their economic comparison of CAPOX and FOLFOX [59]. It is recommended that a time and motion study be undertaken in a similar manner to Herbst et al. (2018) [25] so as to validate the tariffs charged and to allow for a more accurate comparison.
Limitations
Due to the type of claims captured on the claims database, the average cost per regimen does not account for line of therapy but is the average for the stage of CRC diagnosed within the cohort. A comprehensive breakdown of the cost and equipment inclusions for the administration costs was unavailable; therefore, clarity and accuracy are lacking with respect to these costs and the total administration costs for intravenous regimens. This limits the comparison to the public sector, as the administration costs calculated in Herbst et al.'s 2018 cohort were extrapolated from a previous study that included all necessary equipment [25,60]. The methodology utilized is based on that one previous study; other similar studies are lacking, so it could not be validated against additional studies. Lastly, out-of-pocket (OOP) costs could not be determined using the claims database for this cohort, as the data only reflected the actual costs paid by the funder. It would be beneficial to conduct a survey in line with previous research in order to quantify the OOP costs patients currently incur [61].
Marked response to cabazitaxel in prostate cancer xenografts expressing androgen receptor variant 7 and reversion of acquired resistance by anti‐androgens
Abstract Background Taxane treatment may be a suitable therapeutic option for patients with castration‐resistant prostate cancer and high expression of constitutively active androgen receptor variants (AR‐Vs). The aim of the study was to compare the effects of cabazitaxel and androgen deprivation treatments in a prostate tumor xenograft model expressing high levels of constitutively active AR‐V7. Furthermore, mechanisms behind acquired cabazitaxel resistance were explored. Methods Mice were subcutaneously inoculated with 22Rv1 cells and treated with surgical castration (n = 7), abiraterone (n = 9), cabazitaxel (n = 6), castration plus abiraterone (n = 8), castration plus cabazitaxel (n = 11), or vehicle and/or sham operation (n = 23). Tumor growth was followed for about 2 months or to a volume of approximately 1000 mm3. Two cabazitaxel resistant cell lines; 22Rv1‐CabR1 and 22Rv1‐CabR2, were established from xenografts relapsing during cabazitaxel treatment. Differential gene expression between the cabazitaxel resistant and control 22Rv1 cells was examined by whole‐genome expression array analysis followed by immunoblotting, immunohistochemistry, and functional pathway analysis. Results Abiraterone treatment alone or in combination with surgical castration had no major effect on 22Rv1 tumor growth, while cabazitaxel significantly delayed and in some cases totally abolished 22Rv1 tumor growth on its own and in combination with surgical castration. The cabazitaxel resistant cell lines; 22Rv1‐CabR1 and 22Rv1‐CabR2, both showed upregulation of the ATP‐binding cassette sub‐family B member 1 (ABCB1) efflux pump. Treatment with ABCB1 inhibitor elacridar completely restored susceptibility to cabazitaxel, while treatment with AR‐antagonists bicalutamide and enzalutamide partly restored susceptibility to cabazitaxel in both cell lines. 
The cholesterol biosynthesis pathway was induced in the 22Rv1‐CabR2 cell line, which was confirmed by reduced sensitivity to simvastatin treatment. Conclusions Cabazitaxel efficiently inhibits prostate cancer growth despite the high expression of constitutively active AR‐V7. Acquired cabazitaxel resistance involving overexpression of efflux transporter ABCB1 can be reverted by bicalutamide or enzalutamide treatment, indicating the great clinical potential for combined treatment with cabazitaxel and anti‐androgens.
countries. 1 Advanced prostate cancer is treated with androgen deprivation therapy, which is initially efficient in most cases but eventually gives way to disease progression into a lethal stage known as castration-resistant prostate cancer (CRPC). The suggested underlying mechanisms behind CRPC include androgen receptor (AR) amplifications, AR mutations, constitutively active AR variants, intracrine steroid synthesis, and AR-bypassing mechanisms. 2 For many years, docetaxel was the only available treatment for CRPC, but now several novel therapies with different mechanisms of action are approved 3 : abiraterone, a steroidogenesis inhibitor blocking the CYP17A1 enzyme; the novel AR antagonist enzalutamide; the radioisotope radium-223; the immunotherapy Sipuleucel-T; and a new tubulin-blocking taxane, cabazitaxel. [4][5][6][7][8] In a previous study, we identified a sub-group of patients with CRPC with bone metastases expressing high levels of the constitutively active AR variant 7 (AR-V7) and having a very poor prognosis. 9 Further research has shown that AR-V7 messenger RNA (mRNA) detection in circulating tumor cells or in peripheral blood of patients with CRPC indicates likely resistance to treatment with enzalutamide and abiraterone, but not to taxanes, suggesting that cabazitaxel might be a suitable treatment for patients with high tumor expression of AR variants.
[10][11][12][13][14][15] Therefore, the aim of this study was to investigate the effects of cabazitaxel on human 22Rv1 prostate cancer xenografts, expressing high levels of constitutively active AR-V7 along with other AR variants (AR-V1-6, V9, V12-14), [16][17][18][19] in comparison to effects of surgical castration and abiraterone treatment. We also wanted to explore mechanisms behind acquired cabazitaxel resistance. For this purpose, cabazitaxel resistant cell lines were established.
| Cell culture
The 22Rv1 cell line (
Control animals received sham operation (n = 5), vehicle for abiraterone (n = 5) or cabazitaxel (n = 5), or sham operation plus vehicle for abiraterone (n = 2) or cabazitaxel (n = 6). Abiraterone acetate (kindly provided by Janssen Cilag AB) was diluted to 40 mg/mL in 5% benzyl alcohol, 95% safflower oil and given daily by intraperitoneal injections of 0.5 mmol/kg. Cabazitaxel was received as frozen aliquots of Jevtana (Sanofi) 10 mg/mL, 24% polysorbate 80, 9.8% ethyl alcohol (EtOH) stock solution (leftovers from patient treatments at the Oncology clinic, Umeå University Hospital) and diluted to 2.08 mg/mL in 5% polysorbate 80, 5% glucose and 2% EtOH before given as two injections of 20 mg/kg with 7 days in-between. Mice that showed a body weight loss <10% (5 out of 17) received a third injection. The experiment was terminated after approximately 2 months or when tumors reached a volume of about 1000 mm 3 . Tumors and prostate tissue were dissected, freshly processed or fixed in 4% paraformaldehyde. Animal work was carried out in accordance with the protocol approved by the Umeå Ethical Committee for Animal Studies (permit number A5-15).
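As a rough check on the dosing arithmetic above, the injection volume for a weight-based dose follows directly from the working-solution concentration. The 25 g mouse weight below is a hypothetical example for illustration, not a value reported in the study:

```python
def injection_volume_ml(body_weight_g, dose_mg_per_kg, working_conc_mg_per_ml):
    """Volume of working solution needed for a weight-based dose."""
    dose_mg = body_weight_g / 1000 * dose_mg_per_kg  # grams -> kg, then mg of drug
    return dose_mg / working_conc_mg_per_ml

# Cabazitaxel at 20 mg/kg from the 2.08 mg/mL working dilution,
# for a hypothetical 25 g mouse:
volume = injection_volume_ml(25, 20, 2.08)  # about 0.24 mL
```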
| Establishment of cabazitaxel resistant cell lines
Two different 22Rv1 xenografts relapsing during repeated cabazitaxel treatment were established as cell lines; termed 22Rv1-CabR1 and 22Rv1-CabR2, as further described. Tumor tissue was aseptically minced using scissors and dissolved by 0.1% collagenase (Sigma-Aldrich) in
Hanksʼ balanced salt solution (HBSS) containing calcium and magnesium (Thermo Fisher Scientific) while incubated at 37°C for 1 hour. After incubation, cells were filtered through a 100 µm cell strainer and washed with HBSS free from calcium and magnesium. Filtered cells were centrifuged twice, resuspended in growth media (described above) and seeded. When the cells showed stable growth the media was changed to RPMI with 10% charcoal-stripped FBS and increasing concentrations of cabazitaxel from 0.5 to 10 nmol/L within five passages. Cells were grown without cabazitaxel for at least one passage before experiments. 22Rv1 cells grown in charcoal-stripped media together with vehicle were used as control. To confirm cabazitaxel resistance in vivo, xenografts were established from 22Rv1-CabR1 (n = 10) and 22Rv1-CabR2 (n = 7) cells by subcutaneous injections, as described above. 22Rv1 cells were used as control (n = 8). All mice were surgically castrated 4 days before tumor cell injections. Two rounds of cabazitaxel treatment were given (20 mg/kg each, 7 days in-between) when tumors reached the size of 100 to 200 mm 3 . When tumors reached a volume of approximately 1000 mm 3 , mice were killed. Tumor tissue was collected and processed as described above.
| Simvastatin sensitivity
Simvastatin was purchased from Sigma-Aldrich and activated according to the protocol from the vendor. Resistance to simvastatin was tested in vitro by growing quadruplicates of 1 × 10 4 cells in 96-well plates for 4 days in media containing 0 to 100 µmol/L simvastatin. Cell viability was then assayed using CellTiter Glo as described above.
| Androgen receptor activity
The Cignal Lenti Reporter assay (Qiagen) was used to determine the AR activity of the cells, according to the protocol from the vendor.
Briefly, cells were seeded in 96-well plates (1 × 10 4 cells/well) and incubated overnight before transduced in triplicate with either AR Luc (cat no: CLS-8019L) or negative control (CLS-NCL) at multiplicity of infection (MOI) of 10 in 50 µL serum-free transducing media. The next morning, 10% charcoal-stripped FBS-HI culture media was added containing 0 or 1 nM dihydrotestosterone (DHT). After 72 hours, AR activity was measured as luciferase activity using the luciferase substrate (LARII) of the Dual-Luciferase Reporter Assay (Promega). Relative AR activities were obtained by dividing signals with the corresponding negative controls.
| Prostate-specific antigen measurements
Cells were seeded (3 × 10 5 cells/well in six-well plates) and incubated with or without 1 nM DHT for 4 days. Both cells and conditioned media were harvested. The total number of cells per well was counted using a Countess automated cell counter. Cells and cell debris were removed from the conditioned media (centrifugation 3000g, 5 minutes) before analysis of total prostate-specific antigen (PSA) levels, according to the clinical routine at the accredited Umeå University hospital laboratory (Elecsys total PSA reagent on Cobas e601 analyzers).
| RNA and protein extraction
Total RNA and protein were extracted using the AllPrep DNA/RNA/ Protein Mini Kit (QIAGEN), according to manufacturerʼs protocol and with protein fractions dissolved in 5% sodium dodecyl sulfate. RNA and protein concentrations were determined by absorbance measurements and RNA quality was verified with the 2100 Bioanalyzer (Agilent Technologies) as RNA integrity number ≥8.
Data analysis was performed with the GenomeStudio software (version 2011.1, Illumina). Samples were normalized using the average algorithm.
Gene probes with average signals above twice the mean background level in at least one sample were included, leaving 14 728 probes for further analysis. Differentially expressed genes were identified by the t test (P < .05). Genes with a fold-change ≥2 were subjected to multivariate modeling and to functional pathway analysis.
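The filtering and testing steps described above can be sketched as follows. This is not the GenomeStudio implementation, just a minimal reproduction of the stated criteria (signal above twice the mean background in at least one sample, two-group t test at P < .05, fold-change ≥ 2) on a toy probes-by-samples matrix:

```python
import numpy as np
from scipy import stats

def differential_probes(expr, background, group_a, group_b,
                        alpha=0.05, fc_cutoff=2.0):
    """expr: probes x samples matrix of normalized array signals.
    Returns a boolean mask of probes that (1) exceed twice the mean
    background in at least one sample, (2) differ between groups at
    P < alpha (two-sample t test), and (3) show fold-change >= fc_cutoff."""
    detected = (expr > 2 * background).any(axis=1)
    a, b = expr[:, group_a], expr[:, group_b]
    _, p = stats.ttest_ind(a, b, axis=1)
    mean_a, mean_b = a.mean(axis=1), b.mean(axis=1)
    fold = np.maximum(mean_a / mean_b, mean_b / mean_a)  # direction-agnostic
    return detected & (p < alpha) & (fold >= fc_cutoff)

# Toy data: three probes, two groups of three samples each.
expr = np.array([
    [10.0, 10.1, 9.9, 2.0, 2.1, 1.9],   # detected, ~5-fold change
    [5.0, 5.1, 4.9, 5.0, 5.1, 4.9],     # detected, no change
    [0.5, 0.5, 0.5, 0.5, 0.5, 0.5],     # below 2x background in every sample
])
mask = differential_probes(expr, background=1.0,
                           group_a=[0, 1, 2], group_b=[3, 4, 5])
```

Only the first toy probe passes all three filters.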
| Multivariate modeling
Unsupervised principal component analysis (PCA) was used to create an overview of the variation in the transcription data and to detect clusters and trends among samples and expressed genes. Data were mean-centered before analysis, and models were validated by sevenfold cross-validation. Multivariate statistical analysis was performed in SIMCA version 15.0.2 (Umetrics, Umeå, Sweden).
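A minimal version of the mean-centered, unsupervised PCA used for the overview can be written with a plain SVD; this sketch omits the sevenfold cross-validation that SIMCA performs, and the two-cluster data are invented for illustration:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Principal-component scores of a samples x genes matrix,
    computed by SVD after mean-centering each column (gene)."""
    Xc = X - X.mean(axis=0)                       # mean-center, as in the text
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components] * S[:n_components]  # sample scores

# Two tight clusters should separate along the first component:
X = np.array([[1.0, 1.0], [1.1, 0.9], [5.0, 5.0], [5.1, 4.9]])
scores = pca_scores(X)
```

The first column of `scores` separates the two clusters with opposite signs (the overall sign of each component is arbitrary, as in any SVD-based PCA).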
| Functional pathway analysis
Gene set enrichment analysis was performed by the MetaCore software (GeneGo, Thomson Reuters, New York, NY). Sets of genes associated with a functional process (pathway map) were determined as significantly enriched based on P values representing the probability for a process to arise by chance, considering the numbers of enriched gene products in the data versus the number of genes in the process. P values were adjusted by taking into account the rank of the process, given the total number of processes in the MetaCore ontology.
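The per-pathway P values described above correspond to a standard hypergeometric over-representation test. The sketch below is that generic test, not MetaCore's exact implementation (which, as noted, additionally adjusts P values for the rank of the process); the example counts are invented:

```python
from scipy.stats import hypergeom

def enrichment_p(n_total, n_pathway, n_hits, n_pathway_hits):
    """P of observing >= n_pathway_hits pathway genes among n_hits
    differentially expressed genes drawn from n_total genes, of which
    n_pathway belong to the pathway (hypergeometric upper tail)."""
    return hypergeom.sf(n_pathway_hits - 1, n_total, n_pathway, n_hits)

# All 10 genes of a 10-gene pathway among 10 hits from 100 genes:
# extremely unlikely by chance, hence a strongly enriched pathway.
p = enrichment_p(100, 10, 10, 10)
```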
| Western blotting
Protein extracts from replicate samples were pooled in equal amounts and 10 to 50 µg protein per sample was separated by 4% to 20% Mini-PROTEAN TGX stain-free protein gels (Bio-Rad Laboratories) and transferred onto a nitrocellulose membrane using the Trans-Blot Turbo transfer system (Bio-Rad Laboratories).
| Immunohistochemistry
Formalin-fixed, paraffin-embedded tissue sections were deparaffinized in xylene and rehydrated through graded ethanol. Endogenous peroxidase activity was blocked with 3% H 2 O 2 in methanol followed by antigen retrieval using Tris-EDTA (pH 9) and blocking with Dako serum-free protein block (X0909; Dako) or Background Sniper (BS966L; Biocare Medical). Immunostaining was performed using primary antibodies targeting ATP-binding cassette sub-family B member 1.

Figure 1: Tumor progression in 22Rv1 xenografts treated with vehicle and/or sham operation (Con, n = 23), surgical castration (Cas, n = 7), abiraterone (Abi, n = 9), cabazitaxel (Cab, n = 6), castration plus abiraterone (Cas + Abi, n = 8), or castration plus cabazitaxel (Cas + Cab, n = 11). A, Survival analysis of mice with 22Rv1 xenografts given different treatments for up to 65 days. Mice were killed when tumor volume reached approximately 1000 mm 3 . B, Tumor growth rate (mm 3 /day) until 65 days from treatment start or until tumor volume reached approximately 1000 mm 3 . C, Ventral prostate (VP) size (% of total body weight). D, Tumor volume (mm 3 ) of individual xenografts (Xeno 1-6) repeatedly treated with cabazitaxel in castrated animals, initiated between Day 30 and 50 at tumor regrowth after the first round of cabazitaxel treatments (for details, see Section 2). Xenografts from which cell lines were established are specified. *P < .05, ***P < .001, in comparison to control group.

( Figure 1A). Castration gave a modest survival benefit, while abiraterone had no obvious effect in this model ( Figure 1A). This was in line with markedly reduced tumor growth rates in mice treated with cabazitaxel (with or without castration) over a study time period of up to 65 days ( Figure 1B).
Castration and castration in combination with abiraterone induced a modest reduction in growth rate, while abiraterone treatment did not significantly reduce the growth of 22Rv1 xenografts ( Figure 1B), despite giving a castration effect comparable to that of surgical castration when monitored as reduced ventral prostate lobe weight ( Figure 1C). Complete tumor regression after cabazitaxel treatment (+/− castration) was seen in three animals, while tumor regrowth was seen in 14 cases (>30 days). Six animals treated with castration plus cabazitaxel were subjected to repeated cabazitaxel treatment at tumor regrowth, and progress during cabazitaxel treatment was observed in all cases ( Figure 1D).
| Establishment of cabazitaxel resistant cell lines
To study mechanisms behind cabazitaxel resistance, cells from three

To investigate whether the cabazitaxel resistance displayed by 22Rv1-CabR1 and 22Rv1-CabR2 in vitro was sufficient to affect tumor growth in vivo, an additional round of xenograft experiments was performed. Mice were injected with 22Rv1-CabR1 (n = 10), 22Rv1-CabR2 (n = 7), or 22Rv1 cells used as control (n = 8). Before injection, cells were grown in charcoal-stripped media and mice were castrated. As seen in Figure 2B,C and Figure S1B, there was a clear difference in cabazitaxel response/resistance manifested as rapid growth of the 22Rv1-CabR1 and 22Rv1-CabR2 xenografts during cabazitaxel treatment, while the control 22Rv1 xenografts clearly regressed.
| Mechanisms behind cabazitaxel resistance
To identify mechanisms behind cabazitaxel resistance, total RNA from 22Rv1-CabR1, 22Rv1-CabR2, and parental 22Rv1 cells cultured in charcoal-stripped media was analyzed in triplicates by whole-genome expression array analysis ( Table 1). As this might reflect a general increase in AR activity in 22Rv1-CabR1 and 22Rv1-CabR2 compared with parental 22Rv1 cells, the general AR activity in the cell lines was examined using an AR reporter assay in the absence and presence of DHT.
Surprisingly, the resistant cell lines showed lower endogenous AR activity than the parental 22Rv1 cells and also less induction by DHT stimulation ( Figure 3C). A similar tendency was seen for PSA secretion; with 22Rv1 producing the highest and 22Rv1-CabR1 the lowest PSA amount per cell ( Figure 3D). In line with this, the 22Rv1-CabR1 cell line showed lower AR levels than the other cell lines ( Figure 3E and Figure 4). Notably, the AR-V levels did not obviously differ in relation to cabazitaxel resistance. Pathway analysis of differentially expressed genes in the cabazitaxel resistant cell lines (Table S1) indicated a strong upregulation of the SCAP/SREBP transcriptional control of cholesterol and fatty acid biosynthesis in 22Rv1-CabR2, but not in 22Rv1-CabR1 cells (Table S2).
| Anti-androgens partly restore susceptibility to cabazitaxel
To confirm that the cabazitaxel resistance in 22Rv1-CabR1 and 22Rv1-CabR2 was caused by increased ABCB1 expression, cells were

Table 1: Genes with significantly increased expression levels (P < .05, FC > 2) in both cabazitaxel resistant cell lines (22Rv1-CabR1 and 22Rv1-CabR2) in comparison to the control 22Rv1 cell line.

in 22Rv1-CabR1 cells (Figures 5E,H). In contrast, neither AR-antagonists nor elacridar affected the cabazitaxel sensitivity in 22Rv1 control cells (Figures 5A, 5D, and 5G). The ABCB1 gene codes for the ABCB1 protein, also known as the multidrug resistance protein 1 or the P-glycoprotein, with a known function as a drug efflux pump. 24 in clinical samples of CRPC metastases. [32][33][34] In conclusion, this study shows an initially pronounced response to cabazitaxel in the prostate cancer 22Rv1 xenograft model expressing constitutively active, LBD-truncated AR variants (including AR-V7).
| Simvastatin resistance in cabazitaxel resistant cells
Subsequent development of cabazitaxel resistance was associated with induced expression of the ABCB1 drug efflux transporter as well as with increased cholesterol biosynthesis. Resistance could be reversed by the ABCB1 inhibitor elacridar and partly overcome by coadministration of the AR-antagonists bicalutamide and enzalutamide. Taken together, our results show great translational potential, suggesting that combined treatment with taxanes and antiandrogens could be used to overcome and/or delay acquired cabazitaxel resistance in patients with prostate cancer.
Comparison of Double-Stranded DNA at the 5′ and 3′ Ends of the G-Triplex and Its Application in the Detection of Hg(II)
Leveraging the fluorescence enhancement of the G-triplex (G3)/thioflavin T (ThT) complex promoted by adjacent double-stranded DNA positioned at the 5′ terminus of the G3, the G3-specific oligonucleotide (G3MB6) was utilized to facilitate the rapid detection of mercury (Hg(II)) through thymine–Hg(II)–thymine (T-Hg(II)-T) interactions. G3MB6 adopted a hairpin structure whose partially complementary strands could be disrupted in the presence of Hg(II). This prompted the formation of double-stranded DNA via T-Hg(II)-T, inducing the unbound single strand of G3MB6 to spontaneously form a parallel G3 structure, producing a strong fluorescence signal with ThT. Conversely, fluorescence was absent without Hg(II), since neither double-strand nor G3 formation occurred. The fluorescence intensity of G3MB6 exhibited a positive correlation with Hg(II) concentrations from 17.72 to 300 nM (R2 = 0.9954), with a notably low limit of quantification (LOQ) of 17.72 nM. Additionally, it demonstrated remarkable selectivity for Hg(II). Upon application to the detection of Hg(II) in milk samples, recovery rates ranged from 100.3% to 103.2%.
Introduction
Mercury (Hg(II)), a naturally occurring heavy metal, poses significant toxicity risks and can accumulate and migrate within environmental matrices [1]. Beyond natural processes such as volcanic eruptions, geothermal activities, and forest fires, Hg(II) is a byproduct of various industrial activities. Hg(II) in gaseous or liquid form is produced during mineral mining and the disposal of waste products, including cement, pesticides, and lamps. Hg(II) disperses into the air, water, and soil, subsequently entering the human body through bioaccumulation, and can result in severe damage to the liver, brain, and nervous system, posing serious health risks [2]. According to the World Health Organization (WHO), the allowable concentration of Hg(II) in potable water must be below 1 ppb (1 part in 10⁹, i.e., 1 µg/L) [3]. Consequently, detecting trace levels of Hg(II) in the environment is critical for safeguarding human health. Traditional detection techniques, including selective photoelectrochemical (PEC) methods [4], anodic stripping voltammetry (ASV) [5], inductively coupled plasma-mass spectrometry (ICP-MS) [6,7], fluorescence spectrometry [8][9][10], colorimetric methods [11], and enzyme-linked immunosorbent assays [12], offer high accuracy and sensitivity. However, their practical application is hindered by complex operational requirements and substantial testing costs. Specifically, ICP-MS encounters higher detection errors at lower Hg(II) concentrations and necessitates specialized personnel. Thus, there is an urgent demand for more straightforward and direct Hg(II) detection techniques.
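For orientation, the WHO 1 ppb guideline can be converted into the molar units used for the sensor's working range; the only input is the molar mass of mercury (about 200.59 g/mol):

```python
HG_MOLAR_MASS_G_PER_MOL = 200.59  # molar mass of mercury

def ug_per_l_to_nM(ug_per_l):
    """Convert a mass concentration (ug/L, i.e. ppb in water) to nmol/L."""
    grams_per_l = ug_per_l * 1e-6          # ug -> g
    mol_per_l = grams_per_l / HG_MOLAR_MASS_G_PER_MOL
    return mol_per_l * 1e9                 # mol/L -> nmol/L

who_limit_nM = ug_per_l_to_nM(1.0)  # the WHO 1 ppb limit, roughly 5 nM
```

So 1 µg/L of Hg(II) corresponds to roughly 5 nmol/L.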
In recent years, functional nucleic acids, specifically ligand binders known as aptamers, have gained much attention in the construction of biosensors because of their facile modification, low synthesis cost, and high sensitivity [13]. It is well established that Hg(II) can interact with the N3 positions of adjacent thymine (T) bases, replacing the imino proton to form T-Hg(II)-T mismatched pairs [14]. Consequently, DNA sequences rich in T bases can form T-Hg(II)-T complexes, making them suitable candidates for use as Hg(II)-specific aptamers. For example, a T-T mismatch in double-stranded DNA (dsDNA) can selectively and tightly bind Hg(II), forming a T-Hg(II)-T complex. The Hg(II)-mediated dsDNA structure often exhibits greater structural stability than conventional AT/TA base pairing [15]. As a result, T-rich oligonucleotides are frequently utilized as Hg(II)-specific aptamers in biosensor construction. Notably, Hg(II) can induce conformational changes in its DNA aptamers, leading to the formation of secondary structures such as hairpins [16], DNA duplexes [17], and G-quadruplexes [18]. These structural transformations can be detected using fluorescent intercalating dyes designed to recognize such changes. In biosensor designs, Hg(II) can act as a fluorescent switch by using T-rich dsDNA in conjunction with fluorescent dyes such as 4′,6-diamidino-2-phenylindole (DAPI) [16], thioflavin T (ThT) [19,20], and SYBR Green I [21]. The conformational changes in Hg(II)-specific aptamers induced by Hg(II) binding alter the fluorescence signal, which can be measured precisely.
Additionally, the G-triplex (G3) is acknowledged as a distinctive secondary DNA structure that can be effectively monitored by ThT [22]. ThT preferentially binds to a parallel G3 structure, producing a stronger fluorescence signal than with an antiparallel or mixed parallel-antiparallel structure. The G3 offers a more flexible and adaptable DNA configuration, enhancing the potential for the development of Hg(II) biosensors, and this versatility opens numerous possibilities for the innovative design and optimization of biosensors. Wang et al. designed intramolecular dsDNA structures using T-Hg(II)-T mismatched complexes to facilitate the detection of Hg(II) [23]. That design, which places dsDNA adjacent to the G3 structure, significantly enhanced the fluorescence signal of the G3/ThT complex. Moreover, analysis of G3 folding dynamics indicated that 5′ overhangs of G3 preferentially promote the formation of a parallel G3 structure over an antiparallel one [24]. In other words, DNA proximal to the 5′ or 3′ end of the G3 structure can differentially affect its conformation. However, the effect of dsDNA at the 5′ and 3′ ends of G3 on its folding is not yet fully understood.
In this study, a series of oligonucleotides was designed, as shown in Table S1, to monitor the folding of G3 induced by adjacent dsDNA at the 5′ or 3′ end of G3 using ThT. Specifically, the Hg(II) aptamer G3MB6 is a single-stranded DNA (ssDNA) with the sequence 5′-TGCTTAGTCCCTAGCTATATGGGAAGGGAGGG-3′, which includes the G3 motif. cDNA-9, with the sequence 5′-TAGCTTGGGTCTTTGCA-3′, was constructed to develop a label-free method for detecting Hg(II) based on the formation of a G3-ThT complex, thereby improving detection sensitivity. Without Hg(II), G3MB6 forms a DNA hairpin structure that restricts the intercalation of ThT, resulting in no fluorescence signal. With Hg(II), G3MB6 and cDNA-9 form dsDNA through T-Hg(II)-T base pairs, enabling the formation of the G3 structure. This facilitates the embedding of ThT in the G3, producing a stronger fluorescence signal and enabling the specific detection of Hg(II). The method therefore offers the advantages of simple operation, good stability, and low cost, providing a novel approach to monitoring trace levels of Hg(II) in natural samples and safeguarding human health.
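As a quick illustration of the probe design described above, the T-richness and the three guanine tracts required for G3 folding can be checked directly on the stated G3MB6 sequence. This is a minimal inspection sketch, not part of the published workflow:

```python
import re

# G3MB6 probe sequence as stated in the text (5'->3').
G3MB6 = "TGCTTAGTCCCTAGCTATATGGGAAGGGAGGG"

# A G-triplex requires three G-tracts of at least three consecutive guanines.
g_tracts = re.findall(r"G{3,}", G3MB6)
print(g_tracts)

# T bases are the ones available for T-Hg(II)-T pairing with cDNA-9.
print("T count:", G3MB6.count("T"))
```

Running this confirms the sequence carries exactly three G-tracts (the G3 motif) in its 3′ half, while the T bases cluster toward the 5′ half that hybridizes with cDNA-9.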
Principles of Detecting Hg(II)
ThT is recognized for its ability to form a highly fluorescent G3/ThT complex with parallel G3 structures [22]. As detailed in Scheme 1, the formation of dsDNA at either the 5′ or 3′ end of G3 can influence its folding pattern, thereby modulating the fluorescence signal of the G3/ThT complex. Consequently, the conformation of G3 can be inferred from the fluorescence intensity of the G3/ThT complex. Our findings indicate that dsDNA at the 5′ end of G3 enhanced the fluorescence intensity of the G3/ThT complex, suggesting that dsDNA at the 5′ end promotes the formation of parallel G3 structures. In this work, a label-free DNA probe mediated by G3-dsDNA for detecting Hg(II) was developed. A guanine-rich DNA probe, G3MB6, was designed, and the single-stranded cDNA-9 was partially complementary to the 5′ end of G3MB6. Without Hg(II), G3MB6 naturally adopted a hairpin conformation due to its self-complementarity, and cDNA-9 remained in a free coil state, resulting in weak fluorescence signals when ThT was added. When Hg(II) was introduced into the detection system, the T-Hg(II)-T mismatches disrupted the hairpin structure of G3MB6, allowing the 5′ end of G3MB6 to hybridize with cDNA-9 and form a double-stranded region. The T-Hg(II)-T hybridization further drove the folding of G3MB6 into a parallel G3 structure, consistent with the behavior of 5′ overhangs of G3 [24], significantly restricting the rotation of the aromatic ring of ThT and generating a strong fluorescent signal. This enabled the quantitative monitoring of Hg(II) through a change in the fluorescent signal. Our investigation into the conformation of G3 revealed that dsDNA at different positions (5′ and 3′ ends) directly impacted the folding of G3, which can be monitored by the fluorescence of ThT. Specifically, G3 transitions from a parallel to an antiparallel fold when the dsDNA is positioned at the 3′ terminus, causing substantially lower fluorescence intensity than the configuration with dsDNA at the 5′ end (5′-dsDNA-G3-3′).
Scheme 1. The effect of dsDNA adjacent to the 5′ or 3′ end of G3, monitored by ThT, and the principle of detecting mercury (Hg(II)).
Effect of 5′/3′-dsDNA on the Fluorescence Signal of G3/ThT
The reaction's feasibility was initially optimized to establish the ideal assay conditions. As depicted in Figure S1A, various final concentrations of ThT, ranging from 2 to 8 µM, were tested to determine the concentration yielding the highest fluorescence intensity (F/F0, where F denotes the fluorescence intensity of G3MB1 + cDNA-1 + ThT and F0 denotes the fluorescence intensity of G3MB1 + ThT). The optimal F/F0 was achieved at a ThT concentration of 4 µM. In determining the optimal reaction time for duplex formation between G3MB1 and cDNA-1, the fluorescence intensity increased with reaction time from 2 to 60 min. Figure S1B illustrates that F/F0 reached a maximum after 60 min and then remained stable. Subsequently, the detection conditions were refined, particularly the incubation pH and the concentrations of ions (Na(I) and Mg(II)) in the 50 mM Tris-HCl buffer. Figure S2A demonstrates that a pH of 7.6 was optimal. The effect of salt concentration on F/F0 was also evaluated, with Na(I) concentrations of 0, 50, 100, 150, and 200 mM, and Mg(II) concentrations of 0, 10, 20, 30, and 50 mM. Figure S2B,C show that increasing concentrations of Na(I) and Mg(II) facilitated the formation and stabilization of the DNA double helix and the G3 structure, thereby enhancing the F/F0 of the detection system. Nevertheless, excessive salt in the buffer impeded the system's performance, with optimal concentrations identified as 100 mM for Na(I) and 20 mM for Mg(II). Lastly, the reaction temperature of G3MB1 + cDNA-1 and the ThT incubation time were assessed. Reaction temperatures of 25, 37, 50, and 60 °C were tested. Figure S3A reveals that samples incubated at 37 °C exhibited the highest F/F0 ratio; elevated temperatures adversely affected the stability and formation of the duplex and the G3 structure, resulting in decreased F/F0. The incubation time of ThT also significantly influenced the fluorescence intensity. Figure S3B indicates that among the five groups (ThT incubated for 1, 5, 10, 20, and 30 min), an incubation time of 5 min was optimal.
The fluorescence signal of the G3/ThT complex can be significantly affected by the adjacent dsDNA. To investigate the impact of the number of base gaps connecting the adjacent dsDNA to G3, as well as the effects of G3 conformation on the fluorescence of G3/ThT, we designed G3MB1 and its complementary strands cDNA-1-4, as detailed in Table S1, with base gaps of 3, 2, 1, and 0, respectively. The enhancement of the fluorescence of the G3/ThT complex was most pronounced when the base gap was two or three bases; shorter gaps diminished the fluorescence signals, as illustrated in Figure 1A. We hypothesized that a gap of two or three bases between the adjacent dsDNA and the G3 structure facilitated the formation of a parallel G3 configuration, thereby increasing the binding affinity of ThT. In contrast, a base gap of zero or one may induce a less favorable parallel form in the G3 structure, reducing the fluorescence intensity of ThT.
Furthermore, the experiments assessed the effect of the G3's position on the fluorescence signal. We observed a significant enhancement of the fluorescence signal when the G3 structure was positioned at the 3′ terminus of the DNA strand, indicating that adjacent dsDNA located at the 5′ terminus of G3, as in G3MB1, substantially increased the fluorescence signal. In contrast, only a modest enhancement was detected for G3MB2, where the G3 structure was located at the 5′ end. This phenomenon was attributed to the formation of the favorable parallel G3 structure at the 3′ terminus of G3MB1. The parallel G-triplex was more predisposed to bind ThT, partially restricting the rotation of the aromatic ring within ThT and ultimately generating a robust fluorescent signal, as shown in Figure 1B.
The stability of the hairpin structure is directly influenced by the length of its stem. To determine the optimal formation of the hairpin structure, we designed G3MB1, G3MB3, and G3MB4 (Table S1) with stem lengths of four, six, and eight base pairs, respectively. As illustrated in Figure 2, the fluorescence signals of G3MBn (n = 1, 3, 4) were of low intensity, indicating that even a stem length of four base pairs was sufficient to inhibit the formation of the G3. Upon adding their corresponding complementary strands, G3MB1, with a stem length of four base pairs, exhibited the strongest fluorescence signal compared with G3MB3 and G3MB4, which have six and eight base pairs, respectively. As shown in Figure 2D, the fluorescence signals decreased gradually with increasing stem length of the hairpin structures. This suggested that shorter stems are more easily disrupted, making the formation of the G3 more controllable. Short stems also do not require extensive sequences for recognition, thus shortening the response time.

According to prior research, ThT binds to guanine-adenine (GA) sequences in dimeric parallel strands, enhancing fluorescence [25]. Additionally, ThT can recognize guanine-thymine (GT) sequences when forming T-Hg(II)-T mismatched pairs [26]. Our earlier study demonstrated that T-Hg(II)-T mimics thymine-adenine (TA) pairs [16], leading us to hypothesize that GA sequences promote the restriction of ThT's rotation, increasing its fluorescence. To test this hypothesis, we designed G3MB5 (TGCTACTACCCGAGCTATATGGGTAGGGCGGG) and compared it with G3MB1 (TGCTAAGTCCCGAGCTATATGGGAAGGGAGGG). As shown in Figure 3, G3MB5 exhibited only a weak increase in the fluorescence signal upon binding to cDNA-8, whereas G3MB1 displayed a substantial increase in ThT fluorescence when paired with cDNA-2. The significant increase in fluorescence can be attributed to the GA sequences in the G3 structure of G3MB1, which likely increased its binding affinity for ThT. Consequently, the structure in G3MB1 demonstrated a more robust fluorescence response, indicating its superior suitability for this detection system.
Feasibility of G3MB6/ThT for Monitoring Hg(II)
The reaction times required for the detection of Hg(II) using G3MB6 and cDNA-9 were investigated to optimize the assay's efficiency. As depicted in Figure S4A, the relative fluorescence intensity (F/F0) increased with reaction time and reached saturation at 60 min; beyond this point, extending the reaction time did not further enhance the fluorescence intensity. Additionally, the influence of the reaction temperature on the detection of Hg(II) was evaluated. The highest relative fluorescence intensity (F/F0) was observed at 25 °C, as depicted in Figure S4B. Therefore, the dsDNA most effectively enhanced the fluorescence signal of the ThT/G3 complex through the T-Hg(II)-T structural linkage when reacted at 25 °C for 60 min. The optimal conditions for detecting Hg(II) with the G3MB6 probe and its complementary strand cDNA-9 were thus an incubation temperature of 25 °C and a reaction time of 60 min.
To validate the efficacy of this method for detecting Hg(II) ions, we first examined the intrinsic fluorescence signal of ThT in the buffer solution, as shown in Figure 4. In its free state, ThT exhibited minimal fluorescence intensity. When G3MB6 was mixed with Hg(II) without the complementary strand cDNA-9 and ThT was subsequently added, the fluorescence signal remained low, indicating that the hairpin structure of G3MB6 remained stable in solution. When Hg(II) was absent, mixing G3MB6 and cDNA-9, followed by the addition of ThT, resulted in no significant change in fluorescence compared with G3MB6 alone, suggesting that cDNA-9 alone cannot disrupt the hairpin structure of G3MB6. However, after mixing G3MB6 with cDNA-9 in the presence of Hg(II), the fluorescence signal of ThT was significantly enhanced, indicating that cDNA-9 can open the hairpin structure of G3MB6 by forming T-Hg(II)-T base pairs with Hg(II). This confirmed the feasibility of the fluorescence method for detecting Hg(II) ions, demonstrating its potential for practical applications in detecting Hg(II) contamination.
Sensitivity and Selectivity of G3MB6/ThT for Detecting Hg(II)
The sensitivity of the G3MB6/ThT sensor for detecting Hg(II) was investigated by introducing solutions with incrementally increasing concentrations of Hg(II) into the detection system. As shown in Figure 5A, the fluorescence intensity gradually increased with Hg(II) concentration over the 0-300 nM range. Without Hg(II), the G3MB6 probe spontaneously formed a hairpin structure, preventing ThT from intercalating and thus exhibiting a weak fluorescent signal. As the concentration of Hg(II) increased, the T-base-rich cDNA-9 formed T-Hg(II)-T base pairs with G3MB6, disrupting the original hairpin structure of G3MB6 and allowing the formation of a G3 structure. This structural change enabled ThT to embed within the G3 and fluoresce. The fluorescence signal reached its maximum at an Hg(II) concentration of 300 nM, at which point the T bases of cDNA-9 in the system had fully formed T-Hg(II)-T mismatches, completely disrupting the hairpin structure; further increases in the Hg(II) concentration did not improve the fluorescence signal.
Table 1. Comparison with label-free fluorescence ("on-off" or "off-on") methods for detecting mercury (Hg(II)).

The change in the fluorescence signal (F-F0) under excitation of ThT was linearly correlated with the Hg(II) concentration over 0-300 nM. The linear fitting equation was y = 1.219x + 13.55, where y stands for F-F0 and x stands for the Hg(II) concentration, with an R2 value of 0.9954. The sensor's limit of detection (LOD) for Hg(II) was 5.32 nM, calculated according to the 3σ/slope criterion, and based on the 10σ/slope formula, the LOQ of the sensor was 17.72 nM. This indicated that the G3MB6/ThT sensor could quantitatively detect Hg(II) within the concentration range of 17.72-300 nM with high sensitivity.
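The figures of merit above follow from the standard 3σ/slope and 10σ/slope criteria. A minimal sketch of that calculation, where the blank standard deviation σ is an assumed value back-inferred from the reported LOD (it is not quoted in the text):

```python
# Calibration: F - F0 = 1.219*[Hg(II)] + 13.55 (x in nM).
slope = 1.219        # slope of the reported linear fit
sigma_blank = 2.16   # assumed blank standard deviation (back-inferred, not from the paper)

lod = 3 * sigma_blank / slope    # limit of detection (3*sigma/slope)
loq = 10 * sigma_blank / slope   # limit of quantification (10*sigma/slope)

print(f"LOD = {lod:.2f} nM, LOQ = {loq:.2f} nM")
```

With this assumed σ, the sketch reproduces the reported LOD of 5.32 nM and LOQ of 17.72 nM, illustrating why the quantifiable range starts at 17.72 nM.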
To investigate the selectivity of the G3MB6/ThT sensor for detecting Hg(II), we introduced 600 nM Hg(II) and 3 µM of other metal ions, including Ni(II), Mg(II), Fe(II), Fe(III), K(I), Co(II), Mn(II), Ca(II), and Cu(II), as interfering cations. The change in the fluorescence values (F-F0) is illustrated in Figure 5B. The results demonstrated that only Hg(II) generated a significant fluorescent signal by embedding ThT in the G3 structure. Even when other metal ions were introduced at much higher concentrations than Hg(II), they did not produce notable fluorescence signals. This indicated that the G3MB6/ThT sensor possesses high specificity for detecting Hg(II), as the detection method relies on the formation of T-Hg(II)-T mismatched structures. Furthermore, when Hg(II) was present along with other metal ions, the fluorescence intensity was almost identical to that observed with Hg(II) alone (Figure S5), further confirming the high selectivity of this method for detecting Hg(II).
Additionally, we compared this assay with other recently developed label-free sensors for detecting Hg(II), focusing on the linear range and LOD. Table 1 summarizes the linear range and LOD of Hg(II) detection in various systems, illustrating their sensitivity and practicality. Compared with other strategies, this method exhibited a favorable LOD, detection efficiency, and sensitivity within its linear range. These characteristics highlight its feasibility for detecting Hg(II).
Analysis of Hg(II) in Tap Water and Milk
To evaluate the accuracy and practicality of the newly developed G3MB6/ThT sensor for determining Hg(II) in real-world samples, G3MB6 and cDNA-9 were used to assess the levels of Hg(II) in milk and tap water samples spiked with 20, 100, and 250 nM Hg(II). Upon excitation at 430 nm, the fluorescence intensities at 505 nm were recorded, and the Hg(II) concentrations were derived using the equation y = 1.219x + 13.55, where y denotes F-F0 and x denotes the Hg(II) concentration. The recovery of Hg(II) in milk samples ranged from 100.30% to 103.20%, and the recovery in tap water ranged from 101.67% to 103.75%, as detailed in Table 2. The relative standard deviations (RSDs) of the detection system were consistently below 5% (n = 3), attesting to its high accuracy. These findings underscore the efficacy and precision of the method for quantifying Hg(II) in real-world samples, establishing its utility as a reliable tool for environmental and food safety monitoring.
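The recovery computation reduces to inverting the calibration line and comparing the back-calculated concentration with the spiked amount. A sketch using hypothetical triplicate readings (the F - F0 values below are illustrative, not data from the paper):

```python
from statistics import mean, stdev

SLOPE, INTERCEPT = 1.219, 13.55  # reported calibration: F - F0 = 1.219*[Hg] + 13.55

def hg_conc(delta_f):
    """Back-calculate [Hg(II)] in nM from a measured F - F0."""
    return (delta_f - INTERCEPT) / SLOPE

spiked = 100.0                      # nM Hg(II) added to the sample
readings = [138.0, 139.5, 137.4]    # hypothetical triplicate F - F0 values
concs = [hg_conc(f) for f in readings]

recovery = mean(concs) / spiked * 100   # % recovery vs. the spiked amount
rsd = stdev(concs) / mean(concs) * 100  # relative standard deviation, %
print(f"recovery = {recovery:.1f}%, RSD = {rsd:.1f}%")
```

The same inversion and ratio are applied to each spiked level (20, 100, and 250 nM) to obtain the recovery and RSD values reported in Table 2.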
The fluorescence intensity was assessed using an F-4500 fluorescence spectrometer (Hitachi, Tokyo, Japan). Under an excitation wavelength of 430 nm, emission spectra were measured from 450 nm to 650 nm, recording the maximum emission peak (Em max) at 505 nm. The excitation and emission slit widths of the F-4500 were both set at 5 nm, and the PMT voltage was maintained at 700 V for optimal detection.
Fluorescence of the G-Triplex with the Adjacent dsDNA
For this assay, 1.25 µL of the DNA probe (G3MB1, 20 µM) and 1.5 µL of the complementary DNA strand (cDNA-1, 20 µM) were added to an Eppendorf tube (0.5 mL), then diluted with buffer to 200 µL, giving final concentrations of 125 nM and 150 nM, respectively. To optimize the experimental conditions, the fluorescence intensity of ThT at 505 nm was examined under different incubation conditions: the final concentration of ThT (2-8 µM), the reaction time between G3MB1 and its complementary cDNA-1 (2-360 min), the pH of the 50 mM Tris-HCl buffer (50 mM KCl, 100 mM NaCl; pH 7-8.5), the salt concentrations in the 50 mM Tris-HCl buffer (Na(I): 0-200 mM; Mg(II): 0-50 mM), the reaction temperature (25-60 °C), and the reaction time of ThT (1-30 min).
The incubation mixture in an Eppendorf tube (0.5 mL) was composed of a DNA probe (G3MBn, n = 1-5) and complementary DNA strands (cDNAn, n = 1-8), which were diluted with 50 mM Tris-HCl buffer (50 mM KCl, 100 mM NaCl, and 20 mM MgCl2, pH = 7.6) to final concentrations of 125 nM and 150 nM, respectively, giving a DNA probe to cDNA ratio of 1:1.2. The mixture was mixed thoroughly in a shaker and then incubated at 37 °C for 60 min. Afterward, 8 µL of ThT (100 µM) was added to the incubated solution to reach a total volume of 200 µL, and each sample was incubated at room temperature (25 °C) for 5 min. For optimization of G3MBn (n = 1-5), we studied the effects of the number of gap bases between the dsDNA and the G-triplex (0 to 3), the length of the hairpin stem formed by the probe itself (4, 6, and 8 base pairs), and different structures of the G-triplex.
The reaction solution was prepared as mentioned above. Tris-HCl (50 mM) buffer containing 50 mM KCl, 100 mM NaCl, and 20 mM MgCl2 (pH = 7.6) was adjusted to 192 µL, mixed thoroughly with the reactants, and incubated in a shaker at 25 °C for 60 min; the ratio of G3MB6 to cDNA-9 was 1:1.2. Subsequently, 100 µM ThT (8 µL) was added to each sample to obtain a final volume of 200 µL and reacted at 25 °C for 5 min. For the sensitivity of Hg(II) detection, the final concentrations of the Hg(II) solution were 0, 2, 4, 6, 8, 10, 20, 40, 60, 80, 100, 150, 200, 250, 300, and 350 nM. In the selectivity experiment, the interfering ions Ni(II), Co(II), Mg(II), K(I), Fe(III), Fe(II), Mn(II), Ca(II), and Cu(II) at a final concentration of 3 µM were used as interfering substances alongside the 600 nM Hg(II) solution. Factors not under investigation were kept at the optimal detection conditions. Each sample was analyzed at least three times.
Detection of Hg(II) in Natural Samples
To evaluate the performance of the biosensor detection system in natural samples, G3MB6 and cDNA-9 were used to detect Hg(II) in water and milk, measuring the recovery rates of Hg(II). The spiked Hg(II) concentrations were 20, 100, and 250 nM, and each group of samples was analyzed at least three times.
Conclusions
Capitalizing on the enhancement of G3/ThT fluorescence induced by adjacent dsDNA situated at the 5′ terminus of the G3, our investigation yielded several critical findings. Firstly, we observed that a four-base stem within the hairpin structure formed by a single strand effectively curtailed the formation of G3, affording a notable advantage in reaction time compared with longer stem lengths. Simultaneously, the formation of dsDNA at the 5′ terminus aligned the G3 moiety into a parallel configuration, thereby amplifying the affinity of ThT towards the G3 and yielding a conspicuous fluorescence signal at 505 nm, coupled with the highest F/F0 ratio. We identified the optimal configuration, showing that a gap of two or three bases between the G3 and dsDNA profoundly influenced the integration of ThT into the G3 structure. Moreover, our study pioneered a novel Hg(II) detection strategy based on G3/ThT. Leveraging the interaction between G3MB6 and its complementary strand cDNA-9 in the presence of Hg(II), we facilitated the formation of a T-Hg(II)-T complex that mimics TA base pairs. This molecular rearrangement effectively exposed the G3 structure at the 3′ end of G3MB6, thereby harnessing the fluorescence enhancement of G3/ThT. Subsequent sensitivity and selectivity analyses underscored the system's remarkable efficacy and specificity for detecting Hg(II). Applying the G3MB6/ThT sensor to monitor Hg(II) levels in milk and tap water samples yielded promising outcomes, with the recovery rates confirming successful detection. The developed G3MB6/ThT sensor demonstrated remarkable precision in identifying Hg(II) in environmental samples. Its rapidity, simplicity, and accuracy position it as a valuable asset for assessing water quality and environmental conditions, promising impactful practical applications.
Figure 1 .
Figure 1.(A) Enhancement of the fluorescence of the G3/ThT complex with varying base gaps.(B) Impact of G3's position on the fluorescence signal.For this, 125 nM G3MB1/G3MB2 and 150 nM cDNA-1-cDNA-5, respectively, were reacted in a 50 mM Tris-HCl buffer (pH = 7.6) including 50 mM KCl, 100 mM NaCl, and 20 mM MgCl2 at 37 °C for 60 min.Subsequently, a final concentration of 4 µM of ThT was added and reacted for 5 min at 25 °C.The fluorescence signal was detected at Ex = 430 nm.
In the 0-300 nM range, with an increasing concentration of Hg(II), the fluorescence intensity gradually increased. Without Hg(II), the G3MB6 probe spontaneously formed a hairpin structure, preventing the ThT from intercalating and thus exhibiting a weak fluorescent signal. With an increase in the concentration of Hg(II), the T-base-rich cDNA-9 formed T-Hg(II)-T base pairs with G3MB6, disrupting the original hairpin structure of G3MB6 and allowing the formation of a G3 structure. The structural change enabled ThT to embed within the G3 and excite fluorescence. The fluorescence signal reached its maximum when the concentration of Hg(II) reached 300 nM, as the T bases of cDNA-9 in the system had formed T-Hg(II)-T mismatches, completely disrupting the hairpin structure. Further enhancement of the concentration of Hg(II) could not continue to improve the fluorescence signal. The change in the fluorescence signal (F-F0) under excitation by ThT was linearly correlated with a concentration of Hg(II) of 0-300 nM. The linear fitting equation was y = 1.219x + 13.55, where y stands for F-F0 and x stands for the concentration of Hg(II), with an R2 value of 0.9954. The sensor's limit of detection (LOD) for Hg(II) was 5.32 nM, calculated according to the 3σ/slope criterion, and based on the formula of 10σ/slope, the limit of quantification (LOQ) of the sensor was 17.72 nM. This indicated that the G3MB6/ThT sensor could quantitatively detect Hg(II) within the concentration range of 17.72-300 nM with high sensitivity. To investigate the selectivity of the G3MB6/ThT sensor for detecting Hg(II), we introduced 600 nM Hg(II) and 3 µM of other metal ions, including Ni(II), Mg(II), Fe(…), K(I), Co(II), Mn(II), Ca(II), and Cu(II), as interfering cations. The change in the fluorescence values (F-F0) is illustrated in Figure 5B. The results demonstrated that only Hg(II) produced a significant fluorescence enhancement, confirming the sensor's specificity.
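The 3σ/slope and 10σ/slope figures above can be reproduced in a few lines. Note that the blank standard deviation below is back-calculated from the reported LOD (it is not stated in the text), so it is an assumed value; the small mismatch with the reported 17.72 nM comes from rounding.

```python
# LOD/LOQ from the linear calibration y = 1.219x + 13.55 (y = F - F0,
# x = [Hg(II)] in nM). sigma_blank is back-calculated from the reported
# LOD and is therefore an assumption, not a measured quantity.
slope = 1.219                               # nM^-1, slope of the linear fit
lod_reported = 5.32                         # nM, per the 3*sigma/slope criterion
sigma_blank = lod_reported * slope / 3.0    # assumed blank standard deviation

lod = 3.0 * sigma_blank / slope             # 3σ/slope criterion
loq = 10.0 * sigma_blank / slope            # 10σ/slope criterion
print("LOD = %.2f nM, LOQ = %.2f nM" % (lod, loq))
```

By construction LOQ/LOD = 10/3, so a 5.32 nM LOD implies an LOQ of about 17.7 nM, consistent with the reported quantitative range.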
Table 1 .
Comparison with label-free fluorescence ("on-off" or "off-on") methods for detecting mercury (Hg(II)).
Table 2 .
Recovery rates of concentrations of Hg(II) in milk and tap water (n = 3).
"year": 2024,
"sha1": "7776ea54e9f31b5887ab7bc778d484f28e149fd7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms25158159",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c4a0fcca793ab84263e9c033301ac8742c4bd101",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Factors influencing the length of the incision and the operating time for total thyroidectomy
Background: The incision used for thyroid surgery has become shorter over time, from the classical 10 cm long Kocher incision to the shortest 15 mm access achieved with Minimally Invasive Video-Assisted Thyroidectomy. This rather large interval encompasses many different possible technical choices, even if we just consider open surgery. The aim of the study was to assess the correlation between incision length and operation duration with a set of biometric and clinical factors and establish a rationale for the decision on the length of incision in open surgery.
Methods: Ninety-seven consecutive patients scheduled for total thyroidectomy were prospectively evaluated. All operations were performed by the same team and the surgeon decided the length of the incision according to his personal judgement. Patients who had previously undergone neck surgery were excluded.
Results: The length of the incision was strongly correlated with gender, thyroid volume, neck circumference and clinical diagnosis and weakly correlated with the body mass index. Operation duration was only weakly correlated with gender and neck circumference. Multiple linear regression revealed that the set of factors assessed explained almost 60% of the variance in incision length but only 20% of the variance in operation duration. When patients were classified according to the distribution of their thyroid volume, cases within one standard deviation of the mean did not show a significant difference in terms of operation duration with incisions of various lengths.
Conclusions: Although thyroid volume was a major factor in driving the decision with respect to the length of the incision, our study shows that it had only minor effect on the duration of the operation. Many more open thyroidectomies could therefore be safely performed with shorter incisions, especially in women. Duration of the operation is probably more closely linked to the inherent technical difficulty of each case.
Background
The classical Kocher incision for thyroid surgery, which is approximately 10 cm long, has been the gold standard for more than a century. Since the introduction of Minimally Invasive (MI) surgery of the neck in the second half of the 1990s [1], several different techniques have been proposed, which have been classified as pure endoscopic techniques, video-assisted techniques and minimally invasive open surgery [2]. Recently the concept of "minimally invasive" has been questioned [3], which shows how lively the debate still is about how to obtain satisfactory cosmetic results and limit the overall invasiveness of the procedure without increasing the risk to patients. In the context of open surgery, different technical solutions have been proposed in the pursuit of shorter incisions, including the sectioning of strap muscles or flapless incisions [4][5][6].
If we consider a 3 cm long incision as the upper threshold of video-assisted or endoscopic techniques and 8 cm as the lower threshold of the conventional Kocher incision, a range of many possible choices is delimited. In this context, despite some published papers [7,8], a standardised classification and guidelines to determine the appropriate extent of open access remain to be elaborated.
This study is a single-surgeon prospective survey that aimed to assess the correlation of both incision length and operation duration with a set of biometric and clinical factors and establish a rationale for the decision on the length of the incision in open surgery.
Methods
Ninety-seven consecutive patients scheduled for a total thyroidectomy were prospectively evaluated. Patients with previous neck surgery were excluded. All operations were performed by the same medium-high volume team according to Ho [9], with a flow of approximately 100 total thyroidectomies/year. For each patient we recorded the body mass index (BMI), circumference of the neck (NC), distance between the suprasternal notch and thyroid cartilage (STD), volume of the thyroid gland (TV) as measured by ultrasound according to Ruggieri [10], length of the incision (LI), as well as the clinical and pathological diagnoses. The sample was composed of 77 female and 22 male patients, with a mean value ± s.d. for age of 54.6 ± 14.5, for BMI of 27.07 ± 5.2, for NC of 38.8 ± 4.3, for STD of 7.2 ± 1.3 and for TV of 27.7 ± 18.9. Clinical diagnoses were multinodular goitre (77), papillary carcinoma (12) and hyperfunctioning goitre (8). In 26 cases a concurrent chronic thyroiditis was present. The duration of the operation (DO) was measured from the moment when the incision was made until the time of wound closure.
Surgical technique
The surgeon (AA) decided the length of the incision according to his personal judgement, as determined by the patient's clinical data. An incision was made transversally between the cricoid cartilage and the suprasternal notch. The LI was then measured, the platysma was divided, and superior and inferior flaps were raised. The strap muscles were separated longitudinally and the gland was exposed. Thyroidectomy was then performed according to the conventional technique with the use of a harmonic dissector. Traction of the ipsilateral lobe out of the wound and lateral retraction of the wound margins with retractors were used to obtain a satisfactory exposure of the surgical field, even in cases of large multinodular goitres.
Statistical methods
All data were prospectively collected and stored in an electronic format. Comparison between means was performed by two-tailed Student's t-test for unpaired samples when two variables were compared and by analysis of variance (ANOVA) for more than two variables. The strength of correlation between variables was assessed by Pearson's r coefficient, while the overall predictivity of a set of factors for a dependent variable was computed by linear multiple regression. A correlation coefficient >0.50 was considered to be a sign of a strong correlation. The r-squared (R2) value was assumed to be a measure of the amount of variance of the dependent variable that was explained by the independent variable or by the regression model.
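As an illustration of the core computation (Pearson's r and the derived R2), the sketch below uses synthetic stand-ins drawn with the reported mean and SD of thyroid volume; it is not the study's data, and the relationship between TV and LI is simulated.

```python
import math
import random

random.seed(0)
# Synthetic stand-ins: thyroid volume (mL) around the reported 27.7 ± 18.9,
# and incision length (cm) simulated as a noisy linear function of volume.
tv = [max(5.0, random.gauss(27.7, 18.9)) for _ in range(97)]
li = [3.5 + 0.05 * v + random.gauss(0.0, 0.5) for v in tv]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(tv, li)
print("r = %.2f, R^2 = %.2f" % (r, r * r))  # r > 0.50 counts as "strong" here
```

Squaring r gives the share of variance in LI explained by TV alone, which is the single-predictor analogue of the regression model's R2.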
Results
Length of the incision as a dependent variable was strongly correlated with thyroid volume and neck circumference and weakly correlated with BMI. The duration of the operation was only weakly correlated with neck circumference. LI and DO were also weakly related to each other. Table 1 reports the matrix of r coefficients, while Figure 1 shows the two strongest correlations for LI.
Both LI and DO were significantly longer in men than in women, and LI was also different for different diagnoses and in cases with concurrent thyroiditis (Table 2).
The full model used for multiple linear regression explained almost 60% of the variance in incision length (R2 = 0.59). When DO was considered as a dependent variable, only 20% of the variance was explained (R2 = 0.20).
To ensure that thyroid volume was similar among patients operated upon with incisions of varying lengths and a range in DO, the patients were stratified into three classes according to the distribution of their thyroid volume: cases with a TV more than 1 standard deviation below the mean, cases within one standard deviation of the mean, and cases with a TV more than 1 standard deviation above the mean. DO was then compared to three classes of LI, defined as above according to their distribution around the mean. There was not a significant difference in DO among the classes of LI for any of the three strata of TV (Table 3).
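The ±1 SD stratification described above amounts to a simple three-way cut on thyroid volume. A minimal sketch, with illustrative volumes rather than the study's data:

```python
# Three-way stratification around mean ± 1 SD, as described in the text.
# The volumes below are illustrative values, not the study's records.
from statistics import mean, stdev

tv = [12.0, 18.5, 22.0, 27.7, 31.0, 45.0, 70.2, 9.5, 26.0, 52.3]  # mL
m, s = mean(tv), stdev(tv)

def tv_class(v):
    if v < m - s:
        return "small"      # more than 1 SD below the mean
    if v > m + s:
        return "large"      # more than 1 SD above the mean
    return "average"        # within 1 SD of the mean

groups = {c: [v for v in tv if tv_class(v) == c]
          for c in ("small", "average", "large")}
print({c: len(g) for c, g in groups.items()})
```

The same cut applied to LI yields the incision-length classes against which DO was compared within each volume stratum.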
Discussion
In Italy, thyroidectomy is the fifth most frequently performed operation in the Departments of General Surgery [11]. It is a procedure commonly performed across the country, in both large-volume centres and small hospitals. MI techniques for thyroid surgery (not only MIVAT but many other approaches [12]) are proposed with increasing frequency, but they require a high level of competence to minimise the length of the learning curve, which is otherwise rather long [13]. It has been stated that the "widespread application of this technique has been somewhat limited and, for practical purposes, has been confined to high-volume surgeons who have plentiful skilled assistants" [14]. No clear advantage of MI techniques in terms of medium- to long-term outcomes has been demonstrated [15]; it is therefore reasonable to aim for standardisation and technical advancement of the conventional open technique in order to reduce the invasiveness of the procedure. Our observational study was based on the underlying hypothesis that there is a latent tendency to overestimate the difficulty of the operation and to create a wider incision than is strictly needed. We empirically showed that the main factors related to the length of the incision are gender, neck circumference and thyroid volume. Diagnosis and pathology had an influence, but these factors could be relevant in an indirect way, because of their influence on thyroid volume. Neither diagnosis nor the presence of thyroiditis was related to DO. These findings are consistent with the findings of Brunaud [7], who also found a correlation of LI with BMI that was stronger than the one we found. This difference could be due to a difference in the samples. Our patients tended to be overweight (mean BMI = 27.07), which could have counteracted the effect of this factor.
BMI as a risk factor has been studied in a large database of patients [16] and was positively correlated with a longer operation time and with higher morbidity, but not to a clinically significant extent. In our opinion NC is a better candidate than BMI as an element on which to base the decision regarding the incision length, but a study specifically tailored to this goal should be designed. We found only a weak correlation between the length of the incision and the duration of the operation (r = 0.33). In particular, when patients with thyroids of similar volume were operated upon with incisions of varying lengths, the time needed for the operation was not significantly different for smaller incisions compared to the longer ones.
Terris [8] proposed a classification system for MI thyroid surgery, based on two factors: the size of the largest nodule and BMI. The authors then divided the continuum of possible incision lengths (from 0 to >6 cm) into four classes. These classes were defined a priori and then validated by retrospectively grouping a series of 359 patients. Information on DO was not available, but the results of clustering yielded mean incision lengths of 2.0, 3.3, 4.9 and 8.3 cm for the four classes, respectively. These results coincide only partially with ours regarding the range of incisions used (4 to 7 cm); in particular, there is a wide gap between the third and the fourth class. Our work provides further elements for setting guidelines to assist the surgeon in choosing a better incision for use in open surgery and suggests that the limits of incision length in open surgery can be lowered.
A limitation of this study was that, because of the prospective design, the surgeon knew that his decision about the length of the incision was going to be recorded; this could have altered his judgement. To limit this bias, the interim results of the on-going study were not disclosed. A comparison between the figures of the first and the last set of operations during the study showed that performance remained relatively stable regarding mean LI and DO, as if the on-going study had not altered the surgeon's behaviour. The decision was always made according to his subjective evaluation, without any formal decision process based on empirical data. In this sense the surgeon's decision regarding length was a consequence of his assessment of the expected technical difficulty of the operation.
Conclusions
Our data support the hypothesis that there is a tendency to use longer incisions than needed. More open thyroidectomies could be safely performed with shorter incisions. This is especially true for women with a narrow neck and a thyroid of small volume. The duration of the operation in itself is probably more closely linked to the inherent technical difficulty of each case.
"year": 2012,
"sha1": "fa06dac690e8bff69ded8ec91f75372b8aa5b275",
"oa_license": "CCBY",
"oa_url": "https://bmcsurg.biomedcentral.com/track/pdf/10.1186/1471-2482-12-15",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f46c6a65ae07d65e9bc487aade9a678c68234c67",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Indications of Primary Cesarean Section In Multipara
Objective: To evaluate the indications of primary cesarean section in multipara and to assess the obstetric outcome, including maternal and fetal morbidity and mortality and the perinatal outcome. Study Design and Setting: It was a hospital-based study of primary caesarean sections (CS) performed on multiparous patients over a two-year period between January 1, 2016, and December 2017 at Jinnah Medical College Hospital, Karachi. Methodology: Multiparous patients were those who had delivered through the vaginal route one or more times (i.e., at 28 weeks of gestation or above) or had 1–4 children, and grand-multiparous were those who had 5 or more children. All the cases included in the study were hospital-based and the cesarean section was decided by a specialist. The procedure was performed by registrars and specialists. The selected patients were followed up until they were discharged from the ward, with a minimum hospital stay of three days. Data were compiled and analysed with SPSS version 23. Results: During the two-year study period, the number of total deliveries was 2064. The primary CS rate in multipara was 37.17%. These women were more likely to have an emergency caesarean section than an elective one (85% vs. 15%). The mean age of the women was 29.5 years; booked cases were 72.5% and unbooked were 27.5%. Regarding the indications for cesarean section, non-progress of labour ranked first (25.5%), followed by fetal distress (20%), pre-eclampsia (12%) and antepartum hemorrhage (10.5%), among others. An increased incidence of morbidity and mortality was seen in patients undergoing cesarean section for different reasons. Conclusion: Primary caesarean sections in multipara comprise only a small percentage (37.17%) of total deliveries but were related to high maternal and fetal morbidity.
INTRODUCTION:
Cesarean section is one of the most commonly performed surgical procedures; in many cases it can be life-saving for the mother, the fetus, or both.1 Cesarean section is generally performed these days when a vaginal delivery would put the baby's or mother's life or health at risk.2
In the past years there has been a significant rise in the rate of caesarean section (CS) in both developed and developing countries, rising from about 5% in developed countries to more than 50% in some regions of the world.3 According to a World Health Organization (WHO) study, during the period of 2007-8 the rates of caesarean section in China and other Asian countries were 46% and 27%, respectively,4 in spite of the 10-15% suggested by WHO.10 With the passage of time there has been a change in the indications for caesarean section, and the rates of both primary and repeat caesarean delivery have been on the rise; a study by Emma L Barber et al concluded that primary caesarean births accounted for 50% of the increase in the caesarean section rate.5
It is essential to evaluate the several indications and the maternal and fetal outcomes associated with cesarean delivery, as several studies have established that cesarean section carries a greater risk of maternal morbidity and mortality in comparison to vaginal delivery.1 Primary caesarean section in multipara means the first caesarean section performed in a woman who had previously delivered a viable fetus through the vaginal route. Since these multiparous women have had previous uneventful labours, a sense of false security prevails in them, and as a result such mothers often overlook their regular antenatal checkups and labour. There are still many doctors with an attitude of satisfaction that once a woman has passed through her first pregnancy and labour, she has practically nothing to worry about in her subsequent childbirths.6 The rapid rise in the rate of cesarean section in recent years warrants serious concern. Pakistan, being a developing country, has shown an alarming increase in the rate of cesarean section deliveries: Haidar G et al from Hyderabad and Shamshad from Abbottabad, Pakistan, reported caesarean section rates as high as 67.7%7 and 45.1% in 2007.8 The rationale of the study was to assess the indications and outcomes of primary CS in multipara in Karachi.
METHODOLOGY:
It was a hospital-based study of primary caesarean sections performed on multiparous patients over a two-year period between January 1, 2016, and December 2017 at Jinnah Medical College Hospital, which is a tertiary care hospital. Multiparous patients were those who had delivered through the vaginal route one or more times (i.e., at 28 weeks of gestation or above) or had 1-4 children, and grand-multiparous were those who had 5 or more children. All the cases included in the study were hospital-based and the cesarean section was decided by a specialist. The procedure was performed by registrars and specialists. The selected patients were followed up until they were discharged from the ward, with a minimum hospital stay of three days. The patients' information was collected with the help of the doctor present on duty. The data collected included demographic information (age, parity, gravidity, and maternal medical history); specific information on maternal or fetal pregnancy-related complications; booked or unbooked status; mode of delivery; gestational age (measured according to the last menstrual period and confirmed by an ultrasound examination within 20 weeks of gestation or by the first-trimester ultrasound measurement of the crown-rump length of the fetus); all primary indications for cesarean section; the newborn's sex, birth weight and Apgar score; the maternal and perinatal outcomes; and the need for ICU admission. All adverse maternal and fetal outcomes were recorded. All primigravida women and those with a previous cesarean section were excluded from the study. Informed consent was obtained from all participants. The study was approved by the hospital's research and ethics committee.
Statistical analysis was conducted using SPSS version 23.
For continuous variables the minimum, maximum, mean, and standard deviation were calculated. The chi-square test was used for categorical variables.
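As an illustration of the chi-square computation on a 2×2 contingency table, the sketch below uses invented counts (booked/unbooked versus emergency/elective), not the study's table:

```python
# Pearson chi-square statistic for a 2x2 contingency table.
# Counts are invented for illustration, not taken from the study.
obs = [[120, 25],   # booked:   emergency, elective
       [50, 5]]     # unbooked: emergency, elective

row_totals = [sum(r) for r in obs]
col_totals = [sum(c) for c in zip(*obs)]
grand_total = sum(row_totals)

chi2 = sum(
    (obs[i][j] - row_totals[i] * col_totals[j] / grand_total) ** 2
    / (row_totals[i] * col_totals[j] / grand_total)
    for i in range(2) for j in range(2)
)
df = (len(obs) - 1) * (len(obs[0]) - 1)   # degrees of freedom = 1 for a 2x2 table
print("chi2 = %.2f, df = %d" % (chi2, df))
```

The statistic is then compared against the chi-square distribution with 1 degree of freedom (critical value 3.84 at p = 0.05), which is the test SPSS reports for such tables.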
RESULTS:
During the two years the total number of deliveries was 2064, of which vaginal deliveries were 1278 (61.91%) and total caesarean sections were 786, a rate of 38.08%. The multiparous women in whom a primary cesarean section was done were 200 in number, and the cesarean section rate came out to be 37.17%.
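The headline rates follow directly from the counts reported above (1278 and 786 out of 2064, up to rounding), which can be checked in a couple of lines:

```python
# Cross-checking the reported delivery rates against the raw counts.
total_deliveries = 2064
vaginal = 1278
caesarean = 786
assert vaginal + caesarean == total_deliveries   # counts are internally consistent

vaginal_rate = vaginal / total_deliveries * 100  # reported as 61.91%
cs_rate = caesarean / total_deliveries * 100     # reported as 38.08%
print("vaginal %.2f%%, CS %.2f%%" % (vaginal_rate, cs_rate))
```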
These women were more likely to have an emergency caesarean section than an elective one (85% vs. 15%). The overall incidence of primary emergency and elective caesarean section rates is shown in Table 1. The mean age of the women was 29.5 years, with a range from 15 to 45 years; 66% of the women presented between 26-35 years of age and 72% between 31-40 years. Among the 200 multiparous patients undergoing caesarean section, 87.5% presented with parity 1, 2, 3 or 4, while grand multiparity (5+ births) was prevalent in 12.5% of all women. The prevalence of cesarean section according to parity is presented in Table 2. Booked cases were 402 (72.5%) and unbooked were 136 (27.5%). The overall indications for cesarean section are shown in Table 3, in which non-progress of labour ranked first (25.5%), followed by fetal distress (20%), etc.
An increased incidence of morbidity and mortality was seen in patients undergoing cesarean section for different reasons. Fifteen patients had a blood transfusion; six patients had a prolonged hospital stay (due to wound infection, obstructed labour, or blood pressure and blood sugar monitoring); two patients had an obstetrical hysterectomy; and forty-three babies were admitted to the NICU due to fetal distress, neonatal jaundice, hypoglycemia, growth restriction and neonatal sepsis.
DISCUSSION:
A woman who has had a normal vaginal delivery may still require a caesarean section for safe delivery. The average labor curve continues to change from low parity to multiparity, but not toward an ever-improved progress. The primary caesarean sections in multipara comprise a small proportion of total deliveries, i.e., 9.6% in our study, which was relatively less than primary caesarean in primipara, but they were actually associated with high maternal and fetal morbidity, which is of concern.
In this study primary LSCS in multipara constituted 37.17%, which is still higher than the World Health Organization recommendation of 15%,10 but is in the range of cesarean sections performed in the United States (34%),2 and lower than the highest level of 46% in China4 and in other parts of Pakistan (67.7%7 and 45.1% in 20078). During labour it is now easier to determine the risks relating to the mother and the baby earlier due to the increased use of technology, which can be somewhat related to the increase in the number of cesarean sections.
The rate of emergency caesarean section was much higher (85%) than that of elective caesarean section (15%), which is similar to earlier studies in Pakistan,12,13 the Saxena N et al study,11 and Nigeria,14 etc. This might be because of the prevalence of factors such as cephalo-pelvic disproportion and prolonged obstructed labour, which are diagnosed in labour and explain the choice of emergency cesarean section instead of instrumental vaginal delivery; another probable explanation could be the great aversion to operative delivery in this environment, which makes women 'surrender' to surgery as a last resort.
In the present study the maximum number of women undergoing primary caesarean section were in the age group of 31-40 years (72%), and 66% presented between 25-35 years, which was also found in other research, i.e., the Partha Saradhi et al study, whereas in the Adnan A. Abu Omar series the maximum number of patients were in the age group of <25 years.15
Prolonged labor and fetal compromise remained the major indications for emergency cesareans. The commonest indications observed in this study were failure to progress (25.5%), fetal distress (20%), pre-eclampsia (12%), APH (10.5%), etc., which are similar to findings from other studies. A study conducted in the US,16 one in urban Bangladesh,17 and the study by Boyle A, Reddy UM and Landy HJ18 reported the same indications for primary cesarean deliveries. The majority of our patients (72.5%) were booked cases while 27.5% were unbooked, which is comparable to the 78.4% of the Partha Saradhi et al study.
An increased incidence of morbidity and mortality is seen in patients undergoing cesarean section for various reasons.
In the study the postoperative complication rate was 11.5%, which is close to the 12% of cases in the Sethi Pruthwiraj et al study.1 Patients who had a blood transfusion were 7.5%; patients with a prolonged hospital stay were 3% (due to wound infection, obstructed labour, and blood pressure and blood sugar monitoring); and two patients had an obstetrical hysterectomy due to postpartum hemorrhage. Regarding neonatal morbidity, 21.5% of babies were admitted to the NICU due to fetal distress, neonatal jaundice, hypoglycemia, intrauterine growth restriction and neonatal sepsis, similar to the results observed in other studies.5,1 There was no maternal death observed in the study. This may be because of the skilled obstetrician attendant at birth, effective care during labour, management of pregnancy complications, availability of antibiotics, blood transfusion facilities, effective neonatal intensive care and early referral. The limitation of the study was that it was confined to one public healthcare hospital in one territory. Further analysis of the indications and prevalence of primary caesarean section in the multiparous may be performed in the respective areas.

CONCLUSION:
Primary caesarean sections in multipara comprise only a small percentage of total deliveries but are related with high maternal and fetal morbidity. Due to previous normal deliveries these women pass in a subnormal state of health throughout their pregnancy and labour, so good antenatal and intrapartum care and expert supervision should be emphasized periodically for any unforeseen emergencies.

REFERENCES:

Table 1:
Frequency of primary caesarean section

Table 3:
Indication and Parity Crosstabulation

JBUMDC 2019; 9(2):105-108
"year": 2019,
"sha1": "444403704b1f21d349809b40c30e790edad3e4e8",
"oa_license": "CCBYNC",
"oa_url": "https://jbumdc.bahria.edu.pk/index.php/ojs/article/download/348/324",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "444403704b1f21d349809b40c30e790edad3e4e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Chikungunya Virus: An Emergent Arbovirus to the South American Continent and a Continuous Threat to the World
Chikungunya virus (CHIKV) is an arthropod-borne virus (arbovirus) of epidemic concern, transmitted by Aedes spp. mosquitoes, and is the etiologic agent of a febrile and incapacitating arthritogenic illness responsible for millions of human cases worldwide. After major outbreaks starting in 2004, CHIKV spread to subtropical areas and the western hemisphere, coming from sub-Saharan Africa, South East Asia, and the Indian subcontinent. Even though CHIKV disease is self-limiting and non-lethal, more than 30% of the infected individuals will develop chronic disease with persistent severe joint pain, tenosynovitis, and incapacitating polyarthralgia that can last for months to years, negatively impacting an individual's quality of life and socioeconomic productivity. The lack of specific drugs or licensed vaccines to treat or prevent CHIKV disease, associated with the global presence of the mosquito vector in tropical and temperate areas, representing a possibility for CHIKV to continually spread to different territories, makes this virus a public health burden. In South America, where Dengue virus is endemic and Zika virus was recently introduced, the impact of the expansion of CHIKV infections, and of co-infection with other arboviruses, still needs to be estimated. In Brazil, the recent spread of the East/Central/South Africa (ECSA) and Asian genotypes of CHIKV was accompanied by a high morbidity rate and acute cases of abnormal disease presentation and severe neuropathies, which is an atypical outcome for this infection. In this review, we will discuss what is currently known about CHIKV epidemics, the clinical manifestations of the human disease, the basic concepts and recent findings on the mechanisms underlying virus-host interaction, and CHIKV-induced chronic disease in both in vitro and in vivo models of infection.
We aim to stimulate scientific debate on how the characterization of the replication, host-cell interactions, and pathogenic potential of the new epidemic viral strains can contribute to developments in the virology field and shed light on strategies for disease control.
INTRODUCTION
The Chikungunya virus (CHIKV) is an arthropod-borne virus (arbovirus) globally distributed in tropical areas that has recently spread to subtropical areas and the western hemisphere. CHIKV is an arthritogenic virus belonging to the family Togaviridae, genus Alphavirus, and is the etiological agent of the acute febrile illness Chikungunya fever (CHIKF), which has caused millions of human cases since major outbreaks starting in 2004 (Sharp et al., 2014). The disease's name comes from a word in the Makonde (Kimakonde) language of southern Tanzania meaning "to bend over," referring to the posture assumed by individuals who display the most severe forms of the disease, with extreme and incapacitating joint pain. Although CHIKV infection is associated with low mortality rates, it imposes severe morbidity on acutely infected individuals. The debilitating joint pain can persist for several months to years as a clinical outcome known as "post-chikungunya chronic polyarthralgia" (pCHIKV-CPA), which deeply affects the patient's quality of life (Consuegra-Rodríguez et al., 2018). Since 2004, substantial urban outbreaks of CHIKV infection have occurred throughout the tropical and subtropical regions of the world, particularly in geographical areas inhabited by the vector Aedes spp. mosquitoes (Petersen and Powers, 2016). More recently, CHIKV outbreaks occurred in Africa, Asia, Europe, the Americas, and the Pacific islands (Petersen and Powers, 2016). This unprecedented spread of CHIKV infections was accompanied by high morbidity, several cases of neuropathies, and atypical disease presentations, making CHIKV a major global health threat. In this scenario, the characterization of the infectious and pathogenic potential of the currently circulating virus isolates will help us to understand and, more effectively, control the disease.
The first isolation of CHIKV, and the report of an epidemic, occurred in 1952/53 in Tanganyika Province, in present-day Tanzania, with infected individuals presenting disabling joint pain, severe fever, and, in some cases, rash (Lumsden, 1955;Ross, 1956). The virus is transmitted by the bite of infected female mosquitoes, and its circulation involves two different transmission cycles: (1) a sylvatic cycle of enzootic transmission between non-human primates and Aedes spp. mosquitoes, such as Ae. (Diceromyia) furcifer, Ae. (Diceromyia) taylori, Ae. (Stegomyia) luteocephalus, Ae. (Stegomyia) africanus, and Ae. (Stegomyia) neoafricanus, which occasionally spills over to humans; and (2) an urban cycle involving humans and Ae. aegypti and Ae. albopictus. The importance of the sylvatic cycle is highlighted by a recent study that detected the virus in non-human primates from Malaysia and revealed a high similarity between human and non-human primate sequences of CHIKV; thus, these monkeys may be both hosts and reservoirs for CHIKV (Suhana et al., 2019). In addition, CHIKV has been detected in other zoophilic mosquitoes (Ae. dalzieli, Ae. argenteopunctatus, Cx. ethiopicus, and An. rufipes), suggesting that other species may participate in a secondary sylvatic cycle (Diallo et al., 1999).
Phylogenetic studies show that CHIKV originated in Africa, although the specific region where the virus evolved could not be pinpointed, and subsequently spread to Asia. These studies also classify viral isolates into three main lineages: the enzootic East/Central/South African (ECSA) and West African lineages, and the endemic/epidemic Asian strains. The Asian lineage can be subdivided into two clades: the Indian clade, which is now extinct, and the Southeast Asian clade, which continues to circulate (Powers et al., 2000;Volk et al., 2010). The recent epidemic that affected La Réunion Island and other islands of the Indian Ocean revealed a new strain derived from the ECSA group, which was named the Indian Ocean lineage (IOL) (Njenga et al., 2008). The worldwide distribution of CHIKV genotypes is represented in Figure 1A.
Mutations in the viral genome impact viral propagation and the adaptation of these lineages to different vectors. Ae. aegypti and Ae. albopictus mosquitoes are the main vectors in the urban cycle of CHIKV transmission, and studies have shown that genomic differences among circulating CHIKV account for its transmission by each of these vectors. For instance, the presence of the A226V variant in the envelope (E1) gene of CHIKV was related to an increase in viral infectivity, dissemination, and transmission in Ae. albopictus, resulting in the wide spread of the virus (Tsetsarkin et al., 2007). This mutation did not confer any advantage for transmission in Ae. aegypti. Following the selection of A226V, the adaptive substitutions L210Q and K252Q (E2 protein), which arose independently in the IOL strain in India, are associated with a further increase of CHIKV dissemination in the Ae. albopictus vector (Tsetsarkin and Weaver, 2011;Tsetsarkin et al., 2014). In turn, the variants K211E, in the E1 gene, and V264A, in the envelope (E2) gene, lead to an increase in viral dissemination and transmission in Ae. aegypti but not in Ae. albopictus (Agarwal et al., 2016). Moreover, the T98A variant in E1 enhances the vector-adaptability effect of A226V, since epistatic interactions between E1-98T and E1-A226V are restrictive. In another study, the variant G60D in E2 increased CHIKV infectivity in Ae. albopictus in the presence of either alanine or valine at position 226 of the E1 protein; this change also increases infectivity in Ae. aegypti. The E2 variant I211T increases CHIKV infectivity exclusively in Ae. albopictus, but only when associated with the A226V change. The I211T variant may be related to the maintenance of CHIKV in the enzootic African cycle, since it was detected in most sequences from the ECSA clade obtained before 2005 (Tsetsarkin et al., 2009).
Mutations occurring in the 3′-UTR could also contribute to vector adaptability: a 177-nt duplication found in the Caribbean strain of CHIKV, and confirmed in sequences from Mexico, Trinidad, and the Dominican Republic, conferred a growth advantage in insect cell cultures to viruses harboring this duplication over the Asian strain and other Caribbean strains lacking it (Stapleford et al., 2016). The most relevant variants, as well as their impact on each vector and on virus infectivity, are summarized in Table 1.
During the 1960s and 1970s, epidemics of CHIKV were mostly restricted to Africa and Southeast Asia, in countries such as South Africa, the Democratic Republic of Congo, Uganda, Indonesia, Thailand, and India. However, this scenario started to change in 2004 with reports of an outbreak in Lamu, Kenya, beginning in May and reaching its peak in July, with an estimated 75% of the island's population affected (Sergon et al., 2008). The disease then spread through Mombasa and the Comoros islands. Other islands of the Indian Ocean were affected, including La Réunion Island where, between March 2005 and April 2006, 244,000 cases were reported (Renault et al., 2007).

FIGURE 1 | (A) Global distribution of CHIKV lineages. CHIKV infections are more likely to occur in tropical and sub-tropical regions of the globe, highlighted in red on the map. The geometric forms represent the different lineages of CHIKV that are currently in circulation. (B) The number of confirmed cases is shown for each country individually. No autochthonous transmission has been reported in Chile and Uruguay, only imported cases. The Asian strain first reached South America through French Guiana, but the ECSA strain arrived through northeast Brazil and became predominant in Brazil. The colors represent the circulation of Aedes aegypti and Ae. albopictus in each country, as indicated in the legend.
The variant E1-A226V in the viral envelope glycoprotein was detected for the first time in viruses that circulated during the La Réunion epidemic (Tsetsarkin and Weaver, 2011). This adaptation of CHIKV to Ae. albopictus allowed regions of the planet that had never reported CHIKF cases, such as Italy (during July and August 2007) (Fadila and Failloux, 2006;Rezza et al., 2007) and France (during 2010 and 2014) (Grandadam et al., 2011;Delisle et al., 2015), to experience CHIKV disease.
The adaptation of CHIKV to Aedes albopictus has repeatedly been associated with the spread of CHIKF to new areas of the globe. In fact, full-length viral sequences have revealed unique adaptive variants, on at least three occasions, that conferred a selective advantage for CHIKV transmission by Ae. albopictus (Tsetsarkin et al., 2007;Beesoon et al., 2008;De Lamballerie et al., 2008;Dubrulle et al., 2009;Severini et al., 2018). (2013), and Philippines (2013) (Lanciotti and Valadere, 2014). In February 2014, CHIKV had already reached continental territory, when autochthonous infections were observed in French Guiana, the first country in South America to declare CHIKV infection. At this point, the dispersion of CHIKV to other American countries was only a matter of time. From 2014 to 2015, more than 16,000 individuals were infected in French Guiana. Importantly, several atypical cases were observed, such as neurological disorders, cardio-respiratory failure, acute hepatitis, acute pancreatitis, renal disorders, and muscular impairment. Only two deaths associated with CHIKF were documented during this period (Bonifay et al., 2018; Figure 1B).
EPIDEMIOLOGY OF CHIKV ON THE SOUTH AMERICAN CONTINENT
CHIKV cases arose in Venezuela in June 2014, from recent travelers returning from the Dominican Republic or Haiti, and in July 2014, autochthonous transmission was reported. Phylogenetic analysis showed that the CHIKV circulating in Venezuela clustered with the Asian genotype (Caribbean clade) and did not harbor the main substitutions associated with viral adaptation to Ae. albopictus (Camacho et al., 2017).
Ecuador was another country that confirmed community transmission of CHIKV early on. Berry et al. (2020) showed that CHIKV was introduced into Ecuador at multiple time points in 2013-2014, and these introductions were all associated with the Caribbean islands, despite the increasing influx of Venezuelan citizens. From 2014 to 2017, Ecuador reported 35,714 CHIKF cases. Transmission for two or more years after the 2015 epidemic peak suggests that CHIKV has become endemic in this country. The CHIKF outbreaks in Ecuador were associated with the Asian strain harboring the E1:A98T and E1:K211E amino acid changes. Since Ae. aegypti is the main mosquito vector in Ecuador, these data indicate that CHIKV had not acquired all the adaptive substitutions necessary to increase viral fitness within this vector (Berry et al., 2020).
Autochthonous CHIKV cases were confirmed in Colombia in September 2014, and during the 2014-2015 epidemics more than 460,000 CHIKF cases diagnosed by clinical features were reported, the majority of them occurring in women, with 12 fatal cases. The rate of new infections is decreasing over time, although Colombia is the country with the third-highest number of infections, according to the Pan American Health Organization (PAHO). Characterization of Colombian CHIKV genomes determined that they belong to the Asian strain and cluster with three distinct Asian strain branches: Panama (Caribbean Colombia, Huila); Nicaragua (Cauca and Risaralda); and St. Barts (Bogotá, D.C.), which may be the result of three independent introductions. Each subclade showed non-synonymous mutations (nsP2-A153V, Y543H, G720A; nsP3-L458P; and Capsid R78Q) that may impact CHIKV fitness and pathogenesis (Rico-Mendoza et al., 2019;Villero-Wolf et al., 2019; Figure 1B).
Records of CHIKV infection cases in Bolivia are extremely scarce. Nevertheless, CHIKV has circulated in this country since March 2015, when 204 cases were reported (Carbajo and Vezzani, 2015). In 2017, 3,367 cases were reported across the country, including cases diagnosed only by clinical features (Escalera-Antezana et al., 2018; Figure 1B).
Since 2014, Peru has reported 27 confirmed cases of CHIKV, all of them imported from neighboring countries such as Venezuela and Colombia (Ministerio de Salud, Dirección General de Epidemiología, 2015). The Aedes aegypti vector circulates in 18 territories of this country, together with other arboviruses such as ZIKV and dengue virus. The first case of autochthonous transmission of CHIKV was reported in 2015 and, since then, 951 cases of autochthonous transmission have been confirmed in the country according to PAHO. Different regions of Peru present divergent rates of CHIKV infection, varying from 4.6 to 9.4% of all cases of febrile illness (Alva-Urcia et al., 2017;Sánchez-Carbonel et al., 2018), demonstrating that several factors can impact the epidemiology of CHIKV infection, including molecular diagnostics, which remain poorly established and poorly accessible in the country, and environmental factors, such as natural climatic events, that can increase the frequency of infections.
Some South American countries situated mostly below the Tropic of Capricorn have a temperate climate, with warm summers and low temperatures in the winter season, which impairs the establishment of sizable mosquito populations and, consequently, negatively impacts the transmission of arboviruses. The first imported CHIKV case in Chile was described in 2014, from the Dominican Republic. Since then, all cases reported in Chile have been imported, mainly by travelers returning from the Caribbean islands. Argentina, however, presented autochthonous CHIKV transmission in 2016, and more than 320 laboratory-confirmed cases were reported, according to PAHO (Perret et al., 2018; Figure 1B).
In 2017, 123,087 autochthonous cases were confirmed in the American continent (Pan American Health Organization, 2020). In Brazil, an unprecedented dissemination of CHIKV infections has been occurring since 2015, with an accumulated 712,990 confirmed cases notified over a 4-year period. This outbreak had its major incidence in the Southeast and Northeast regions of the Brazilian territory, corresponding to two-thirds of all confirmed Brazilian cases, mainly in periurban and highly populated urban areas of the country.
The first local transmission of CHIKV in Brazil occurred in September 2014 in the city of Oiapoque, Amapá state, in the Northern region of the country, and was related to the Asian lineage. Soon after this first autochthonous detection, CHIKV infections of the ECSA genotype were notified in the city of Feira de Santana, Bahia state, in the north-eastern region of Brazil. Asian and ECSA genotypes co-circulate in the North and Northeast regions of Brazil (Nunes et al., 2015). The CHIKV ECSA strain subsequently spread to other northeastern states, such as Paraíba, Sergipe, Pernambuco, and Alagoas, and in 2017 this strain reached the Amazon region. Interestingly, while the north and southeast regions of Brazil had the majority of CHIKV cases in 2016, Roraima, the northernmost state of Brazil, located in the Amazon basin, only saw an exponential increase in cases in 2017. All strains analyzed from this outbreak in Roraima were of the ECSA genotype, and an extended analysis demonstrated that most cases circulating in Roraima and Amapá since 2015 were of CHIKV ECSA origin (Naveca et al., 2019). The CHIKV Asian strain was first identified in Roraima in 2014, in people returning from Venezuela, but the infection did not spread from these two cases. These data demonstrate the high potential of CHIKV ECSA spread in the Brazilian territory.
CHIKV ECSA also reached the southeast region of Brazil, causing large outbreaks. Increasing evidence indicates that the ECSA genotype has predominated in the Southeast region, especially in Rio de Janeiro. Xavier et al. (2019) sequenced 11 near-complete CHIKV genomes from clinical samples of patients from Rio de Janeiro; together with 2 whole CHIKV genomes sequenced from positive individuals by Cunha et al. (2017) during the 2016 outbreak and 10 partially sequenced samples (CHIKV E1 gene) from Souza et al. (2017), the phylogenetic reconstructions confirmed that the ECSA strain is the driving force of the epidemics in Rio de Janeiro (Figure 1B).
Phylogenetic analysis also demonstrated that the ECSA strain in Rio de Janeiro originated from the north-eastern region of Brazil. Xavier et al. (2019) also showed that there is high human mobility between the two regions and that the epidemic waves of the north-eastern region and Rio de Janeiro state were synchronous from late 2015 to the early months of 2016. Moreover, they estimated that CHIKV was circulating unnoticed for at least 5 months before the first reports of autochthonous transmission in Rio de Janeiro. Another work estimated an even earlier introduction of the ECSA genotype into Rio de Janeiro state: a time-scaled phylogenetic tree placed the introduction as early as 2014 (Souza et al., 2019).
Corroborating data from Cunha et al. (2017), the genomes of the circulating CHIKV ECSA strain did not carry the E1-A226V and E2-L210Q Ae. albopictus adaptive changes. In fact, in Brazil, Ae. aegypti is the main circulating mosquito vector (Cunha et al., 2017;Souza et al., 2019;Xavier et al., 2019). Thus, it is expected that mutations conferring high viral fitness in Ae. albopictus have not become fixed in these locales.
Although the Brazilian ECSA CHIKV does not harbor E1-A226V and E2-(L210Q, V264A), which were also related to CHIKV-vector adaptability (Tsetsarkin and Weaver, 2011), unique mutations such as E1-K211T, E1-N335D, E1-A377V, and E1-M407L are present together with E2-A103T (Cunha et al., 2017;Souza et al., 2017). The impact of these mutations on CHIKV adaptability to Aedes spp. vectors still needs to be addressed but, regarding the polymorphic E1 position 211, the E1-K211E mutation has been implicated in better viral transmission in Ae. aegypti but not in Ae. albopictus (Agarwal et al., 2016). Importantly, the unprecedented spread of the ECSA strain in Brazil, which replaced the Asian strain in the northern part of the country, suggests a greater transmission potential of this strain.
The dynamics of CHIKV disease in South America, its spread, and the expected outcomes can be influenced by several complex factors. These include climate patterns, such as rainfall, humidity, and ocean-atmosphere phenomena like the El Niño-Southern Oscillation (ENSO); vector habitat availability; adaptability of the virus to new vector species; co-circulation of other arboviruses; the heterogeneity of health systems in each country; the country's economy and Human Development Index; the mobility of individuals (through travel, exodus, and other movements); the efficiency of vector control; and the capacity for surveillance and epidemiological vigilance, with proper actions to stop outbreaks. Many of these parameters relate to vector biology and viral adaptability. In any case, the biological behavior of each CHIKV strain cannot be ruled out, and the characterization of different CHIKV strains in terms of replication, virus-cell interaction, and pathogenesis urgently needs to be determined.
Virus Particle, Genomic Structure, and the Replication Cycle
The CHIKV viral particle carries the 11.8-kb, single-stranded, positive-sense genomic RNA, which is arranged in two modules: the 5′ two-thirds encodes the non-structural proteins (nsP1-4) and the 3′ one-third encodes the structural proteins (CP, E3, E2, 6K, E1) (Knipe et al., 2001); additionally, the 3′ one-third can be translated as a truncated polyprotein composed of CP, E3, E2, and a C-terminal 6K fused with a Transframe (TF) peptide (Firth et al., 2008;Snyder et al., 2013). The 5′ terminus is capped with a 7-methylguanosine and the 3′ terminus is polyadenylated. The genomic RNA is enclosed by a capsid formed by 240 copies of a single Capsid (CP) protein arranged as an icosahedron with T=4 symmetry. This nucleocapsid is surrounded by an external phospholipid envelope, formed essentially of cholesterol and sphingolipids derived from the host cell plasma membrane, containing the viral glycoproteins E1 and E2. Each CP interacts with the cytosolic domain of E2. The glycoproteins are arranged as trimeric spikes composed of E1-E2 heterodimers, and each viral particle contains 80 spikes, leading to the incorporation of 240 copies each of E1 and E2 (reviewed in Knipe et al., 2001;Jin and Simmons, 2019). Glycoproteins E1 and E2 mediate CHIKV infection of susceptible cells, with E2 responsible for receptor binding while E1 mediates fusion of the viral and host membranes.
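The particle stoichiometry described above is internally consistent, and the arithmetic can be sketched briefly. Only figures stated in the text are used (T=4 icosahedron, 80 trimeric spikes); the genome-module listing is a plain illustration of the two-ORF layout, not real coordinates:

```python
# Sketch of CHIKV particle stoichiometry, using only the counts in the text.
T = 4                       # triangulation number of the icosahedral capsid
capsid_copies = 60 * T      # icosahedral rule: 60*T subunits -> 240 CP copies

spikes = 80                 # trimeric spikes per virion
heterodimers_per_spike = 3  # each spike is a trimer of E1-E2 heterodimers
e1_copies = e2_copies = spikes * heterodimers_per_spike

# Each E2 cytosolic tail contacts one CP, so the counts must match 1:1.
assert capsid_copies == e1_copies == e2_copies == 240

# Two-module arrangement of the 11.8-kb genome (gene order only, no coordinates):
genome_modules = {
    "5' two-thirds (non-structural ORF)": ["nsP1", "nsP2", "nsP3", "nsP4"],
    "3' one-third (structural ORF)": ["CP", "E3", "E2", "6K", "E1"],
}
print(capsid_copies, e1_copies)  # 240 240
```

The 60·T rule for icosahedral lattices ties the stated T=4 symmetry to the 240 CP copies, which in turn match the 240 E2 tails available for CP contacts.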
Until recently, the cellular receptor used by CHIKV and other arthritogenic alphaviruses was not known, but several pieces of evidence pointed to the use of glycosaminoglycans (Smit et al., 2002;Gardner et al., 2014;Weber et al., 2017;and reviewed in Solignat et al., 2009), T-cell immunoglobulin and mucin 1 (TIM-1) (Moller-Tank et al., 2013), other PtdSer-binding proteins, such as Axl and TIM-4 (Jemielity et al., 2013), and prohibitin (Wintachai et al., 2012) as adsorption factors. However, Zhang et al. (2018) demonstrated that CHIKV and other arthritogenic alphaviruses, such as Ross River virus (RRV) and Mayaro virus (MAYV), use Mxra8 (also known as DICAM, ASP, or Limitrin) as a cell receptor for virus entry. Mxra8 is an adhesion molecule of epithelial, myeloid, and mesenchymal cells with homology to the junctional adhesion molecule that serves as the receptor for reoviruses. The immunoglobulin domains A and B of CHIKV E2 bind to Mxra8, and this binding was necessary for CHIKV infection in mice. Interestingly, infection with the CHIKV ECSA strain La Réunion did not require Mxra8 for viral entry, which indicates that other, still unknown, molecules can function as CHIKV receptors. In addition, this observation demonstrates that different genotypes of CHIKV can adapt differently to the host, possibly leading to divergent outcomes of CHIKV disease.
Even though several studies have pointed out that E2 acts in CHIKV binding to cell surface receptors, while E1 is the main protein factor involved in the intracellular process of virus entry, there is evidence for shared participation of the two proteins in viral entry and its subsequent events. First, like other alphaviruses, CHIKV can use endocytosis to enter a cell, in a pH-dependent process in clathrin-coated vesicles via receptor-mediated interaction (DeTulleo and Kirchhausen, 1998;Smith and Helenius, 2004;Kielian et al., 2010). In this scenario, after CHIKV enters cells via receptor-mediated endocytosis, the acidic endosomal environment results in irreversible conformational changes of the glycoproteins, followed by dissociation of the E2-E1 heterodimers and rearrangement of E1 into fusogenic homotrimers that induce fusion of the viral and endosomal membranes, allowing the release of the nucleocapsid into the cytosol (Voss et al., 2010). But the Old-World alphavirus title (Weaver et al., 1994) makes something very clear about CHIKV: the virus, its vectors, and its final hosts have been coevolving for a long time. Therefore, other pathways did not take long to be elucidated, such as the clathrin-independent, epidermal growth factor receptor substrate 15 (Eps15)-dependent pathway (Bernard et al., 2010), which also takes the virus particle into the endosome. A third pathway exploited by the virus to reach an acidic cell compartment is macropinocytosis, recently attributed to CHIKV, but an already well-established mechanism for other enveloped viruses, such as the Ebola virus (EBOV), and non-enveloped viruses, such as adenoviruses; the Rab GTPase- and phosphoinositide-dependent maturation of the macropinosome induces its fusion to endosomal compartments (Egami et al., 2014).
The low pH of the acidic milieu creates the proper microenvironment to induce conformational changes in the viral envelope, dissociating E1-E2 heterodimers and forming E1 homotrimers, allowing CHIKV fusion to the endosome membrane and release of the nucleocapsid into the target cell's cytosol where, as demonstrated for the Sindbis virus (SINV), uncoating of the viral genomic RNA is carried out by the association of the CP with ribosomes (Singh and Helenius, 1992).
Like other togaviruses, and owing to the particular arrangement of the alphavirus genomic RNA, following uncoating the CHIKV non-structural (ns) proteins are translated as the polyproteins P123 and P1234, with 1,857 and 2,475 amino acids, respectively. A well-conserved opal (UGA) stop codon is present at the C-terminus of nsP3 and determines the translation of P123, which contains the nsP1, nsP2, and nsP3 proteins. Readthrough of the opal stop codon leads to translation of the full-length P1234, which contains, in addition to nsP1-nsP3, the nsP4 protein, the viral RNA-dependent RNA polymerase (RdRp). The readthrough frequency of the opal stop codon, determined for SINV, is about 5-20% of genomic mRNA translation events; therefore, the stoichiometric concentration of nsP4 is 1/20 to 1/5 of that of the other non-structural proteins (Shirako and Strauss, 1994).
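The nsP4 dosage follows directly from the readthrough arithmetic above: if only 5-20% of translation events read through the opal codon, nsP4 accumulates at 1/20 to 1/5 the level of nsP1-3. A minimal sketch of this relationship (readthrough values are those reported for SINV in the text):

```python
# nsP4 stoichiometry as a function of opal stop-codon readthrough.
# Each initiation yields one copy of nsP1-3 (from either P123 or P1234);
# only readthrough events additionally yield nsP4 (from P1234).

def nsp4_ratio(readthrough_frequency: float) -> float:
    """Molar ratio of nsP4 to each of the other non-structural proteins."""
    return readthrough_frequency  # nsP1-3 are made once per initiation

for f in (0.05, 0.20):  # 5-20% readthrough reported for SINV
    print(f"readthrough {f:.0%} -> nsP4 : nsP1-3 = 1/{1 / nsp4_ratio(f):.0f}")
# readthrough 5% -> nsP4 : nsP1-3 = 1/20
# readthrough 20% -> nsP4 : nsP1-3 = 1/5
```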
Interestingly, some alphavirus isolates encode an amino acid residue in place of the opal stop codon. For instance, a SINV isolate presenting severe morbidity and mortality in mice encodes a cysteine at the opal stop codon position (Suthar et al., 2005), while in ONNV both arginine and the opal stop codon are present, and a viral fitness advantage and higher infectivity in the Anopheles gambiae mosquito vector are related to the presence of the opal stop codon (Myles et al., 2006). Deep-sequencing analyses of a Caribbean isolate of CHIKV (ECSA-derived IOL lineage) demonstrated the presence of both the opal stop codon and arginine at the end of the nsP3 coding region. Moderate disease was observed in mice infected with a Sri Lanka CHIKV isolate harboring an opal-stop-codon-to-arginine change; the Sri Lanka isolate shares high similarity with the Caribbean isolate, and the change did not alter viral replication kinetics (Jones et al., 2017). Collectively, these data suggest that the identification of viral determinants will contribute to a better understanding of CHIKV disease severity and prognosis, and of the epidemic potential of different viral strains.
The full-length P1234 is autocatalytically cleaved into nsP4 and P123; this early release of nsP4 has a simple biological explanation: the continuity of the cycle depends on fast replication of the viral genetic material. nsP1-4 form the replication complex (RC), which carries out the replication of the viral genomic RNA and the transcription of the genomic and subgenomic (26S) viral RNAs. The initial RC is formed by the uncleaved P123 plus nsP4 (P123-nsP4), which is targeted and anchored to the plasma membrane through the association of the nsP1 alpha-helical peptide and palmitoylated amino acids within P123. The association of the nsP1 membrane-binding domain with the plasma membrane induces bulb-shaped invaginations, called spherules, where viral RNA synthesis takes place (Figure 2). The negative-strand RNA bears the subgenomic promoter, a sequence of 21 nucleotides complementary to the junction region, 19 nucleotides upstream and two downstream of the subgenomic RNA initiation point. The subgenomic 26S RNA is identical in sequence to the 3′ one-third of the genomic RNA and serves as the template for structural protein synthesis. Like the genomic RNA, the subgenomic RNA is also capped and polyadenylated (Knipe et al., 2001). As P123 is cleaved into the final nsP1, nsP2, and nsP3 proteins, their association with nsP4 in a specific quaternary structure converts the RC into a positive-strand RNA replicase, which synthesizes the viral genomic and subgenomic RNAs.
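The junction-region arithmetic of the subgenomic promoter (21 nt on the negative-strand template, spanning 19 nt upstream and 2 nt downstream of the subgenomic initiation point) can be made concrete with a coordinate sketch; the initiation position used below is an arbitrary placeholder, not a real CHIKV coordinate:

```python
# Illustrative coordinates for the alphavirus subgenomic (26S) promoter on
# the negative-strand template. Only the 19 + 2 = 21 nt span comes from the
# text; sg_start is a hypothetical placeholder position.

sg_start = 1000                  # hypothetical subgenomic initiation point
upstream_nt, downstream_nt = 19, 2

# Counting the initiation nucleotide as position +1 of the two downstream
# positions, the promoter covers sg_start - 19 through sg_start + 1.
promoter = range(sg_start - upstream_nt, sg_start + downstream_nt)
assert len(promoter) == 21       # 19 upstream + 2 downstream = 21 nt promoter
print(min(promoter), max(promoter), len(promoter))  # 981 1001 21
```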
nsP1 is an initiation factor for negative-strand RNA synthesis and mediates RNA capping via its guanine-7-methyltransferase and guanylyltransferase enzymatic activities.
nsP2 works as an RNA helicase and the ns-polyprotein protease, and recognizes the subgenomic RNA promoter.
nsP3 acts as a replicase unit and also as an accessory protein involved in RNA synthesis, recruiting several host-cell factors that participate in and optimize viral replication. The nsP3 hypervariable domain (HVD), at the C-terminus, binds the Ras-GAP SH3 domain-binding protein (G3BP) family to promote replication of several alphaviruses. This binding is particularly critical for CHIKV and is, in part, related to the capacity of the virus to inhibit stress granule formation (Kim et al., 2016;Meshram et al., 2018). In this sense, binding of the nsP3 HVD to members of the fragile X syndrome-related (FXR) protein family also plays a role in alphavirus replication. Beyond a role in avoiding the formation of stress granules, binding of nsP3 to these proteins is also important to promote viral RNA synthesis by facilitating the assembly of RC complexes. Different studies have shown that, for several alphaviruses, nsP3 binding to these family members is virus-specific and also cell type-specific, presenting a high level of redundancy. For CHIKV, however, the binding of host factors from the different families is not redundant (Kim et al., 2016;Meshram et al., 2018), pointing to a critical role of this replication step in CHIKV-host coevolution.
More recently, two other cellular factors binding to the HVD of nsP3 were implicated in promoting virus replication and permissiveness to CHIKV infection. The host DHX9 DExH-box helicase is a DNA/RNA helicase that has been shown to participate in the replication of diverse positive-strand RNA viruses (Picornaviridae, Arteriviridae, Flaviviridae -Pestivirus genus, and Retroviridae -HIV-1). Matkovic et al. (2019) showed that the nsP3 HVD binds DHX9 and redirects this protein from the nucleus to discrete cytoplasmic puncta to increase CHIKV genomic RNA translation early in the viral infectious cycle. Further, they demonstrated that CHIKV nsP2 also binds DHX9 and targets it for proteasomal degradation, a step critical for the switch from genomic RNA translation to replication (Matkovic et al., 2019).
Four-and-a-half LIM domain 1 (FHL1) is a cellular protein that has recently been implicated as a host factor promoting CHIKV tropism. This protein has three distinct spliced isoforms in human cells (1A, 1B, and 1C): 1A is abundantly expressed in skeletal muscle and fibroblasts, while 1B and 1C are present in muscle, brain, and testis. Meertens et al. (2019) demonstrated that FHL1 binds to the nsP3 HVD and promotes CHIKV replication. This host factor was also important for the Old World alphavirus ONNV, while it had no impact on the replication of the alphaviruses MAYV, SINV, and Semliki Forest virus (SFV). Primary cells from patients with FHL1 deficiency were resistant to CHIKV infection, highlighting the importance of this cellular factor in promoting the skeletal muscle and fibroblast tropism of CHIKV and viral pathogenesis. Strikingly, dependence on this factor was demonstrated for all CHIKV strains except the Western African lineage, reinforcing the hypothesis that the success of emergent and re-emergent CHIKV strains in spreading and establishing in the human population and in mosquito vectors will be determined by the interactions of different host factors with the viral proteins.
Collectively, these new findings help to expand the model of CHIKV replication: after release of the viral capsid into the target cell cytoplasm, uncoating of the genomic RNA is followed by translation of the P123 and P1234 non-structural precursors, facilitated by the host DHX9 helicase. The initial RC formed by P123 and nsP4 then associates with the incoming genomic RNA, and the complex is targeted to the plasma membrane by the nsP1 portion of P123. The G3BP and FXR factors associate with the RC at this very early stage to avoid genomic RNA degradation. DHX9 degradation by the viral nsP2 is critical to the switch from translation to viral replication. Once the first double-stranded RNA replication intermediates are synthesized, they are isolated into the membrane spherules, amplifying these processes. Newly synthesized positive-strand genomic RNAs exit the membrane spherules and are translated in close proximity to the plasma membrane, forming new RCs which, through binding of G3BP, FXR, and possibly FHL1, oligomerize and increase the formation of new RCs to amplify the amount of viral genomic RNA within the infected cell early in infection.

FIGURE 2 | (2) Once inside the endosome, the acidic environment leads to conformational rearrangement of the glycoproteins, followed by dissociation of E2-E1 heterodimers and E1 rearrangement into fusogenic homotrimers that induce fusion of the viral and endosomal membranes, allowing the release of the nucleocapsid into the cytosol. (3) Following uncoating and genomic RNA release, the non-structural proteins are translated as the polyproteins P123 and P1234. (4) A replicative complex (RC) formed by uncleaved P123 plus nsP4, the genomic RNA, and several host factors is targeted and anchored at the plasma membrane, inducing bulb-shaped invaginations, known as spherules, where RNA synthesis will occur; dsRNA indicates the viral replicative intermediate. As nsP1-3 associate with nsP4 in a specific quaternary structure, the RC is converted into a positive-strand RNA replicase, which synthesizes the viral genomic and subgenomic RNAs; spherules are internalized and shape functional large cytopathic vacuoles bearing multiple spherules. (5) The subgenomic RNA (26S) is translated, producing the structural polyprotein. (6) E1 and E2-E3 (pE2) are translocated into the ER and go through post-translational maturation and glycosylation. (7) Capsid autoproteolysis releases free capsid into the cytoplasm, which interacts with genomic RNA, giving origin to the nucleocapsid. (8) The viruses bud out of infected cells through the cell membrane in a pH- and temperature-dependent process. (9) CHIKV replication induces ER stress and activates the Unfolded Protein Response (UPR); by non-elucidated mechanisms, CHIKV infection also results in oxidative stress, generating Reactive Oxygen Species (ROS) and Reactive Nitrogen Species (RNS). (10) Both ER and oxidative stress can trigger autophagy, a pro-survival signal, in an attempt to preserve cell viability. When CHIKV capsid is produced in the cytoplasm, it can be ubiquitinated and sequestered by the adaptor protein SQSTM1/p62 into autophagosomes, leading to capsid degradation in the autophagolysosome. (11) CHIKV is able to trigger the NLRP3 inflammasome, starting a signaling cascade that culminates in activation of caspase 1, which cleaves pro-IL-1β and pro-IL-18, generating mature cytokines that will elicit adaptive responses but can also contribute to pathological inflammatory events such as edema and arthritic disease symptoms.
Studies of SINV and SFV suggest highly dynamic spherule internalization through phosphatidylinositol 3-kinase (PI3K)-activated endocytosis, actin- and myosin-dependent transport, and fusion with late endosomes (Spuul et al., 2010), leading to the formation of the so-called large cytopathic vacuoles (CPV-1) (Figure 2).
Subgenomic viral RNAs exiting from CPV-1 are immediately translated in close proximity to the endoplasmic reticulum (ER) to produce the viral structural polyprotein. A signal peptide at the C-terminus of the CP leads to translocation of the polyprotein across the ER membrane. Through proteolytic processing, it gives rise to the intermediate proteins CP, p62, 6K or 6K/TF, and E1. A further stage of proteolysis, hijacking cellular proteases, yields the final structural proteins: CP, E2, E3, 6K or 6K/TF, and E1 (Aliperti and Schlesinger, 1978; Kääriäinen and Ahola, 2002; Melton et al., 2002; Ramsey and Mukhopadhyay, 2017). Alphavirus capsid proteins are multifunctional and have intrinsic protease activity; CP is thus autocleaved out of the structural precursor protein by its serine protease activity. In CHIKV, the CP N-terminus is unstructured and contains the RNA-binding domain, whereas the C-terminal globular domain harbors the serine-histidine-aspartic acid protease domain. CP remains in the cytosol for formation of the viral nucleocapsid.
The glycoprotein E1 has only one transmembrane domain, while E2 has two transmembrane domains. Both undergo a post-translational process of maturation and glycosylation and are exported in vesicles, hijacking the cellular secretory machinery, to the plasma membrane.
The glycoprotein E3 is translated right after the capsid protein; it aids, together with cellular chaperones, in the proper folding of E2 and E1, and has a signal sequence that directs the remainder of the polyprotein to the ER membranes. It remains associated with E2 (which is why both are called pE2 at this stage) until reaching the trans-Golgi, where the cellular furin protease cleaves pE2 into E2 and E3, rendering the "spike" functional.
The 6K protein is a small hydrophobic protein that joins the E2 and E1 parts of the polyprotein, allowing proper envelope processing. It also participates in membrane permeabilization, virus assembly, and budding. An additional protein, an extension of the 6K N-terminus, is also synthesized during alphavirus infection. It results from a −1 frameshift event 40 nucleotides before the beginning of the E1 glycoprotein, leading to the formation of a truncated structural precursor, as described above (Firth et al., 2008; Snyder et al., 2013). This frameshift occurs at a frequency of 10-18% during subgenomic RNA translation. The resulting 8 kDa TF protein is incorporated into viral particles and probably participates in viral assembly.
The newly formed virus particles bud out from infected cells through the cell membrane in a pH- and temperature-dependent process, which requires a temperature close to physiological (∼36 °C) and a neutral or slightly alkaline pH (Lu and Kielian, 2000). There are other mandatory requirements for exporting viral particles, such as the connection between the capsid and E2 (Suomalainen et al., 1992), the heterodimerization between E1 and E2 (Sjöberg and Garoff, 2003), and interactions between viral structures and host-cell factors: Arf1 and Rac1 assist the stabilization of E2/E1-containing cytopathic vacuoles type II, which are trafficked by actin filaments whose accumulation and elongation E2 apparently induces through a mechanism involving Rac1, Arp3, and PIP5K1, all constitutive cellular factors (Radoshitzky et al., 2016). Figure 2 summarizes the major features of the CHIKV replication cycle.
VIRUS-CELL INTERACTION CHIKV Infection and Host and Virus Transcriptional and Translational Regulation
Transcriptional shutoff during CHIKV infection impairs the cellular response to viral replication and avoids the establishment of an antiviral state. CHIKV nsP2 mediates degradation of RPB1, the catalytic subunit of cellular RNA polymerase II, resulting in transcriptional shutoff, cytopathic effect, and reduced IFN-β production. Thus, nsP2 expression is cytotoxic and suppresses both cytokine production and activation of interferon-stimulated genes (ISGs) in infected cells (Akhrymuk et al., 2019).
CHIKV infection also results in the shutoff of host cell protein synthesis, whereas viral proteins continue to be synthesized. The host cell shutoff results from phosphorylation of eukaryotic translation initiation factor 2α (eIF2α) (White et al., 2011). Phosphorylation of eIF2α disables the ternary complex essential for cap-dependent translation initiation. How CHIKV infection results in eIF2α phosphorylation remains unclear: although infection increases activation of the double-stranded RNA-dependent protein kinase (PKR), eIF2α phosphorylation also occurs independently of PKR (White et al., 2011).
Moreover, CHIKV modulates protein synthesis by interfering with mTOR activation. Joubert et al. (2015) demonstrated that during the first 24 h of infection, mTOR and S6K phosphorylation is reduced, which directly impacts host cell protein synthesis. Low mTORC1 activity is associated with phosphorylation of AMP-activated protein kinase (p-AMPK), an energy-sensing enzyme, followed by activation of TSC2, which inhibits mTOR phosphorylation (Joubert et al., 2012). Inhibition of mTOR complex 1 (mTORC1) increases CHIKV production, an effect independent of IFN-I production and autophagy induction. To bypass the deleterious effect of mTORC1 inhibition on cap-dependent mRNA translation, CHIKV protein synthesis is mediated via the Mnk/eIF4E pathway (Joubert et al., 2015). Interestingly, mTORC1 inhibition also increases SINV infection but has no effect on influenza A infection (a member of the Orthomyxoviridae family), suggesting that different viruses developed singular strategies to modulate mTORC1 activity (Joubert et al., 2015).
mTOR acts mainly within the PI3K-AKT-mTOR pathway. Thaa et al. (2015) demonstrated that CHIKV infection induces AKT serine 473 phosphorylation but has no effect on phosphorylation of S6, one of the downstream targets of the PI3K-AKT-mTOR pathway. AKT phosphorylation by CHIKV is lower than with other alphaviruses such as SFV: SFV nsP3 triggers strong AKT activation, which is associated with RC internalization, whereas replication complexes were broadly localized at the cell periphery in CHIKV infection (Thaa et al., 2015). It remains to be elucidated how different CHIKV strains impact both AKT activation and mTOR modulation; different alphaviruses modulate the PI3K-AKT-mTOR pathway in specific manners associated with particular virus replication features.
CHIKV, Autophagy, and Oxidative Stress
Macroautophagy, referred herein as autophagy, is a homeostatic process conserved in eukaryotes that recycle cargo proteins and organelles through lysosomal degradation by their selective sequestration inside double-membrane vesicles, known as autophagosome (Yang and Klionsky, 2010). It is also described as a cytoprotective process with important roles in immunity response against sterile and infection-associated inflammation, including viral infection (Deretic and Levine, 2018).
Despite its relevance to the immune response against infections, autophagy may play both antiviral and proviral roles. For instance, some viruses are able to subvert the autophagy machinery to their own advantage, a process that has been investigated for alphaviruses (Liang et al., 1998; Orvedahl et al., 2007, 2010; Eng et al., 2012; Joubert et al., 2012). The role of autophagy during CHIKV infection is still controversial and can diverge according to the cell type used to replicate CHIKV.
Oxidative stress is an important mechanism to fight pathogens. It occurs due to a dysregulation of redox control, caused by increased levels of reactive oxygen species (ROS) and reactive nitrogen species (RNS) and/or a reduction in the antioxidant defense system (Jones, 2006; Cataldi, 2010). Free oxidative species are able to initiate autophagy and can also lead to cell death during strong and prolonged stimulation (Djavaheri-Mergny et al., 2007; Filomeni et al., 2010). Joubert et al. (2015) assessed the capacity of CHIKV to induce ROS and RNS, observing in murine embryonic fibroblast (MEF) cells that CHIKV infection led to increased production of both ROS and NO. In addition, they demonstrated that CHIKV-induced autophagy in these cells was mediated by independent induction of endoplasmic reticulum (ER) and oxidative stress pathways, delaying cell death by apoptosis through induction of the IRE1α-XBP1 pathway concomitantly with ROS-mediated AMPK activation and mTOR inhibition. Consequently, treatment with N-acetyl-L-cysteine, a potent antioxidant, reduced CHIKV-induced autophagy, observed as a decrease in LC3 puncta in these cells (Joubert et al., 2012). Therefore, CHIKV infection can induce endoplasmic reticulum and oxidative stress at the early stages of infection to trigger autophagy (Figure 2).
Interestingly, during the late stages of viral replication in MEF cells, autophagy is suppressed concomitantly with enhanced cell death by apoptosis, favoring viral release and spread (Joubert et al., 2012), showing a time-dependent pattern of autophagy regulation by CHIKV infection.
In human epithelial adenocarcinoma (HeLa) cells, CHIKV infection can regulate autophagy through the interaction between viral proteins and the autophagic receptors sequestosome 1/p62 (SQSTM1/p62) and calcium-binding and coiled-coil domain-containing protein 2/nuclear dot 10 protein 52, known as NDP52. Both proteins interact with cargo proteins and with LC3, directing autophagy targets to autophagosomes (Judith et al., 2013). SQSTM1/p62 was shown to protect CHIKV-infected human cells from death by binding ubiquitinated viral capsid and targeting it for lysosomal degradation (Figure 2). Moreover, CHIKV infection in certain cell types leads to robust SQSTM1/p62 degradation. In contrast, NDP52, but not its murine ortholog, has been described to interact with the viral protein nsP2, promoting viral replication (Judith et al., 2013). Therefore, during CHIKV infection autophagy can be regulated in different ways, playing pro- or antiviral roles according to the stage of the replication cycle and the cell type, which can be crucial for infection progression and virus spread.
CHIKV and the Endoplasmic Reticulum Stress
The ER is an essential cellular membrane organelle with a dynamic structure that plays important roles in many cellular processes, including protein synthesis, folding and secretion, calcium homeostasis, lipid production, and the transport of cellular components. The ER plays an essential role in the replication of several viruses, including viral entry, assembly, protein synthesis, and genome replication. Massive viral replication can disturb the protein-folding machinery, disrupting ER homeostasis and culminating in ER stress (Liu and Kaufman, 2003; He, 2006; Inoue and Tsai, 2013; Jheng et al., 2014). ER stress activates an evolutionarily conserved prosurvival pathway, termed the unfolded protein response (UPR), that acts to maintain ER homeostasis. The UPR has three main mechanisms to restore adequate ER function: (1) inhibition of protein synthesis, (2) induction of chaperone-family genes necessary for protein folding, and (3) elimination of misfolded or unfolded proteins by activation of the ER-associated protein degradation (ERAD) pathway (Malhotra and Kaufman, 2007; Hetz et al., 2011).
In mammalian cells, the three main branches of the UPR are the protein kinase-like ER-resident kinase (PERK), the activating transcription factor 6 (ATF6), and the inositol-requiring enzyme 1 (IRE1). These proteins are associated with the ER chaperone BiP/Grp78. When improperly folded proteins accumulate in the ER lumen, BiP/Grp78 dissociates from these three transmembrane signaling proteins, resulting in activation and initiation of the UPR pathway. Activated PERK phosphorylates eIF2α at Ser51, decreasing the load of proteins entering the ER lumen by blocking general protein translation. Activated ATF6 is a transcription factor that increases the transcription of a number of ER chaperones, the X box-binding protein 1 (XBP1), and other transcription factors. Activation of IRE1 results in IRE1-mediated splicing of the XBP1 mRNA, which activates the expression of downstream genes such as chaperones and other proteins involved in protein degradation (Yoshida et al., 2001; Harding et al., 2002; Vattem and Wek, 2004; Jheng et al., 2014).
Beyond triggering ER stress and the UPR, viruses have evolved different strategies to subvert these cellular responses for their own benefit, e.g., enhancing replication, persisting in infected cells, and evading immune responses, as described for several viral families, such as Flavi-, Herpes-, and Togaviridae (reviewed by Ambrose and Mackenzie, 2011; Green et al., 2014; Li et al., 2015).
CHIKV infection results in activation of the UPR pathway in different cell lines. However, results from different groups are discordant and may reflect cell specificity of UPR activation. Fros et al. (2015) showed that in Vero cells, expression of the CHIKV envelope proteins alone can induce the UPR through upregulation of ATF4 and GRP78/BiP, while CHIKV-infected Vero cells and an adult wild-type mouse model of CHIKV arthritis showed only partial induction of XBP1. Furthermore, the authors demonstrated that individual expression of the CHIKV non-structural protein nsP2 was sufficient to inhibit the UPR pathway (Fros et al., 2015). In contrast, CHIKV infection of HEK293 cells activated the ATF6 branch of the UPR, but not the IRE1 or PERK pathways. In these cells, CHIKV infection blocked eIF2α phosphorylation even in the presence of pharmacological activation of the UPR by thapsigargin and tunicamycin, and nsP4 was shown to be sufficient to inhibit phosphorylation of eIF2α (Rathore et al., 2013).
ER stress, autophagy, and apoptosis in response to CHIKV infection were also investigated in HeLa and HepG2 cells, with distinct results. In HeLa cells, CHIKV infection activated the PERK branch of the UPR, with consequent eIF2α phosphorylation (Khongwichit et al., 2016). In contrast, Joubert et al. (2012) observed activation of the UPR in HeLa cells through IRE1-mediated splicing of XBP1 during CHIKV infection; the ATF6 branch was also activated in these cells. In HepG2 cells, IRE1 activation was strong, whereas activation of PERK and ATF6 was less pronounced and only a low level of eIF2α phosphorylation was observed. In both cell lines, the downstream protein CHOP, which is involved in apoptosis signaling, was also upregulated (Khongwichit et al., 2016).
Moreover, silencing of IRE1 during CHIKV infection of HeLa cells leads to fewer CHIKV-induced autophagosomes. Apparently, CHIKV-induced autophagy depends on triggering of both the oxidative stress and UPR pathways. These data reinforce the idea that the ER could serve as a subcellular platform for autophagy initiation: UPR and autophagy signaling are interconnected, and the two pathways crosstalk to modulate cell survival or death by apoptosis (Bernales et al., 2006; Axe et al., 2008; Joubert et al., 2012).
Data regarding ER stress and the UPR during CHIKV infection, although apparently conflicting, indicate that CHIKV infection can elicit distinct interactions with the cell machinery depending on the cell type and possibly the viral strain analyzed. These data underscore the need to further investigate the role of the UPR in cell lines closely resembling the cells naturally infected by CHIKV, such as epithelial cells, skin fibroblasts, and muscular and endothelial cells. Furthermore, mouse models of infection can also help to determine the relevance of UPR signaling to CHIKV replication and pathogenesis.
CHIKV and the Inflammasome
Inflammasomes are cytosolic molecular complexes that initiate inflammatory responses upon detection of pathogens, cellular damage, or environmental irritants by pattern recognition receptors (PRRs). Upon activation, the inflammasome is assembled and activates caspase 1, which cleaves the proinflammatory cytokines pro-interleukin-1β (pro-IL-1β) and pro-interleukin-18 (pro-IL-18), resulting in proteolytic maturation and secretion of the active forms of these cytokines (IL-1β and IL-18, respectively). These signaling cascades lead to a type of programmed cell death known as pyroptosis, which is inherently inflammatory and characterized by caspase 1-dependent formation of plasma membrane pores leading to ion fluxes, culminating in rupture of the cytoplasmic membrane and subsequent release of intracellular content to control microbial infections (Martinon et al., 2002; Bergsbaken et al., 2009; Conforti-Andreoni et al., 2011; Figure 2).
In viral infections, the inflammasome can amplify the sensing of viral nucleic acids (RNA or DNA). Although inflammasome signaling and activity are supposed to resolve the infection and promote homeostasis, high levels of inflammasome-triggered proinflammatory cytokines have been associated with inflammation and pathogenesis of several viral, bacterial, and autoimmune diseases, as well as cancer (Davis et al., 2011; McAuley et al., 2013; Negash et al., 2013; Wikan et al., 2014; Olcum et al., 2020).
The role of the inflammasome in CHIKV replication and pathogenesis has been poorly explored. One study, from Ekchariyawat et al. (2015), demonstrated that CHIKV infection could trigger inflammasome signaling in human dermal fibroblasts, culminating in activation of caspase 1 and increased IL-1β expression and maturation, as well as induction of the inflammasome sensor AIM2, although AIM2 has been implicated only in recognition of dsDNA. In the absence of inflammasome assembly (through caspase 1 silencing), CHIKV replication rates were enhanced (Ekchariyawat et al., 2015). Moreover, ASC2 and NLRP3 expression, as well as IFN-β and some ISGs, were upregulated in CHIKV-infected fibroblasts.
More recently, Chen and colleagues showed that the NLRP3 inflammasome is activated in humans and mice. Expression of NLRP3, ASC, and caspase 1 was 100-fold enhanced in PBMCs from a cohort of CHIKV-infected patients; IL-18 and IL-1β mRNA levels were also increased in these patients in the acute phase of CHIKF (Chen et al., 2017). In a mouse model of CHIKV-induced inflammation, following subcutaneous inoculation of an ECSA CHIKV strain isolated in La Réunion (LR2006-OPY1), microarray gene analysis revealed increased expression of NLRP3, NLRP1, NLRC4, IL-1β- and IL-18-binding protein, caspase 1, IL-18 receptor, and IL-18 receptor accessory protein, with high expression coinciding with the peak of inflammatory arthritic disease symptoms (Chen et al., 2017). Furthermore, using a molecule that inhibits activation of the NLRP3 inflammasome, the group observed substantial improvement of arthritic symptoms, with reduced inflammation, myositis, and osteoclastic bone loss, although overall viral replication remained at the same levels. Also, in ASC−/− mice, foot swelling after CHIKV infection was less severe compared with wild-type mice. Taken together, these studies reveal the relevance of the inflammasome in CHIKV infection, highlighting its role in the pathology of arthritic disease and inflammation, and open the possibility of therapeutic strategies targeting the inflammasome pathway to ameliorate arthritic symptoms.
CHIKV Pathogenesis
Dermal fibroblasts are the primary targets and the main sites of CHIKV replication (Sourisseau et al., 2007; Ekchariyawat et al., 2015), but other skin cells are also susceptible, such as keratinocytes and melanocytes (Gasque and Jaffar-Bandjee, 2015). From the skin, the virus migrates via the lymphatic circulation to the nearest lymph node, reaching the bloodstream, where it infects mostly monocyte-derived macrophages (Sourisseau et al., 2007). In a non-human primate (NHP) model, CHIKV migration was demonstrated by the presence of CD68+ macrophages positive for CHIKV antigen trafficking to lymphoid tissue and the spleen from early timepoints up to 3 months after infection (Labadie et al., 2010). From the blood, the virus reaches joints, muscles, and bones, the sites most linked to the chronic symptoms of the disease. Satellite cells of skeletal muscle are permissive to CHIKV infection and act as precursors of mature skeletal fibers; they therefore have an active and crucial role in maintaining tissue structure (Ozden et al., 2007) and, when infected, can constitute a site of viral persistence. Mature skeletal muscle fibers and primary myoblasts are also targeted by CHIKV (Couderc et al., 2008; Lohachanakul et al., 2015). In the joints, viral RNA and proteins were found during the acute and chronic phases of the infection; macrophages, primary human chondroblasts, and fibroblasts from synovial tissues are susceptible to CHIKV infection, with synovial macrophages being the main site of CHIKV persistence (Hoarau et al., 2010; Zhang et al., 2018). The bones closest to the joints are also targets of infection, since primary human osteoblasts are permissive to CHIKV (Chen et al., 2015). These preferred viral targets are, not coincidentally, linked to the most commonly observed clinical manifestations.
The appearance of unusual clinical manifestations, affecting the central nervous, cardiovascular, respiratory, digestive, hematopoietic, and renal systems, is due to the presence of cells vital to local homeostasis that are also susceptible to CHIKV infection.
The Immune Response at Acute Phase of Infection
The type I interferon (IFN) response is an early innate immune mechanism that elicits antiviral responses and activates components of the innate and adaptive immune systems. IFNs are quickly induced after recognition of viruses by host pattern recognition receptors (PRRs), mainly Toll-like receptors (TLRs) and cytosolic receptors such as retinoic acid-inducible gene-I (RIG-I) and melanoma differentiation-associated gene 5 (MDA5) (Thon-Hon et al., 2012; Jang et al., 2015). After recognition of their respective ligands (double-stranded [ds] RNA for RIG-I and MDA5), the mitochondrial antiviral-signaling protein (MAVS) is activated via CARD-CARD interactions, through domains present in both MAVS and the cytosolic receptors. TBK1 is then activated by MAVS and phosphorylates the interferon regulatory factor 3 (IRF-3), which dimerizes and translocates into the nucleus. This signaling pathway induces the production of type I IFNs through activation of the IFN-α/β promoter. IFNs are secreted and act in autocrine and paracrine ways; activation of the interferon-α/β receptor (IFNAR) triggers a signaling cascade that culminates in the expression of ISGs, which enhance viral recognition and interfere with several steps of the viral cycle (Platanias, 2005; Hu et al., 2018).
The role of IFNs in CHIKV pathogenesis is well known. Viral replication is controlled by IFNs in cells, and mice lacking IFNAR show extensive viral dissemination associated with high mortality rates (Schilte et al., 2010; Suhrbier et al., 2012). In cynomolgus macaques, infection with the isolate CHIKV-LR recapitulates common characteristics of the immune response, such as increased plasma levels of IFN-α/β, interleukin 6, and monocyte chemoattractant protein 1, correlating with peak viremia (Labadie et al., 2010). Additionally, in fibroblastic cell lines, CHIKV infection induces the expression of antiviral genes, such as IFN-α and RIG-I. Moreover, CHIKV is able to interfere with the nuclear translocation of phosphorylated STAT1, a transcription factor that promotes the expression of several ISGs (Thon-Hon et al., 2012). Cook et al. (2019) recently showed distinct but synergistic roles for IFN-α and IFN-β in controlling CHIKV replication and disease: while IFN-α acts in non-hematopoietic cell types, reducing replication and early dissemination of CHIKV, IFN-β has a substantial impact on pathogenesis, since it can limit neutrophil-mediated inflammation at the site of infection (Cook et al., 2019).
Recently, Bae et al. (2019), through a gene screening in HEK293T cells, reported that viral protein nsP2 and envelope glycoproteins E1 and E2 are strong antagonists of the IFNβ signaling pathway. Triggering of IFN response, although a common feature of RNA viruses, can vary in amplitude and intensity depending on the virus species and even different genotypes and/or strains from the same species. The characterization of IFN response during the infection of the CHIKV isolates related to the most recent epidemics in Latin America will allow us to understand the pathogenic potential of these viruses.
Natural killer (NK) cells are at the front line of controlling virus replication upon stimulation by IFN-I. Like other viruses, CHIKV is able to induce in NK cells the activation of a phenotype rarely seen in healthy individuals: these cells have the NKG2C receptor activated, which makes them highly cytotoxic, leading to lysis of infected cells (Petitdemange et al., 2011).
Antibodies and CD8 + T cells are key players in adaptive immune responses. Activation and expansion of CD8 + T cells during the first days of infection, followed by a switch to CD4 + T cells, has been shown, but the exact role of T cells in CHIKV infection remains uncertain. In mice, CD8 + T cells were recruited to the musculoskeletal tissue in the first week of infection, which could be one of the reasons for the increased levels of IFN-γ. These cells may also be linked, among other mechanisms described above, with control of viral replication in the acute phase, since there is an increase in perforins, granzymes, and proteins linked to degranulation of CD8 + T cells, which would culminate in apoptosis of infected cells (Dias et al., 2018).
Regarding antibodies, anti-CHIKV antibodies are fully capable of offering protection even in the first days of infection, since IgM is detected as early as 2-3 days after the appearance of symptoms (Litzba et al., 2008). The antibody-mediated response suppresses the spread of the virus, either by direct neutralization or by activation of the complement system. In a study with rhesus macaques comparing the CHIKV strains La Réunion (CHIKV-LR) and Western Africa 37997 (CHIKV-37997), T-cell and antibody responses were more robust in animals infected with LR than with 37997 (Messaoudi et al., 2013). A different study showed that 90% of the antibody response against CHIKV was mediated by IgM within the first 9 days of infection in cynomolgus macaques inoculated with CHIKV-LR (Kam et al., 2014).
Immune Response at the Chronic Phase of Infection
Chronification of the infection usually leads to continuous inflammation of the joints. This inflammation can be immune-mediated by several elements that, a priori, could be allies in fighting infection; NK cells, for example, can infiltrate synovial tissues and maintain an inflammatory environment conducive to arthralgia. However, NK cells associated with the chronic phase of the disease have reduced expression of cytolytic mediators, such as perforin, and increased expression of IFN-γ and TNF-α, pro-inflammatory components that can contribute to the establishment of a highly inflamed environment in the joints (Thanapati et al., 2017).
The CHIKV-Induced Disease
Usual Clinical Manifestations of CHIKF
Arthritis and arthralgia
CHIKV, among other mosquito-transmitted alphaviruses such as RRV, Barmah Forest virus (BFV), and MAYV, can cause debilitating pain and inflammation of joints in humans (Staples et al., 2009), leading to the severe and debilitating rheumatic symptoms experienced by most infected individuals, with a negative impact on everyday activities (Ross, 1956). For this reason, epidemiological studies established unusually severe joint pain as the distinguishing and most common feature of CHIKV infection (Brighton et al., 1983; Powers and Logue, 2007). The severe pain starts in the acute phase of infection, affecting both peripheral and large joints, and becomes chronic, typically lasting from weeks to months (Queyriaux et al., 2008; Vijayakumar et al., 2011). In 25-42% of infections, inflammation-related affections, such as joint effusions, redness, and warmth, can be observed. These joint symptoms are usually polyarticular, bilateral, and symmetrical, and can fluctuate, but the anatomical location does not usually change (Deller and Russell, 1968; Queyriaux et al., 2008; Simon et al., 2011; Vijayakumar et al., 2011).
Fever
One of the most common symptoms of the acute phase of infection is an abrupt onset of fever, coincident with viremia and polyarthralgia, reaching 40 °C in some cases and resulting in chills and rigors (Simon et al., 2011). Fever, in addition to lasting from several days to 2 weeks, is also typically biphasic (with a remission period of 1-6 days) (Halstead et al., 1969; Thiberville et al., 2013): an early elevation in body temperature is followed by a later one, caused by a dynamic balance between exogenous and endogenous pyrogens and prostaglandins.
Myalgia
Muscle pain, dissociated from inflammation (myositis), occurs in 46-59% of cases, mainly affecting the arms, thighs, and calves (Zim et al., 2013). It can be a confounding factor, since other arboviral diseases, such as dengue, can also cause myalgia (Kumar et al., 2017), one of the reasons why some researchers describe the clinical manifestations of CHIKV as a "dengue-like" disease, but with a particular articular tropism.
Dermatologic involvement
The most common cutaneous manifestation of CHIKF is a macular or maculopapular rash, distributed mainly on the extremities, trunk, and face and associated with severe pruritus (Shivakumar et al., 2007), observed in up to 50% of cases. In most cases, the lesions follow fever episodes, but they can also occur concomitantly, since both depend on viremia. They generally do not produce sequelae, but in some patients they induce pigmentary changes, mainly in the malar area of the face, with a predilection for the tip of the nose, but also seen on the extremities and trunk, as well as desquamation and xerosis (Prashant et al., 2009). Other, less common dermatological manifestations include erythema and swelling of the pinnae, mimicking the Milian ear sign of erysipelas; multiple aphthae, erosions, and cheilitis in the oral mucosa, all self-limited manifestations without sequelae except for hyperpigmentation of the hard palate in a few patients; and genital involvement, in the form of ulcers over the scrotum and base of the penile shaft in men and the labia majora in women. The infection can also flare up pre-existing psoriasis and lichen planus.
Other usual manifestations
Pain in the ligaments, headache, fatigue and severe tiredness, digestive symptoms (diarrhea, vomiting, gastrointestinal bleeding, nausea, or abdominal pain), red eyes, conjunctivitis, and lymphadenopathy have also been described, but only during the acute phase of infection (Economopoulou et al., 2009). Thus, the impact on the quality of life of affected people begins with the first symptoms and extends to the remission of polyarthralgia at the end of the chronic phase.
Unusual Manifestations of CHIKV Infection
Atypical manifestations of the infection, unlike the typical manifestations described above, depend mostly on an underlying disease, either already manifest and exacerbated by the infection or merely a predisposition in the affected individual, in which case CHIKV can trigger the onset of its clinical syndrome. Of note, the spread of new epidemic strains has the potential to induce new subsets of clinical manifestations.
Neurological complications
In both adults and children, the most prevalent neurological manifestation is encephalitis during the acute phase of the infection, usually manifesting less than 24 h after the sudden onset of high fever (Robin et al., 2008;Venkatesan et al., 2013). Although the manifestation of encephalitis in general is not related to the age of the patient, the incidence of CHIKV-associated encephalitis shows that individuals younger than 3 years old or older than 65 are more likely to develop the syndrome. Retrospective studies have made it possible to estimate a frequency of 8.6 cases per 100,000 CHIKV infections (Simon et al., 2007). Epileptic seizures, meningoencephalitis, meningeal irritation syndrome and Guillain-Barré syndrome have also been described, but these cases are considerably less frequent (Robin et al., 2008;Tournebize et al., 2009;Venkatesan et al., 2013;Gérardin et al., 2016); further studies still need to address whether the unprecedented epidemics of CHIKV infection on the South American continent were in fact accompanied by a higher frequency of severe and atypical clinical manifestations. Some reports, however, have already associated CHIKV infection with diverse neurological complications (Pereira et al., 2017;Mehta et al., 2018).
Cardiovascular manifestations
Heart failure was diagnosed in patients with acute infection during the 2005 La Réunion Island outbreak of chikungunya fever (Robin et al., 2008), but approximately 60% of the cases had a previous cardiovascular history, such as valvular or coronary disease. This scenario allows us to draw two conclusions: (1) 40% of infected patients had a failure in one of their most vital systems without first manifesting any symptoms involving it, which makes CHIKV infection a potential cardiovascular risk factor for otherwise healthy patients; and (2) the virus has a potentiating character, that is, it can be an unexpected factor in the prognosis of previously diagnosed cardiovascular diseases. Myocarditis after arbovirus infections has been described since 1972 (Menon et al., 2010), and it may be the main cause of other registered manifestations, which include ventricular and atrial gallops, tachycardia and tachypnea, blood pressure instability, chest pain, electrocardiographic (ECG) abnormalities, and acute myocardial infarction (Spodick, 1986;Dec et al., 1992;Touret et al., 2006).
Pregnancy risks and vertical transmission
Although there are reports of concomitance between infection and spontaneous abortions in the second trimester (Dreier et al., 2014), studies have failed to establish a direct relationship between prenatal obstetric complications and CHIKV infection. Regarding its symptoms, on the other hand, the management of infected pregnant women needs to be delicate, since classic high fever can lead to neural tube defects, congenital heart defects, and oral clefts when it occurs in the first trimester of pregnancy (Fritel et al., 2010), and when it occurs in the second and third trimesters, it can result in abrupt uterine contractions and abnormalities in the fetal heart rhythm, resulting in premature births or stillborn babies (Torres et al., 2016). When it comes to mother-to-child transmission, there is no evidence to sustain the antepartum or peripartum risk of fetal transplacental infection and infected newborns are linked only to the intrapartum transmission when the parturient has a positive viremia (Solanki et al., 2007;Gérardin et al., 2008;Sissoko et al., 2008).
Renal disorders
Acute pre-renal failure was reported in several cases, one-third of which occurred in patients with previous kidney disease (Robin et al., 2008). The condition is usually controlled by increasing the patient's blood volume through intravenous hydration, and the reported cases seem to have responded well to this therapeutic approach. There is only one reported case of a nephritic syndrome, which emerged during an outbreak of CHIKV in Delhi and resolved with full recovery (Lemant et al., 2008).
Deaths
CHIKV was long recognized as a non-lethal infection; however, during the 2005-2006 outbreak on Réunion Island, the greater number of patients with atypical manifestations of the infection also contributed to the increase in CHIKV-related deaths, with a mortality rate as high as 48% (Renault et al., 2007). Another study points to a significantly lower rate of approximately 10% (Economopoulou et al., 2009), but it also links all deaths to the aforementioned atypical manifestations. The major concern of analysts is that many deaths during epidemic periods were underreported by health professionals, which would make the infection's mortality rate higher than currently estimated for the disease.
As described above, several atypical manifestations of CHIKV were reported upon the recent reemergence and emergence of CHIKV worldwide. In La Réunion Island, clinical features that had never been associated with CHIKF were reported, such as pneumonia, diabetes, bullous dermatosis, toxic hepatitis, encephalitis or meningoencephalitis, myocarditis, and cardiorespiratory failure (Economopoulou et al., 2009). During the 2008 outbreak of CHIKF in South India, various cases of cutaneous manifestations, including vesiculobullous eruptions with significant morbidity in infants, were associated with CHIKV infections (Inamadar et al., 2008). The authors hypothesized that these novel manifestations could be associated with the circulating IOL strain of CHIKV. In French Guiana, the introduction of the CHIKV Asian strain was associated with severe forms of the disease, including cases of sepsis and Guillain-Barré syndrome (Bonifay et al., 2018). In Brazil, where the ECSA strain predominates, atypical neurological manifestations have been reported (Azevedo et al., 2018). Although it is still early to associate CHIKV infection severity with the introduction of different viral strains in susceptible populations, studies are needed in order to characterize the biological properties of different CHIKV strains.
DISCUSSION
The introduction of CHIKV within the human population is estimated to have occurred at the beginning of the 20th century; still, the high epidemic potential of this arbovirus was only truly appreciated after the large epidemics occurring from the first decade of the 21st century in Kenya, La Réunion Island, and the Caribbean. Strikingly, the CHIKV genotype responsible for these large epidemics was the ECSA-derived Indian Ocean Lineage. Mutations within the viral envelope glycoproteins that accounted for virus adaptability to Ae. albopictus are regarded as an important factor leading to massive virus dissemination in these regions. However, coincident with this unprecedented spread of CHIKV, descriptions of atypical clinical outcomes began to be reported. In Brazil, a CHIKV ECSA genotype, derived from an ancestral ECSA virus from Central Africa, was responsible for the large epidemic that occurred from 2015 to 2018 in several parts of the national territory, affecting at least 700,000 individuals. The severity of the symptoms and the morbidity of CHIKF still need to be accounted for, but there are reports of atypical cases of meningoencephalitis and other neurological complications in CHIKV-infected patients in Brazil.
Although the factors involved in the unprecedented dissemination of the ECSA-derived IOL could be due to viral determinants related to adaptability to the arthropod vector, as already demonstrated, other viral determinants, such as increased viral replication capacity and modulation of the host IFN response, that have the potential to increase virus pathogenicity cannot be excluded. In fact, as reviewed here, several studies conducted with the La Réunion CHIKV isolate CHIKV-LR demonstrated its higher capacity to induce disease symptoms and establish infection in immunocompetent murine models of infection when compared to other CHIKV genotypes. In immunocompromised murine models, however, these results were not reproducible, and the ECSA-derived IOL was not able to induce higher mortality rates than the other CHIKV genotypes. These data reinforce the importance of continuously studying CHIKV replication properties, host-cell interactions, and pathogenesis to comprehensively address the epidemic potential of different emerging and reemerging CHIKV genotypes.
The South-American ECSA genotype, on the other hand, does not harbor the vector-adapting mutations observed for the ECSA-derived IOL, and studies are urgently needed to understand the role of unique mutations observed throughout its genome for mosquito adaptability, virus replication, and pathogenesis. Characterization of viral determinants of disease severity and virus pathogenicity in this emerging ECSA-related genotype will help to predict the impact of future epidemics.
Likewise, the characterization of different genotypes of CHIKV in terms of replication capacity, virus-host interaction, and pathogenesis will be crucial to the development of the best vaccine strategy.
Nonetheless, a comprehensive analysis of the atypical CHIKF symptoms from the Brazilian outbreak of 2015 to 2019 is still lacking, since clinical data are scarce in the literature. The impact of CHIKV on the Brazilian population could be explained by the introduction of a new pathogen into a naïve population with a high probability of spreading, owing to the densely populated urban areas and the high density of the mosquito vectors. However, the number of CHIKV infection cases in Brazil, which was several orders of magnitude higher than in any other country of South America, was accompanied by the introduction of the ECSA strain, which substituted the Asian strain first introduced into the country, whereas in other regions of South America and in Central America the Asian strain was responsible for the outbreaks. Thus, one cannot rule out the contribution of specific viral factors to the behaviour of the epidemics in Brazil.
It is important to point out that the CHIKV introduction and epidemics in South America, and specifically in Brazil, occurred concomitantly with the epidemic of Zika virus and the ongoing outbreaks of Dengue virus. Co-infections may promote the onset of serious illness, such as that with neurological symptoms. The number of co-infection cases still needs to be fully addressed, but its impact on the clinical outcome of co-infected individuals needs to be anticipated.
Regarding virus-host interactions, it is clear that plenty of information on cellular processes is still a matter of debate, since, depending on the cell type or animal model, outcomes for the same question can be quite contradictory. A deep comprehension of the essential cellular processes that CHIKV can interfere with and alter for its own replication is a crucial task that researchers need to face and investigate. Thereby, results from new research in the field of host-virus interaction could bring new strategies to combat this threat and minimize its social, economic, and health burden, improving the quality of life of the affected population, alleviating symptoms, avoiding atypical complications, and preventing the establishment of viral persistence.
AUTHOR CONTRIBUTIONS
MC, MS, IC, GS, and SC wrote the review and revised the figures. PAC and VF wrote the review. LC wrote and revised the review and revised the figures. All the authors contributed to the article and approved the submitted version.
FUNDING
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. This work was also supported by FINEP, CNPq, and FAPERJ. PAC, IC, SC, and VF are the recipients of a CNPq fellowship. GS is the recipient of a CAPES fellowship.
Virological and molecular investigation on specimens from buffaloes suspected to be infected by buffalopox virus in Egypt 2019
In this study, skin lesions from buffaloes showing clinical signs of buffalopox infection were tested to isolate and identify the buffalopox virus (BPXV). Clinical examination of infected buffaloes was performed and visible clinical signs were recorded. Skin scabs from infected buffaloes were collected and used for virus isolation on embryonated chicken eggs (ECE) and tissue culture cell lines. The isolated BPXV was identified and characterized using the polymerase chain reaction (PCR). The infected buffaloes displayed fever, skin eruptions, enlargement of superficial lymph nodes, emaciation and a drop in milk yield. The ECE inoculated with the prepared skin scab samples showed clear raised white pock lesions on the chorioallantoic membrane (CAM). The inoculated tissue cultures (Vero and BHK cell lines) revealed a cytopathic effect (CPE) including rounding and clumping with cytoplasmic granulation and cluster formation. PCR for the BPXV-specific C18L gene, carried out on the virus-infected tissue cultures, produced 368 bp bands. Human infection with BPXV was also recorded. It was concluded that BPXV is circulating in Egyptian buffaloes, causing economic losses and infection in contact humans.
Introduction
Buffalopox, caused by buffalopox virus (BPXV), is a contagious viral disease that mainly infects buffaloes. It has been reported in young and old buffaloes and may also be transmitted by insects (Venkatesan et al., 2010a). The disease occurs in two clinical forms, mild and severe.
The incubation period of BPXV in buffaloes is 2 to 4 days (Ghosh et al., 1977). The disease may be associated with high morbidity (80%) (Borisevich et al., 2016), though it has a low mortality rate; its adverse effect on the productivity and working capacity of animals results in large economic losses (Singh et al., 2007a;Venkatesan et al., 2010b).
Characteristic clinical signs of BPXV infection include skin lesions on the udder, teats, thighs, hindquarters, inguinal region, base of the ears, over the parotid gland, and at the inner aspect and base of the eyes and ears (Sharma, 1934; Bhatia, 1936; Wariyar, 1937; Singh and Singh, 1967; Mallick and Dwivedi, 1982; Mallick, 1988). Severe forms show a generalized rash, and secondary bacterial otitis may be observed (Chandra et al., 1987; Ramakrishna and Ananthapadmanabham, 1957). CABI (2019) reported that BPXV infection is mostly recognised in lactating animals, in which severe infection is accompanied by mastitis and a reduction in milk yield (Singh et al., 2006a).
Buffalopox infection was first recorded in India in 1934. The known hosts of BPXV are the domestic buffalo (Bubalus bubalis), cow, guinea pig, suckling mouse and human (Goyal et al., 2013).
Clinical examination and specimen collection from infected animals are the first steps in the diagnosis of BPXV infection. Samples are examined under electron microscopy, and the virus is isolated from lesion scabs by inoculation on ECE and cell culture lines, plaque assay and neutralization test, and by PCR and partial genome sequencing (Eltom et al., 2020).
BPXV can be propagated in a wide range of cell cultures, such as chick embryo fibroblasts (CEF), pup kidney cells, Vero cells and baby hamster kidney (BHK) cells, showing a cytopathic effect (CPE) (Eltom et al., 2020). Scab samples were collected from infected buffaloes and then prepared and passaged on BHK-21 and Vero cells to isolate the BPX virus (Goraya et al., 2015;Yadav et al., 2010).
The PCR is a fast and sensitive method for the detection of orthopox viral DNA. Several gel-based PCR methods have been described for the detection of orthopox viral DNA (Meyer et al., 1993, 1997; Ropp et al., 1995; Balamurugan et al., 2009).
PCR for the BPXV C18L gene can differentiate BPXV from other members of OPV, particularly VACV, to confirm the diagnosis of BPXV (Singh et al., 2008). The BPXV C18L gene encodes the ankyrin repeat protein that determines the virus host range (Borisevich et al., 2016). Primers for the C18L gene amplify a 368 bp PCR product unique to BPXV (Eltom et al., 2020).
FAO/WHO Joint Experts identified BPXV as an important zoonotic disease (Eltom et al., 2020), as it infects humans, especially those in close contact with infected buffaloes. Typical pox lesions have been observed on animal handlers (Goraya et al., 2015;Borisevich et al., 2016). In humans, the virus causes lesions restricted to the hands, forehead, eyes, face, and buttocks, and also causes lymphadenopathy. Milking of infected buffalo is one of the major modes of transmission (Gurav et al., 2011). BPXV infection was also recorded in noncontact children, indicating its high virulence (Gore et al., 2020).
In Egypt, buffalopox disease was reported by Tantawi et al. (1976, 1977), who isolated four virus isolates in 1973 from an outbreak in Egyptian water buffaloes. There is a lack of data about BPXV in Egypt, which indicates the need to focus on its infection, its isolation using tissue culture and ECE, and its identification by PCR.
The goal of this study was to record the incidence of BPXV in Egypt and confirm the viral isolation and molecular characterization of the buffalo pox virus in Egypt as a means of controlling the disease using suitable diagnostic and prophylactic measures.
Materials and Methods

Animals
A total of six buffaloes with clinical signs of BPXV located in the Giza Governorate were investigated by measuring body temperature and examining the skin and superficial lymph nodes.
Clinical examinations, observations of lesions and complications for all examined buffaloes were carried out and recorded according to (Radostits et al., 2010). The BPXV clinically infected buffaloes, contact animals and infected humans are presented in Table 1.
Sample collection and preparation
Skin scabs from six clinically buffalopox virus infected buffaloes were collected and transferred to the laboratory under chilled conditions in transport medium -phosphate buffered saline (PBS) at pH 7.4 containing antibiotics. Samples were stored at -20°C for BPXV isolation.
For virus isolation, skin scab samples were minced separately using sterile scissors and forceps and then ground using sterile techniques with a pestle in a mortar containing sterile sand. Ten millilitres of PBS containing gentamycin
Virus isolation BPXV isolation on embryonated chicken eggs (ECEs)
Specific pathogen free (SPF) eggs were obtained from the production farm at Koum Osheim, El-Fayoum, Egypt. Eggs were kept in an incubator at 37°C with a humidity of 40-60%. Virus isolation was performed by inoculating the prepared sample supernatants onto the chorioallantoic membrane (CAM) of 9-11-day-old SPF embryonated chicken eggs (Prabhu et al., 2015). The ECEs were examined for seven days for the appearance of pock lesions specific to buffalopox virus.
BPXV isolation on tissue culture
Baby hamster kidney (BHK) cell line BHK cell line was obtained from the Central Laboratory for the Evaluation of Veterinary Biological Products (CLEVB) and propagated with Eagle's minimum essential medium (EMEM) and supplemented with 10% foetal bovine serum for virus isolation from the prepared samples supernatants (according to Goraya et al., 2015;Prabhu et al., 2015). A monolayer of BHK cell culture in a 75 cm 2 flask inoculated with 0.5 mL supernatant was inoculated into confluent cell cultures and fed with maintenance medium containing bovine calf serum. The infected flasks were incubated at 37°C and were observed daily for cytopathic effect appearance (CPE) for seven days.
VERO cell culture
African green monkey kidney (Vero) cells were obtained from CLEVB for virus isolation and maintained according to Yadav et al. (2010) and Goyal et al. (2013). A confluent monolayer of Vero cells in a 75 cm2 flask was inoculated with 0.5 mL of supernatant and fed with maintenance medium containing bovine calf serum. The infected flasks were incubated at 37°C and observed daily for the appearance of a cytopathic effect (CPE) for seven days.
PCR amplification
The extracted nucleic acid samples were amplified in separate PCR reactions using the C18L gene primer pair in a 25 µL reaction containing 12.5 µL EmeraldAmp Max PCR Master Mix (Takara, Japan), 1 µL of each primer at 20 pmol concentration, 4.5 µL water, and 6 µL DNA template. The reactions were performed in an Applied Biosystems 2720 thermal cycler according to Yadav et al. (2010).
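The stated reaction composition is simple arithmetic worth sanity-checking: the listed component volumes should sum to the 25 µL reaction total. A minimal sketch (component labels are abbreviated from the text):

```python
# Sketch: verify that the stated PCR reaction components sum to the
# 25 µL total reported in the text (all volumes in µL).
components = {
    "EmeraldAmp Max master mix": 12.5,
    "forward primer (20 pmol)": 1.0,
    "reverse primer (20 pmol)": 1.0,
    "water": 4.5,
    "DNA template": 6.0,
}

total = sum(components.values())
assert total == 25.0, f"reaction volume is {total} µL, expected 25 µL"
print(f"total reaction volume: {total} µL")
```

The same check scales linearly when preparing a master mix for several reactions (multiply each volume by the number of reactions plus an overage).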
Cycles of the PCR reaction
The PCR reaction cycles were carried out as previously described by Yadav et al. (2010).
Analysis of the PCR products
The PCR products were separated by electrophoresis on a 1.5% agarose gel (Applichem, Germany, GmbH) in 1x TBE buffer at room temperature at 5 V/cm. For gel analysis, 15 µL of product was loaded into each gel slot. A GeneRuler 100 bp DNA ladder (Fermentas, Thermo, Germany) was used to determine fragment sizes. The gel was photographed with a gel documentation system (Alpha Innotech, Biometra).
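Fragment sizes on such a gel are typically estimated by interpolating against the ladder, since migration distance is approximately linear in the logarithm of fragment length. A minimal sketch of this standard procedure; the migration distances below are hypothetical illustration values, not measurements from this study:

```python
import math

# Hypothetical calibration: ladder rung size (bp) -> migration distance (mm).
ladder = {100: 52.0, 200: 44.0, 300: 38.5, 400: 34.5, 500: 31.0}

def estimate_size(distance_mm, ladder):
    """Estimate fragment size by a least-squares log-linear fit to the ladder:
    log10(size) = a + b * distance."""
    xs = list(ladder.values())
    ys = [math.log10(bp) for bp in ladder]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return 10 ** (a + b * distance_mm)

# A band migrating between the 300 and 400 bp rungs is assigned an
# intermediate size by the fit.
print(round(estimate_size(36.0, ladder)), "bp")
```

In practice gel documentation software performs this interpolation automatically, but the underlying calibration is the same.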
Animal ethics approval
All procedures in this study met the ethics regulations of Cairo University-Institutional Animal Care and Use Committee (CU-IACUC), which granted the study approval number: Vet CU28/04/2021/309.
Results
Generalized skin lesions of BPXV, including erythema, papules, vesicles, pustules and scabs, were distributed all over the skin of the buffaloes in different stages, as illustrated in Figures 1-3.
Drop in milk production, progressive emaciation, and poor quality skin were also recorded in BPXV infected buffaloes.
Typical pox lesions (a vesicle progressing into a pustule with a central area of necrosis) were observed in humans infected with BPXV, on the hands and around the mouth of animal handlers, with unilateral axillary lymphadenopathy and oedema on the arm of a buffalo owner, as presented in Figure 4. BPXV was isolated on ECE, which showed clear raised white pock lesions on the CAM after three blind passages, as shown in Figure 5.
The virus was isolated on tissue cultures: on the Vero cell line after four blind passages, as in Figure 6, and on the BHK cell line after three blind passages, as in Figure 7, showing a cytopathic effect (CPE) that included rounding and clumping with cytoplasmic granulation and cluster formation.
Molecular identification of BPXV by PCR for the BPXV-specific C18L gene in two Vero-isolated virus samples (T1 and T2) and three BHK-isolated virus samples (T3, T4 and T5) showed clear bands at 368 bp, as illustrated in Figure 8.
Discussion
Buffalopox is a contagious viral disease of buffaloes that has an economic impact due to reduced milk yield resulting from mastitis and long periods of indigestion, a drop in meat production and low hide quality. It also has zoonotic significance; diagnostic tools are limited and no commercial vaccines are available.
Clinical signs were recorded among the buffaloes in this study in mild and severe generalized forms, in different ages and both sexes. Generalized skin lesions of BPXV, including erythema, papules, vesicles, pustules and scabs, were distributed over the skin in different stages, as shown in Figures 1 to 3. A drop in milk production, progressive emaciation, and poor skin quality were also recorded in the BPXV-infected buffaloes.
These signs agree with literature reports (Singh, 2007; Bhanuprakash et al., 2010c; Yadav et al., 2010; Prabhu et al., 2015) confirming that occurrences of buffalopox in buffaloes vary in morbidity, mortality and case fatality rates, with mortality and case fatality rates usually low.
The zoonotic infection of BPXV was observed in humans, as presented in Figure 4, with lesions on the arm, hand and around the lips of infected buffalo owners. This observation agrees with previous reports (Singh et al., 2006a; Prabhu et al., 2015) and with the assessment of the Joint Expert Committee on Zoonoses that BPXV is an important potential zoonotic disease, naturally transmitted among buffaloes and consequently to humans. It also agrees with reports (Kolhapure et al., 1997; Singh et al., 2006a, 2007a) stating that BPXV is a significant public health threat after the worldwide termination of vaccination against smallpox.
Isolation of BPXV was performed using embryonated chicken eggs (ECE), which showed typical pock lesions as shown in Figure 5. This is in agreement with previous reports (Prabhu et al., 2015;Eltom et al., 2020) that BPXV was isolated on ECE producing raised white pocks.
BPXV was isolated in cell lines (Vero and BHK) after blind passages, as illustrated in Figures 6 and 7. BPXV is close to the clade of VACV; in addition, studies on BPXV have revealed a close association between BPXV and the VACV envelope (Singh et al., 2006b, 2007b). Further, nucleotide and deduced amino acid sequences showed high identity with VACV (99%), and the C18L gene showed that BPXV isolates clustered into a group distinct from VACV (Singh et al., 2008, 2012).
BPXV was identified and confirmed by PCR amplification of the BPXV-specific C18L gene, showing bands at 368 bp (Figure 8) for the five tissue culture samples. This corresponds with previous reports.
Conclusion
It was concluded that BPXV is circulating in Egypt, causing economic losses due to drops in milk and meat production and low hide quality in infected buffaloes. BPXV infects buffaloes of different ages and both sexes. It can be isolated on the CAM of ECE and in different cell lines (Vero and BHK), and it was identified and confirmed by PCR for the BPXV-specific C18L gene. BPXV has proven to cause sporadic outbreaks, compounded by a zoonotic risk and economic impacts. Therefore, this infection should not be neglected, and the development of a commercial vaccine to control its occurrence and hazards should be pursued.
The distribution function of dark matter in massive haloes
We study the distribution function (DF) of dark matter particles in haloes of mass range 10^14-10^15 M_sun. In the numerical part of this work we measure the DF for a sample of relaxed haloes formed in the simulation of a standard ΛCDM model. The DF is expressed as a function of energy E and the absolute value of the angular momentum L, a form suitable for comparison with theoretical models. By proper scaling we obtain results that do not depend on the virial mass of the haloes. We demonstrate that the DF can be separated into energy and angular momentum components and propose a phenomenological model of the DF in the form f_E(E)[1+L^2/(2L_0^2)]^(-β_∞+β_0) L^(-2β_0). This formulation involves three parameters describing the anisotropy profile in terms of its asymptotic values (β_0 and β_∞) and the scale of transition between them (L_0). The energy part f_E(E) is obtained via inversion of the integral for the spatial density. We provide a straightforward numerical scheme for this procedure as well as a simple analytical approximation for a typical halo formed in the simulation. The DF model is extensively compared with the simulations: using the model parameters obtained from fitting the anisotropy profile, we recover the DF from the simulation as well as the profiles of the dispersion and kurtosis of radial and tangential velocities. Finally, we show that our DF model reproduces the power-law behaviour of the phase-space density Q = ρ(r)/σ^3(r).
INTRODUCTION
The distribution function (DF) provides the most general and complete way of statistical description of dark matter (DM) haloes. It carries maximum information on the spatial and velocity distributions of particles in such objects. Our knowledge on the DF is still being improved, mostly due to numerical experiments. In the last few years cosmological simulations have revealed increasingly detailed features of phase-space structure of DM haloes. These numerical results provide useful constraints on theoretical models of the DF. One property of interest in this field is the anisotropy of the velocity dispersion tensor. It has been demonstrated that the outer parts of the haloes exhibit more radially anisotropic trajectories than the halo centre (see e.g. Colín Cuesta et al. 2007). This feature, besides the well-studied density profile, has been considered as the main point of reference in the attempts at construction of a reliable model of the DF.
So far, a few approaches to this problem have been proposed. Cuddeford (1991) generalized the Osipkov-Merritt model (Osipkov 1979;Merritt 1985) to the DF which generates an arbitrary anisotropy in the halo centre and becomes fully radial at infinity. Although an analytical inversion for these models exists, the anisotropy profile cannot be reconciled with the numerical results: the rise from central to outer anisotropy is too sharp and the outer orbits are too radial (see ). An & Evans (2006a) noticed that a non-trivial profile of the anisotropy can be obtained from a sum of DFs with a constant anisotropy for which an analytical inversion is known (Cuddeford 1991;Kochanek 1996;Wilkinson & Evans 1999). However, the resulting anisotropy profiles are decreasing functions of radius and do not agree with those measured in cosmological simulations. Recently a very elegant method has been presented by Baes & van Hese (2007). The authors introduced a general ansatz for the anisotropy profile and then, for a given potential-density pair, derived the DF as a series of some special functions. This approach works well under the condition that the potential can be expressed as an elementary function of the corresponding density. This requirement, however, is not satisfied by many models, including the NFW density profile (Navarro, Frenk & White 1997) which is commonly used as a good approximation of the universal density profile of DM haloes.
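The angular-momentum structure that such models aim to reproduce can be illustrated with the phenomenological form quoted in the abstract, f_L(L) = [1 + L^2/(2L_0^2)]^(-β_∞+β_0) L^(-2β_0), which interpolates between two power laws: the logarithmic slope tends to -2β_0 for L << L_0 and to -2β_∞ for L >> L_0. A minimal numerical sketch (the parameter values are illustrative defaults, not fits from this work):

```python
import math

# Angular-momentum part of the phenomenological DF:
# f_L(L) = [1 + L^2/(2 L0^2)]^(-(beta_inf - beta0)) * L^(-2 beta0)
def f_L(L, beta0=0.0, beta_inf=0.5, L0=1.0):
    return (1.0 + L**2 / (2.0 * L0**2)) ** (-(beta_inf - beta0)) \
        * L ** (-2.0 * beta0)

# Local logarithmic slope d ln f_L / d ln L via a finite difference;
# it interpolates between -2*beta0 (L << L0) and -2*beta_inf (L >> L0).
def log_slope(L, **kw):
    h = 1e-6
    return (math.log(f_L(L * (1 + h), **kw)) - math.log(f_L(L, **kw))) \
        / math.log(1 + h)

print(log_slope(1e-4))  # inner regime: close to -2*beta0 = 0
print(log_slope(1e4))   # outer regime: close to -2*beta_inf = -1
```

With β_0 = 0 and β_∞ = 0.5 this corresponds to isotropic orbits at small L turning radially anisotropic at large L, with L_0 setting the transition scale.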
The DF inferred from simulations offers a direct way to test analytical models. According to the Jeans theorem, any spherically symmetric system in a state of equilibrium should possess a DF which is a function only of energy and the absolute value of the angular momentum. This theoretical postulate was taken into account in the computations carried out by Voglis (1994) and Natarajan, Hjorth & van Kampen (1997). In the first case the DF was obtained for a single relaxed halo which formed from cosmologically consistent initial conditions. It was shown that there were two main contributions to the DF, the halo population and the core population of particles. Both were effectively described by two independent phenomenological fits. Natarajan et al. (1997) determined the DF for a sample of cluster-size haloes formed in cosmological simulations. Their selection of objects included those with substructures and departing from equilibrium. They also discussed and took into account in their calculation the effect of boundary conditions defined by the virial sphere. However, the final results were not used to test quantitatively any model of the DF.
It seems that two main approaches to study the DF, namely theoretical modelling and feedback from the simulations, evolved rather separately barely crossing each other. The rare exceptions include the work of Lokas & Mamon (2001) who used the Eddington formula to derive numerically the DF following from the NFW profile in the isotropic case and that of Widrow (2000) who considered more general cuspy profiles and Osipkov-Merritt anisotropy. This paper is devoted to combining both approaches and providing a coherent analysis of the DF from the viewpoints of the simulations as well as the model construction. Our main aim is to propose a phenomenological model of the DF that recovers the results from the simulations as accurately as possible.
Our effort is mainly motivated by the future applications of the derived DF to the dynamical modelling of galaxy clusters. Although subhaloes in general have different density and velocity distributions than DM particles (Diemand, Moore & Stadel 2004), massive subhaloes (those likely to host galaxies) are distributed like DM particles (Faltenbacher & Diemand 2006). Although the correspondence between the massive subhaloes and galaxies in real clusters remains to be proven, our results should at least in principle be applicable to kinematic data sets for galaxy clusters. The traditional approach to such modelling was to reproduce the velocity dispersion profile of the galaxies by solving the Jeans equation (see e.g. Katgert, Biviano & Mazure 2004). It is well known, however, that from the dispersion alone one cannot constrain all the interesting parameters (such as the virial mass, the concentration of the NFW profile and anisotropy) because of the density-anisotropy degeneracy. One can break this degeneracy by using the fourth order velocity moment, the kurtosis, and solving an additional higher-order Jeans equation (Lokas 2002; Lokas & Mamon 2003; Lokas et al. 2006). Although this approach has many advantages (e.g. it does not require the knowledge of the full DF), it has so far been applied only to constant-anisotropy models, and the calculation of velocity moments requires the binning of the data in which some information is lost. Since the number of galaxies with measured redshifts per cluster is still rather low (of the order of a few hundred for the best-studied, nearby clusters) it is essential that all the available information is used. This can be achieved by fitting the projected DF to the data directly.
A few approaches along these lines have been attempted already. For example, Mahdavi & Geller (2004) used a simple DF of the form f (E, L) ∝ E α−1/2 L −2β (which yields constant anisotropy) to constrain the mass profile and orbital structure using combined kinematic data sets for nearby galaxy groups and clusters, while van der Marel et al. (2000) in their study of CNOC clusters did not assume an explicit form for the energy-dependent part of the DF, but still used constant anisotropy. Another study used a simplified form of the projected isotropic DF constructed from the projected density combined with a Gaussian distribution for the line-of-sight velocities to study the properties of members versus interlopers in simulated kinematic data sets. None of the DFs used so far, however, reflects accurately the true properties of cluster-size DM haloes found in N-body simulations.
The paper is organized as follows. Section 2 provides the theoretical framework and defines all the basic quantities used later on in the paper. In the next section we discuss the details of the computation of the DF of DM particles in the haloes formed from cosmological simulations and provide examples of the results. Section 4 is devoted to the derivation of a phenomenological model of the DF; we discuss the separability of the DF in energy and angular momentum and present an explicit formula for the L-dependent part of DF. An extensive comparison of the model with the simulations is presented in Section 5, where we also provide an analytical approximation for the energy-dependent part of the DF obtained for an average halo. Finally, the discussion follows in Section 6.
THEORETICAL FRAMEWORK
This section summarizes the theoretical background of the paper. First, we introduce scaling properties consistent with the NFW density profile. We will use this profile in the paper, but our approach is not restricted to this particular density distribution and can be easily generalized to any profile consistent with simulations (see below). Second, we briefly describe the relation between the differential DF and the DF itself. Finally, we discuss the consequences of the finite volume of the virialized area of a halo. In particular, the relation between the DF and its differential form is properly modified to account for this effect.
Scaling properties
It is a well known fact that the density profiles of DM haloes formed in cosmological simulations exhibit striking similarity. NFW showed that most of them are well fitted within the virial sphere by the universal two-parameter profile which can be expressed in the following way

ρ(r) = Ms / (4π rs³) (ln 2 − 1/2)⁻¹ [x (1 + x)²]⁻¹,    (1)

where x = r/rs. The two free parameters are the scale radius rs and the mass enclosed within the sphere of this radius Ms. The (positive) gravitational potential inferred from the Poisson equation reads (Cole & Lacey 1996; see also Lokas & Mamon 2001)

Ψ(r) = Vs² ln(1 + x) / x,    (2)

where the velocity unit Vs is related to the circular velocity Vcir(rs) at the scale radius via Vs = Vcir(rs)(ln 2 − 1/2)⁻¹ᐟ². Let us note that rs, Vs and Ms define a set of natural units of the NFW model. By scaling any quantity by a proper combination of them we remove the explicit dependence on the free parameters of the NFW model. This is an essential property if we want to study the dynamics of a whole class of haloes with NFW-like density profiles. Hereafter, we will keep this scaling in all equations in the text. In many places we will also use a unit of the angular momentum Ls as a substitute for Vsrs.
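The scaled NFW relations above can be collected in a few lines of code. A minimal sketch in the natural units of the model (radii in rs, masses in Ms, potential in Vs²); the normalisation 1/(ln 2 − 1/2) follows from requiring M(<rs) = Ms, and the function names are ours:

```python
import numpy as np

# Natural units: radii in r_s, masses in M_s, potential in V_s^2.
LN2M = np.log(2.0) - 0.5  # ln 2 - 1/2, the NFW mass normalisation

def nfw_mass(x):
    """Enclosed mass M(<x)/M_s at x = r/r_s."""
    return (np.log1p(x) - x / (1.0 + x)) / LN2M

def nfw_psi(x):
    """Positive potential Psi/V_s^2 = ln(1+x)/x (tends to 1 at the centre)."""
    return np.log1p(x) / x
```

In these units Ψ(rs) = ln 2 ≈ 0.69 Vs², consistent with the value quoted later in the text.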
The distribution function
The DF is a fundamental concept in statistical mechanics of N-body systems. It describes the phase-space density of particles of such a system without any detailed knowledge of the time evolution of N trajectories. Following the Jeans theorem, a steady-state DF, which is of interest for us here, depends on the phase-space coordinates only through the integrals of motion. Although the shape of DM haloes is in general better approximated by a three-axial ellipsoid rather than a sphere (see e.g. Gottlöber & Yepes 2007), it is still very effective to assume spherical symmetry in the dynamical approach. Given that the streaming motions and internal rotation within the virial sphere are negligible compared to higher velocity moments, spherical symmetry implies that the DF can be expressed as

f = f(E, L),    (3)

where E is the positively defined binding energy

E = Ψ(r) − v²/2    (4)

and L the absolute value of the angular momentum per unit mass

L = |r × v|.    (5)

The gravitational potential in equation (4) is related to the DF through the Poisson equation

∇²Ψ(r) = −4πG ∫ f(E, L) d³v.    (6)

The most natural and straightforward probe of f(E, L) in numerical experiments is the so-called differential DF defined in the following way

N(E, L) = d²M / (dE dL).    (7)

One may intuitively interpret this function as mass density in energy-angular momentum space. The DF itself can be simply derived dividing N(E, L) by the volume g(E, L) of the hypersurface of constant energy and angular momentum embedded in the phase space

f(E, L) = N(E, L) / g(E, L).    (8)

It is easy to show that the volume of this hypersurface reads (see Appendix A)

g(E, L) = 8π² L Tr(E, L),    (9)

where Tr(E, L) is the radial period of an orbit given by the following integral over radius from the pericentre rp to the apocentre ra

Tr(E, L) = 2 ∫_{rp}^{ra} dr / √(2[Ψ(r) − E] − L²/r²).    (10)

The upper panel of Fig. 1 shows a contour map of g(E, L) (dotted lines) calculated for the NFW gravitational potential (2). The Lmax(E) line is the profile of maximum angular momentum which consists of points corresponding to circular orbits.
This curve divides the energy-angular momentum plane into an area describing the physical orbits of a system (below Lmax) and the zone not permitted by mechanics (above Lmax). Note that we are using the scaling relations introduced in the previous subsection so the results do not depend explicitly on the halo mass Ms and the scale radius rs. In some places later on we will refer to the inverse function for Lmax(E) by Emax(L). Voglis (1994) and Natarajan et al. (1997) showed that the dependence of Tr(E, L) on the angular momentum is very weak and could be neglected without loss of precision. This is understandable if we note that the NFW-like potentials are still not so far away from the isochrone potential Ψ(r) ∝ (b + √(b² + r²))⁻¹ which leads to a purely energy-dependent Tr proportional to E^(−3/2) (Binney & Tremaine 1987). Following Natarajan et al. (1997) we will use this feature to simplify expression (9). To do this we first note that the volume of the hypersurface of constant energy gE is given by

gE(E) = ∫_{0}^{Lmax(E)} g(E, L) dL.    (11)

Taking advantage of the weak dependence of Tr on L, equation (10), we get

g(E, L) ≈ 2 L gE(E) / Lmax²(E).    (12)

On the other hand, one can show that gE(E) reads (see Appendix A)

gE(E) = 16π² ∫_{0}^{rmax(E)} r² √(2[Ψ(r) − E]) dr,    (13)

where rmax(E) is the apocentre radius of the radial orbit. Inserting (13) into (12) one immediately gets a very simple approximation for g(E, L) involving only a one-dimensional integral without singularities, in contrast with g(E, L) derived by expression (10). We find that this approximation reproduces the exact formula (9) with enough accuracy. Taking advantage of its numerical simplicity we use it in the majority of our calculations.

Fig. 1 caption: In both cases the NFW profile was assumed. Solid lines show the profiles of the maximum angular momentum of a given system. In the lower panel the three shades of gray mark the three characteristic zones according to the orbit size defined by the relation of the virial radius rv to the pericentre radius rp and the apocentre radius ra, as labelled.
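The radial period and the phase-space volume g(E, L) = 8π²L Tr(E, L) described above can be evaluated numerically. A sketch for the scaled NFW potential (energies in Vs², radii in rs, angular momenta in Ls = Vs rs); the bracketing of the turning points and the angular substitution that removes the inverse-square-root endpoint singularities are our own implementation choices, not the paper's:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def psi(x):
    # scaled NFW potential, ln(1+x)/x in units of V_s^2
    return np.log1p(x) / x

def radicand(r, E, L):
    # 2 [psi(r) - E] - L^2/r^2, positive between pericentre and apocentre
    return 2.0 * (psi(r) - E) - (L / r) ** 2

def turning_points(E, L, r_lo=1e-8, r_hi=1e4):
    """Pericentre and apocentre radii for a bound orbit with L < Lmax(E)."""
    rr = np.geomspace(r_lo, r_hi, 2000)
    imax = np.argmax(radicand(rr, E, L))   # a point near the circular radius
    rp = brentq(radicand, rr[0], rr[imax], args=(E, L))
    ra = brentq(radicand, rr[imax], rr[-1], args=(E, L))
    return rp, ra

def radial_period(E, L):
    rp, ra = turning_points(E, L)
    mid, amp = 0.5 * (rp + ra), 0.5 * (ra - rp)
    def integrand(theta):
        # r = mid - amp*cos(theta) maps [0, pi] onto [rp, ra]; the factor
        # (r - rp)(ra - r) cancels the endpoint zeros of the radicand
        r = mid - amp * np.cos(theta)
        return 1.0 / np.sqrt(radicand(r, E, L) / ((r - rp) * (ra - r)))
    val, _ = quad(integrand, 0.0, np.pi)
    return 2.0 * val

def g_EL(E, L):
    # phase-space volume of the constant-(E, L) hypersurface
    return 8.0 * np.pi ** 2 * L * radial_period(E, L)
```

More bound orbits are smaller and hence have shorter radial periods, which provides a quick sanity check of the routine.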
Boundaries of the haloes
So far we have discussed the relation between the DF and its differential form for an infinite system. In practice, however, we restrict our numerical analysis to the interior of the virial sphere which separates the equilibrium part of a halo from the infall region. We define the virial radius rv of this sphere by

Mv = (4/3) π ∆c ρc rv³,    (14)

where Mv is the virial mass, ρc is the present critical density and ∆c is the virial overdensity. Another parameter commonly used to describe the size of the virial sphere in terms of rs is the concentration c = rv/rs. The existence of the boundary of the virialized part of the halo implies that Tr given by (10) must be replaced by

Tr(E, L) = 2 ∫_{rp}^{min(ra, rv)} dr / √(2[Ψ(r) − E] − L²/r²),    (15)

where the upper limit of the integral is the minimum of the virial radius and the radius at the apocentre (see Appendix A for details). Combining (15) with (9) one gets a general formula for the volume g(E, L) in the presence of a spherical boundary of a halo. Contrary to the conclusion of Natarajan et al. (1997), we find that the approximation (12) is no longer justified for orbits extending beyond the virial sphere (ra > rv). This follows from the fact that the angular momentum dependence of (15) becomes non-negligible and the integral (11) cannot be simplified to the form of (12).
Using (9) and (15) with the NFW potential, we calculated g(E, L) for a halo limited by the virial sphere of radius rv = 5 rs (see the lower panel of Fig. 1). As expected, the result differs from an infinite system by the orbits with ra > rv and remains unchanged for trajectories wholly included within the virial sphere.
The simulation
For our N-body simulation we have assumed the WMAP3 cosmology (Spergel et al. 2007) with matter density Ωm = 0.24, the cosmological constant ΩΛ = 0.76, the dimensionless Hubble parameter h = 0.73, the spectral index of primordial density perturbations n = 0.96 and the normalization of the power spectrum σ8 = 0.76. We have used a box of size 160 h⁻¹ Mpc and 1024³ particles. Thus the particle mass was 3.5 × 10⁸ M⊙. Starting at redshift z = 30 we followed the evolution using the MPI version of the Adaptive Refinement Tree (ART) code (Kravtsov, Klypin & Khokhlov 1997). We identified clusters with the hierarchical friends-of-friends (FOF) algorithm (Klypin et al. 1999) with a linking length of 0.17 times the mean inter-particle distance which roughly corresponds to an overdensity of 330. We have selected 36 clusters at redshift z = 0 in the range of virial mass (0.15–2) × 10¹⁵ M⊙, where the virial overdensity parameter appropriate for our cosmological model was assumed to be ∆c = 93.8 (Lokas & Hoffman 2001). Our sample did not include clusters with two substructures of approximately the same mass and a poor fit of the NFW profile suggestive of a recent major merger.
Starting from the FOF position of the cluster we have determined the highest density peak as the final centre of the clusters. This centre coincides with the position of the most massive substructure found at the linking length 8 times shorter and also with the position of the halo found by the BDM halo finder (Klypin et al. 1999).
Computation of the distribution function
In the first step of the computation we calculate the binding energy (4) and the angular momentum (5) per unit mass for each particle within the virial sphere of each halo. Spherical symmetry implies that we have to apply in (4) the radial profile of the gravitational potential

Ψ(r) = G ∫_{r}^{∞} M(s)/s² ds,    (16)

where Ψ(∞) = 0. However, the mass profile of the equilibrium part of a halo reaches no further than the virial radius.
On the other hand, all analytical models of the DF involve the density profile extending to infinity. We found that the only coherent way to reconcile both facts is to split the integral (16) in two parts

Ψ(r) = G ∫_{r}^{rv} M(s)/s² ds + G ∫_{rv}^{∞} MNFW(s)/s² ds.    (17)

The first term is evaluated numerically by integration of a discrete mass profile. The second term is an analytical extension with the NFW density profile which is an assumption of the DF model introduced in the following section. Its contribution to the potential is a constant equal to Vs² ln(1 + c)/c. Fig. 2 shows the resulting energies and angular momenta of particles inside the virial sphere of one of the simulated haloes. The profile of the maximum angular momentum (solid line) and the profile of vanishing radial velocity at the virial sphere (dashed line) were calculated for the exact gravitational potential given by (17). All particles occupy the area permitted by mechanics or lie very close to the boundary line. Interestingly, quite a large fraction of them have orbits extending beyond the virial sphere. As noted in the previous section, we keep Vs² and Ls as units of energy and angular momentum respectively. The parameters of the NFW model were obtained for each halo by fitting the NFW formula to the density profile measured in logarithmic radial bins.
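The two-part evaluation of the potential described above can be sketched as follows. We work in units where G = rs = Vs = 1, so particle masses enter as Gm/(rs Vs²) and the enclosed mass of an exact NFW halo would be GM(x) = ln(1+x) − x/(1+x); the trapezoidal rule for the discrete first term is our own choice, and the NFW tail beyond rv contributes the constant ln(1+c)/c:

```python
import numpy as np

def potential_profile(r_sorted, gm_part, c):
    """Psi at each (sorted, scaled) particle radius inside the virial sphere.

    r_sorted : particle radii in units of r_s, ascending
    gm_part  : G * (particle mass) in units of V_s^2 r_s
    c        : concentration r_v / r_s
    """
    gm_in = gm_part * np.arange(1, r_sorted.size + 1)          # G M(<r_i)
    integrand = gm_in / r_sorted ** 2
    # trapezoidal integral of G M(s)/s^2 from r_i out to the last particle
    seg = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r_sorted)
    inner = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
    # constant contribution of the NFW extension beyond r_v
    return inner + np.log1p(c) / c
```

For a mock halo sampled from the NFW enclosed-mass profile this reproduces Ψ(r) = ln(1+x)/x to within a few per cent.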
In the next step we determine for each halo the differential DF given by (8). In this calculation we used our own version of the FiEstAS (Field Estimator for Arbitrary Spaces) algorithm designed to infer the density field from a scatter diagram embedded in a space of any number of dimensions (see Ascasibar & Binney 2005 for more details). As a result of this computation we get an estimate of N(E, L) at all points of the energy-angular momentum plane corresponding to the particles inside the virial sphere. Once N(E, L) is calculated the DF can be easily obtained via (8). As discussed in Section 2, we used approximation (12) for the orbits contained inside the virial sphere and the exact formula (9) with (15) for trajectories extending beyond rv. We found that the additional advantage of expression (12) is that it could be evaluated at any point of the energy-angular momentum plane. This helps us to keep the estimates of the DF obtained for points with angular momentum lying slightly above Lmax(E).
In order to derive a contour map or a profile of the DF we introduce a regular dense mesh on the energy-angular momentum plane and find the median value of the DF in each cell. Such a set of median points is considered as the final numerical approximation of the DF and is used in preparation of all plots in this paper. Fig. 3 shows two examples of the resulting contour maps obtained for two different haloes. The unit of the DF in this and the following figures is Ms rs⁻³ Vs⁻³. The interval between the iso-DF lines is fixed at 0.25 on the logarithmic scale. The lack of the DF estimation in the lower part of each diagram arises from the fact that this zone is occupied by very few particles (see e.g. Fig. 2) so that no information on the distribution can be retrieved. Let us note that this is an effect of the finite mass resolution.
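The median-on-a-mesh step described above is straightforward to reproduce with standard tools. This sketch takes per-particle DF estimates from any density estimator in the (E, L) plane (FiEstAS in the paper; a placeholder input here) and collects the cell medians:

```python
import numpy as np
from scipy.stats import binned_statistic_2d

def median_df_map(E, L, f_est, n_e=40, n_l=40):
    """Median of per-particle DF estimates on a regular (E, L) mesh."""
    stat, e_edges, l_edges, _ = binned_statistic_2d(
        E, L, f_est, statistic='median', bins=[n_e, n_l])
    return stat, e_edges, l_edges
```

Cells with no particles come back as NaN, which matches the blank lower region of the contour maps where the mass resolution leaves too few particles.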
THE ANALYTICAL MODEL OF THE DISTRIBUTION FUNCTION
A general form of the DF for spherical systems in the state of equilibrium is a function of energy and the absolute value of angular momentum f(E, L). In our approach we assume that the DF is separable in energy and angular momentum

f(E, L) = fE(E) fL(L).    (18)

This is the first assumption that considerably narrows the family of possible solutions. Therefore, it is necessary to check how robust it is. We address this problem in the next section, where we present an extensive comparison of the analytical model with the simulations. The angular momentum part of the DF in equation (18) specifies the anisotropy of the velocity dispersion tensor. This quantity is commonly described with the so-called anisotropy parameter

β(r) = 1 − σθ²(r)/σr²(r),    (19)

where σr and σθ are the radial and the tangential velocity dispersions respectively and we assume there are no streaming motions. The values of this parameter range from −∞ for circular orbits to 1 for purely radial trajectories. Fig. 4 shows the average anisotropy profile of the simulated haloes used for the measurement of the DF. The light gray rectangle in the background of the plot indicates the position of the virial radius. It is clearly seen that the anisotropy is typically a growing function of radius, with values ∼ 0.07 in the halo centre and ∼ 0.3 at the virial sphere (see e.g. Cuesta et al. 2007 for comparison). On the other hand, the considerable width of the interquartile range of the measured β(r) (dark gray region) signifies that the profiles of single haloes differ among each other. Occasionally flat or decreasing profiles are measured. It seems that a simple and general enough analytical model of the anisotropy should possess at least three free parameters which determine asymptotic values of β(r) for small and large radii and a scale of transition between them. We proceed with the construction of such a model by introducing a proper ansatz for fL(L).
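The anisotropy profile β(r) = 1 − σθ²/σr² discussed above can be measured from particle data along these lines. The logarithmic binning and the use of half of the total tangential spread for one tangential component (valid in the absence of streaming motions) are our own implementation choices:

```python
import numpy as np

def anisotropy_profile(pos, vel, n_bins=20):
    """beta(r) = 1 - sigma_theta^2/sigma_r^2 in logarithmic radial bins."""
    r = np.linalg.norm(pos, axis=1)
    rhat = pos / r[:, None]
    v_r = np.einsum('ij,ij->i', vel, rhat)             # radial component
    v_t2 = np.einsum('ij,ij->i', vel, vel) - v_r ** 2  # vtheta^2 + vphi^2
    edges = np.geomspace(r.min(), r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
    beta = np.empty(n_bins)
    for k in range(n_bins):
        m = idx == k
        sr2 = np.var(v_r[m])
        st2 = 0.5 * np.mean(v_t2[m])   # one tangential component, no rotation
        beta[k] = 1.0 - st2 / sr2
    return 0.5 * (edges[1:] + edges[:-1]), beta
```

An isotropic velocity field gives β ≈ 0 in every bin, while purely radial motions give β = 1, reproducing the limiting values quoted in the text.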
Louis (1993) showed that the following asymptotes of the angular momentum part of the DF

fL(L) → const for L ≪ L0,   fL(L) → (L/L0)^(−2β∞) for L ≫ L0,    (20)

where L0 is an angular momentum constant, lead to constant anisotropy β∞ at infinity (r²Ψ(r) ≫ L0²) and β = 0 in the halo centre. This result can be easily generalized to the case of a non-isotropic velocity distribution in both limits of radius. First, let us note that the central part of the halo is dominated by the particles with small angular momenta, namely L² ≤ 2r²Ψ(r) ≪ L0². Then, remembering that the DF of constant anisotropy takes the form (Hénon 1973; Binney & Tremaine 1987; Lokas 2002)

f(E, L) = fE(E) L^(−2β),    (21)

it is easy to notice that the formula (20) can be rewritten in the following way

fL(L) → (L/L0)^(−2β0) for L ≪ L0,   fL(L) → (L/L0)^(−2β∞) for L ≫ L0,    (22)

where β0 is the central anisotropy of a system. As shown by An & Evans (2006b), the upper limit for β0 is equal to γ/2, where r^(−γ) is the density profile near the halo centre. This means that for the NFW density model we have β0 ≤ 1/2. The simplest function obeying the asymptotic conditions formulated above is a double power-law function

fL(L) = (1 + L²/2L0²)^(β0−β∞) L^(−2β0).    (23)

As shown in the following section, this ansatz leads to a very realistic anisotropy profile that fits well the β(r) profiles of simulated haloes. Furthermore, the simplicity of formula (23) guarantees that the energy part of the DF can be quite easily calculated via the inversion of the integral equation

ρ(r) = ∫ fE(E) fL(L) d³v.    (24)

The key idea of this procedure lies in an analytical simplification of the right-hand side of (24) to a one-dimensional integral. The resulting equation is then solved numerically for fE(E). The technical details of this calculation are summarized in Appendix B. Once the full form of the DF is determined one can also calculate the velocity moments. All formulae are reduced to one-dimensional integrals which can be easily evaluated numerically (see Appendix C). The top row of Fig. 5 shows the anisotropy, dispersion σr and kurtosis κr = ⟨vr⁴⟩/σr⁴ of the radial velocity inferred from the model of the DF.
The calculations were carried out assuming the NFW density profile and four sets of model parameters chosen to illustrate the flexibility of the model: β0 = 0.1 and β∞ = 0.3, 0.5 (solid and dashed lines respectively); β0 = β∞ = 0.3 (dotted line); β0 = 0.4 and β∞ = 0.1 (dashed-dotted line). In all cases the transition value of L0 = 0.25 Ls was used.
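The double power-law angular-momentum factor of the model can be written down directly from its asymptotic behaviour (fL ∝ L^(−2β0) well inside L0 and ∝ L^(−2β∞) far outside, following the constant-anisotropy scaling f ∝ L^(−2β)). The explicit form below is our reconstruction under those assumptions, with arbitrary normalisation:

```python
import numpy as np

def f_L(L, beta0, beta_inf, L0):
    """Double power-law ansatz for the angular-momentum part of the DF."""
    L = np.asarray(L, dtype=float)
    return (1.0 + L ** 2 / (2.0 * L0 ** 2)) ** (beta0 - beta_inf) \
        * L ** (-2.0 * beta0)
```

With the fitted values quoted later in the text (β0 = 0.09, β∞ = 0.34, L0 = 0.198 Ls) the logarithmic slope runs from −0.18 at small L to −0.68 at large L.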
The dispersion profiles for the two models with increasing β(r), as expected, differ only for large radii which is the effect of different values of β∞. Interestingly, the corresponding kurtosis profiles clearly signify flat-topped velocity distribution in the outer part of the halo (κr < 3), highly peaked distribution in the centre (κr > 3) and roughly Gaussian for radii around rs (κr ≈ 3). On the other hand, non-increasing β(r) profiles lead to less peaked velocity distributions in the centre. It seems therefore that the typical anisotropy of DM haloes, as shown in Fig. 4, is expected to coincide with the kurtosis rapidly growing towards the halo centre (see also Fig. 10 below). As we will see in the following section, this is one of the most characteristic features of the phase-space structure of massive DM haloes.
In the bottom panels of Fig. 5 we plotted the DFs corresponding to four sets of model parameters, as described above. The three panels from the left to the right show the energy part of the DF, contour maps and the profiles for three fixed values of angular momentum. The plots reveal some interesting signatures of the specific shape of β(r) profile. For example, the inclination of the iso-DF lines with respect to the energy axis decreases with increasing β0: more isotropic β at the centre corresponds to more vertical iso-DF lines; also the shape of the lines is somewhat different. These features are also to some extent visible in the contour maps of the DF for two simulated haloes in Fig. 3. The upper map represents a halo with an increasing β(r), whereas the second one depicts the case of a decreasing β(r) profile. Both haloes are analyzed in terms of velocity moments and the DF in the following section.
Recently it has been found that the simulation data are well reproduced by the anisotropy profile of the form

β(r) = (1/2) r / (r + r1/4),    (25)

where r1/4 is the radius where β = 0.25. Assuming β0 = 0 and β∞ = 0.5 in our DF model, we made a comparison of the resulting anisotropy with the functional form (25). Fig. 6 shows both anisotropies for three values of r1/4. Note that both β(r) profiles have similar shapes, although our anisotropy profile has a somewhat sharper rise at small radii. We also find that the radius r0 characteristic of the DF model, for which β is the mean of the limiting values, β(r0) = (β0 + β∞)/2, depends weakly on β0 and β∞. For parameter ranges leading to β(r) profiles covering the interquartile area of anisotropy from the simulation (0 < β0 < 0.15, 0 < β∞ < 0.6 and 0.04 < L0/Ls < 25), this radius is well (within 5 percent accuracy) approximated by r0/rs = 3.69 (L0/Ls)^0.97 + 2.27 (L0/Ls)^1.9.
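The two quantities compared above are simple to evaluate: the anisotropy shape with β(r1/4) = 0.25, which we write as β(r) = r/[2(r + r1/4)] (our reconstruction, consistent with that normalisation), and the 5-per-cent-accurate fit for the transition radius r0 of the DF model quoted in the text:

```python
import numpy as np

def beta_quarter(r, r_quarter):
    """Anisotropy profile rising from 0 to 1/2, with beta(r_quarter) = 0.25."""
    return 0.5 * r / (r + r_quarter)

def r0_over_rs(l0_over_ls):
    """Fitted transition radius r0/rs as a function of L0/Ls (from the text)."""
    return 3.69 * l0_over_ls ** 0.97 + 2.27 * l0_over_ls ** 1.9
```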
The distribution function
The DF proposed in the previous section is a phenomenological model in the sense that it possesses free parameters whose values should be adjusted to the simulation data. All three parameters were introduced to determine a family of anisotropy profiles so that it is β(r) that is most sensitive to the variations of β0, β∞ and L0. Consequently, we decided to constrain the parameters of the model by fitting the β(r) profile inferred from the DF model to the median profile measured in simulated DM haloes. The best-fitting parameters are: β0 = 0.09, β∞ = 0.34 and L0 = 0.198 Ls.
The corresponding best-fitting profile of the anisotropy is plotted as a dashed line in the lower left panel of Fig. 9.
Once the model parameters are adjusted the DF can be compared with its counterpart measured from the simulation. Fig. 7 shows this comparison in terms of a contour map and the profiles for constant angular momentum or energy. Dark gray regions in all panels indicate the interquartile range of the DF values within the halo sample. The lighter gray area in the background of the upper diagram marks the points of vanishing radial velocity at the virial radius rv. Its boundaries are fixed by the first and third quartile of virial radii in the halo sample, 4.1rs and 6.0rs respectively.
Although some deviations of the model (dashed lines) from the results of the simulations are visible, in general the theoretical profiles are included within the interquartile range or lie very close to its boundaries. As expected, the strongest discrepancy between the model and the simulation is present in the part of the energy-angular momentum plane populated by the particles with orbits extending beyond the virial sphere (the area to the left of the ra = rv line). However, given that this is the only part of the energy-angular momentum plane affected by the infalling material, we think that the observed differences are acceptable.
The separability of the distribution function
A critical point of the derivation of the DF presented in the previous section was the factorization introduced by equation (18). In order to inspect the robustness of this assumption we propose a simple test. We calculate the ratio of the DF from the simulation to the energy part of the DF model with parameters adjusted to the anisotropy profile from the simulation. Under the assumption that the real DF is factorizable in energy and angular momentum, we can expect that the resulting ratio should be a weak function of energy equal to fL(L) given by (23). Fig. 8 shows that the variations of this ratio with respect to fL(L) are of the same order as the width of the interquartile range which means that separability is acceptable from the statistical point of view. A small systematic deviation can be seen for L ∼ 0.1 Ls. However, this is certainly a local feature since this trend is not repeated in other profiles. Let us emphasize that this test of separability depends strongly on the reliability of fL(L). One can imagine that an incorrect form of fL(L) would likely lead to a negative result of the test, whether f (E, L) is separable or not. On the contrary, a positive result of such a test in our case means that not only is the assumption of factorization valid but the approximation for fL(L) is reasonable as well.
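The separability test described above admits a simple grid-based version: if f(E, L) = fE(E) fL(L), each constant-energy slice of f, normalised at a reference L-bin, collapses onto the same curve, and the scatter across energies measures the departure from separability. This is our own simplified stand-in for the test performed with the simulation DF:

```python
import numpy as np

def separability_spread(f_grid):
    """Max relative scatter across energy slices of the normalised L-profiles.

    f_grid[i, j] samples f at energy E_i and angular momentum L_j;
    returns 0 exactly for a separable f.
    """
    rows = f_grid / f_grid[:, :1]   # normalise each E-slice at the first L-bin
    return np.max(np.std(rows, axis=0) / np.mean(rows, axis=0))
```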
Velocity moments
Further comparison between the simulation and the DF model can be done in terms of velocity moments. This is depicted in Fig. 9 where the dispersion and kurtosis of the radial and tangential velocity are plotted. In the bottom part of this figure we show the profiles of the anisotropy β(r) and the β4 parameter which measures the anisotropy of the tensor of the fourth velocity moment. By analogy with the parameter β(r) we defined β4(r) in the following way

β4(r) = 1 − ⟨vθ⁴⟩/⟨vr⁴⟩.

The dashed lines in each panel of Fig. 9 are the model predictions, except for the β(r) profile (lower left panel) which is a fit of the model providing constraints on parameter values given in the previous subsection. Theoretical dispersion profiles coincide very well with the profiles from the simulation. We notice quite a good agreement also for the β4(r) parameter. On the other hand, theoretical profiles of the kurtosis are systematically biased towards higher values, but typically by less than 10 percent. Nevertheless, their shapes clearly recover the shapes of the median profiles from the simulation. Moreover, for both the radial and tangential velocity a characteristic growth of κ from the value 3 around the virial radius up to 4 in the halo centre is seen.
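The fourth-moment diagnostics used above can be estimated from velocity samples as follows. The explicit form β4 = 1 − ⟨vθ⁴⟩/⟨vr⁴⟩ is our assumption, chosen by analogy with β so that it vanishes for any isotropic velocity distribution; the kurtosis estimator assumes no streaming motions:

```python
import numpy as np

def fourth_moment_stats(v_r, v_theta):
    """Radial kurtosis kappa_r = <v_r^4>/sigma_r^4 and beta4 (assumed form)."""
    v_r = np.asarray(v_r, dtype=float)
    v_theta = np.asarray(v_theta, dtype=float)
    kappa_r = np.mean(v_r ** 4) / np.var(v_r) ** 2
    beta4 = 1.0 - np.mean(v_theta ** 4) / np.mean(v_r ** 4)
    return kappa_r, beta4
```

A Gaussian isotropic sample gives κr ≈ 3 and β4 ≈ 0, matching the roughly Gaussian behaviour near rs described in the text.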
Although the kurtosis bias is enclosed within acceptable limits (kurtosis is known to be sensitive to any noise), it would be desirable to find out the reason for this behaviour. Since our statistical samples consist of 10⁴–10⁵ particles per radial bin, we ruled out the possibility of a bias of the kurtosis estimator (see Lokas & Mamon 2003). We also excluded the possibility that this is caused by some specific assumptions of the model. For example, changing the NFW density distribution to the 3D Sersic profile, which fits the simulation data even better (Navarro et al. 2004; Merritt et al. 2005; Prada et al. 2006), we still encounter the same bias. In addition, perturbing the model parameters of fL(L) does not explain the situation either. We therefore conclude that the slight discrepancy in the predictions of our model concerning the kurtosis must signify reaching the limitations of the theoretical approach based on using the global, smooth gravitational potential of a system. We suppose that the problem is caused by the presence of substructures which perturb locally the trajectories of particles with respect to the orbits determined by the global potential of a halo. What one gets from the simulation is really a convolution of the velocity distribution expected from the model involving a global potential with the distribution of velocity perturbations occurring due to density fluctuations. The estimation of the importance of this effect is a complicated task since the perturbation of the particle orbit depends on many variables, such as the distribution of substructures, softening of the potential and particle velocity. However, some qualitative conclusions can be drawn. First, note that low-velocity particles are affected by the density perturbations more strongly. Consequently, the peak of the resulting velocity distribution is suppressed and the tails are preserved which may effectively decrease the kurtosis (see Fig. 9).
Second, the effect of the perturbation on the velocity dispersion is a higher order correction compared to the dispersion obtained for a system with a global potential. This means that the resulting dispersion profiles are barely changed and they are still expected to coincide well with theoretical predictions.
It seems an intriguing issue that the profile of tangential kurtosis signifies Gaussianity of the velocity distribution at radii around rs where the logarithmic slope of the density profile is equal to −2. One could suppose that some signatures of the so-called isothermal sphere are locally present. Interestingly, this statement is also supported by the shape of the DF for E ≤ Ψ(rs) ≈ 0.7 Vs², that is the energy range of particles at rs. Referring to the middle panel of Fig. 7 it is easy to notice that the DF grows exponentially with energy, as expected for systems not very different from the isothermal sphere. The distribution of the radial velocity, on the other hand, takes the Gaussian form for radii around 0.3 rs. This difference could be a consequence of the non-vanishing anisotropy parameter, which is not accounted for in the classical formulation of the isothermal model (Binney 1982; Binney & Tremaine 1987).
So far we have tested the DF model for a typical halo associated with the median properties of our halo sample. In order to check the applicability of our model more extensively we repeat such a comparison for single haloes. The DF in this case is expected to differ from one halo to another due to the observed variety of anisotropy profiles. Results of this analysis are summarized in Fig. 10. To save space we included only five haloes with representative, rather different anisotropy profiles (upper panels), from the most strongly increasing profile in the left panel to a decreasing one on the right-hand side. The second and fifth panels correspond to the haloes for which contour maps of the DF are shown in Fig. 3 (the top and bottom panel respectively). We restricted the number of profiles to those most essential: we plot the dispersion and kurtosis of the radial velocity and the anisotropies β(r) and β4(r). We also show the profiles of the DF for three values of angular momentum or energy. In all panels the solid lines represent simulation results, whereas dashed lines are the predictions of the model. As in Fig. 9, dashed lines in the case of parameter β(r) indicate best fitting profiles of the model. Gray regions in the panels of two bottom rows mark the interquartile ranges of the DF which describe the scatter of points resulting from the FiEstAS algorithm.
From the analysis of Fig. 10 we conclude that all profiles, regardless of the anisotropy, are very well reproduced by our model of the DF. In general, the theoretical DF does not exceed the limits of the interquartile range or lies very close to its boundaries (see two bottom rows of panels in Fig. 10). Surprisingly, we find that the agreement is usually almost equally good when the model is applied to the haloes with massive substructures which were rejected from our sample. This is certainly good news for the future applications of our DF to the dynamical modelling of galaxy clusters which very often display signatures of recent major mergers.

Figure 11. Relative errors of velocity moments and anisotropies inferred from the DF obtained with the analytical approximation of fE(E) given by (29). All profiles were compared with the results of exact calculations summarized in Appendix C.
Table 1. Values of the parameters used in the approximation of the energy part of the DF (29). The first column lists the parameters. The second column gives the parameter values for the model fitted to the anisotropy of the average halo from the simulation (β_0 = 0.09 and β_∞ = 0.34) and the third one gives the values which reproduce the DF for the anisotropy profile (25) (β_0 = 0 and β_∞ = 0.5). In both cases L_0 = 0.198 L_s was assumed.

parameter | β_0 = 0.09, β_∞ = 0.34 | β_0 = 0, β_∞ = 0.5
E_1       | 0.078                  | 0.07
α_1       | 2.10                   | 1.74
E_2       | 0.071                  | 0.085
α_2       | 2.47                   | 3.01
Analytical approximation of the distribution function
The DF discussed in the first subsection is typical in the sense that it describes the statistical macrostate of DM particles in a typical massive halo. With future applications in mind we decided to provide an analytical approximation for the energy part f_E(E) of this DF, which can be used as a substitute for the rather involved procedure described in Appendix B. We found that expression (29) reproduces the numerical DF with good accuracy, with the values of the parameters listed in the middle column of Table 1. For completeness we recall that the angular momentum part of the DF is given by (23) with β_0 = 0.09, β_∞ = 0.34 and L_0 = 0.198 L_s. We have verified that the errors of the dispersion, kurtosis and both anisotropies, when this approximate formula is used in the integrals for the velocity moments, do not exceed 5 per cent within the radial range (0.01 r_s, 30 r_s) (see Fig. 11). Note that the general form of expression (29) can also be used effectively to approximate the DF model for other sets of parameters. As a second example we also include, in the third column of Table 1, the parameters of a model with β_0 = 0, β_∞ = 0.5 and L_0 = 0.198 L_s which mimics the anisotropy profile (25) with r_{1/4} = 0.9 r_s.
DISCUSSION
We have studied the DF of DM particles inside the virial spheres of haloes of mass 10^14 - 10^15 M_⊙ formed in the standard ΛCDM cosmological N-body simulation. In the first part of the paper we presented results of the calculation of the DF from the simulation in the form most suitable for comparison with theoretical models. Then we proposed a phenomenological model of the DF. The part of the model that depends on angular momentum involves three free parameters which specify the anisotropy profile, namely its asymptotic values and the scale of transition between them. We demonstrated that this parametrization is sufficient to reproduce accurately the simulation results in terms of velocity moments as well as the DF itself. The only discrepant point we encountered was a small but statistically significant bias of the theoretical kurtosis with respect to its profiles measured from the simulation. This is probably caused by the presence of substructures perturbing the trajectories of low-velocity particles.
In section 5 we showed that the velocity distribution of a typical halo changes from a flat-topped distribution (κ < 3) in the outer part to a peaked one (κ > 3) near the centre. This behaviour was noticed and discussed by others before (e.g. Kazantzidis, Magorrian & Moore 2004;Wojtak et al. 2005). The analysis of the DF presented here suggests that this property of the velocity distribution is correlated with the profile of the anisotropy increasing with r: β(r) profiles growing faster with r imply more rapid growth of the kurtosis towards the centre.
As demonstrated by Taylor & Navarro (2001), the profile of the phase-space density Q(r) = ρ(r)/σ(r)^3 in DM haloes is well fitted by a power-law function. It seems that the status of this relation is as well established as the NFW fit of the density profile. We checked that the Q(r) profiles inferred from the DF model with parameters adjusted to the median anisotropy (see section 5) coincide well with the corresponding power-law functions (see Fig. 12). In this comparison we assumed logarithmic slopes of −1.92 (in the case of the dispersion of the radial velocity) and −1.84 (in the case of the total dispersion), the values obtained from simulations by Dehnen & McLaughlin (2005). The relative residuals of both Q(r) profiles are of the same order as the scatter of points from the simulations in fig. 1 of Dehnen & McLaughlin (2005). Note that this happens when the DF model is tuned to the mean trend of the β(r) parameter. Therefore one could suspect that both relations, the mean profile of the anisotropy and Q(r) ∝ r^{−γ}, are two aspects of some deeper relation. A more general parametrization of the DF might provide some insights towards a more fundamental understanding of this phenomenon.
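The near power-law behaviour of Q(r) is easy to check numerically. The sketch below (our own illustration, not code from the paper) computes Q(r) = ρ/σ_r³ for an isotropic NFW halo, with σ_r from the Jeans equation, in units where ρ_s = r_s = G = 1, and fits the logarithmic slope:

```python
import numpy as np
from scipy.integrate import quad

# NFW density and cumulative mass in units where rho_s = r_s = G = 1
rho = lambda r: 1.0 / (r * (1.0 + r) ** 2)
mass = lambda r: 4.0 * np.pi * (np.log(1.0 + r) - r / (1.0 + r))

def sigma_r(r):
    # isotropic Jeans equation: rho*sigma_r^2(r) = int_r^inf rho(s) M(s) / s^2 ds
    integrand = lambda s: rho(s) * mass(s) / s ** 2
    val, _ = quad(integrand, r, np.inf, limit=200)
    return np.sqrt(val / rho(r))

radii = np.logspace(-2, 1.5, 25)
Q = np.array([rho(r) / sigma_r(r) ** 3 for r in radii])

# logarithmic slope of Q(r); Taylor & Navarro (2001) report values near -1.9
slope = np.polyfit(np.log(radii), np.log(Q), 1)[0]
```

The isotropic case already gives a nearly power-law Q(r) with a slope close to −1.9; with an anisotropic σ_r from the DF model the same check can be repeated for the slopes quoted above.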
Although the whole analysis presented in this paper was done in the framework of the NFW density profile, the equations for the numerical inversion (B12) were derived for an arbitrary density distribution. Using this general form one can immediately obtain a family of DFs with our general anisotropy profiles for any potential-density pair. For the commonly used density profiles it is easy to introduce the phase space units analogous to rs, Vs and Ms in our case. This reduces the role of the parameters of the density profile to scaling properties so that the final DF model would not explicitly depend on them.
Given our very general parametrization of the β(r) profile, our DF model is expected to have some impact on the solution of the classical problem of the mass-anisotropy degeneracy for spherical systems. In order to obtain more reliable estimates of mass profiles, one could assume the anisotropy profile from the simulation and keep the density profile as the only degree of freedom of the DF. One could then apply the maximum likelihood approach to the projected DF, as described e.g. by Mahdavi & Geller (2004). A more advanced and simulation-independent approach would be to treat the anisotropy profile as an unknown quantity, described by the three parameters introduced in our formulation. As a result one would obtain an estimate of the mass profile as well as the anisotropy. Both methods require an additional study of the DF in projection and extensive tests on mock data sets. This will be the subject of our follow-up papers.

Figure 12. The solid lines plot power-law functions with logarithmic slopes from Dehnen & McLaughlin (2005). In the lower panels we show relative residuals.
ACKNOWLEDGMENTS
The simulations have been performed at the Altix of the LRZ Garching. RW and EL are grateful for the hospitality of the Astrophysikalisches Institut Potsdam and the Institut d'Astrophysique de Paris, where part of this work was done. RW thanks A. Knebe for helpful advice and G. Boué for fruitful discussions. This work was partially supported by the Polish Ministry of Science and Higher Education under grant NN203025333 as well as by the Polish-German exchange program of Deutsche Forschungsgemeinschaft and the Polish-French collaboration program of LEA Astro-PF.
APPENDIX A: THE VOLUME OF THE HYPERSURFACE
The volume g(E, L) of the hypersurface S_{EL}(v, r) of constant energy and angular momentum is defined by the integral

g(E, L) = ∫ δ[E − E(v, r)] δ[L − L(v, r)] d³v d³r.   (A1)

Introducing spherical coordinates and changing variables into E, L and radius r one gets

g(E, L) = 8π² L ∮ dr / |v_r|,   (A2)

where v_r is the radial velocity and the integral is equal to the radial period of the orbit. Using the radii of the pericentre r_p and the apocentre r_a, one can rewrite the final formula for g(E, L) in the following way

g(E, L) = 16π² L ∫_{r_p}^{r_a} dr / [2(Ψ(r) − E) − L²/r²]^{1/2}.   (A3)

Integrating g(E, L) over the angular momentum we get the volume g_E(E) of the hypersurface of constant energy

g_E(E) = ∫ g(E, L) dL.   (A4)

Changing the order of the integrals and performing the integral over the angular momentum one obtains

g_E(E) = 16π² ∫_0^{r_max(E)} r² [2(Ψ(r) − E)]^{1/2} dr,   (A5)

where r_max(E) is the apocentre radius of the radial orbit (E = Ψ(r_max)).
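The relation between g(E, L) and g_E(E) can be verified numerically for the NFW potential. The sketch below (our own check, not code from the paper) uses the standard density-of-states expressions g(E, L) = 16π²L ∫ dr/v_r between the turning points and g_E(E) = 16π² ∫ r² [2(Ψ − E)]^{1/2} dr, in units where r_s = 1 and 4πGρ_s r_s² = 1, so that Ψ(r) = ln(1 + r)/r, and confirms that integrating g(E, L) over L reproduces g_E(E):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

psi = lambda r: np.log(1.0 + r) / r          # NFW relative potential, Psi(0) = 1

def vr2(r, E, L):
    # squared radial velocity: 2(Psi - E) - L^2/r^2
    return 2.0 * (psi(r) - E) - (L / r) ** 2

def g_EL(E, L):
    # volume of the constant-(E, L) hypersurface: 16 pi^2 L * int_{rp}^{ra} dr / v_r
    rc = minimize_scalar(lambda r: -vr2(r, E, L), bounds=(1e-6, 1e3), method="bounded").x
    rp = brentq(vr2, 1e-9, rc, args=(E, L))
    ra = brentq(vr2, rc, 1e5, args=(E, L))
    mid, half = 0.5 * (ra + rp), 0.5 * (ra - rp)
    def f(th):  # substitution r = mid + half*sin(th) removes the 1/sqrt endpoint singularities
        v = vr2(mid + half * np.sin(th), E, L)
        return half * np.cos(th) / np.sqrt(v) if v > 0 else 0.0
    val, _ = quad(f, -np.pi / 2, np.pi / 2, limit=200)
    return 16.0 * np.pi ** 2 * L * val

def g_E(E):
    # volume of the constant-energy hypersurface (A5)
    rmax = brentq(lambda r: psi(r) - E, 1e-9, 1e5)
    val, _ = quad(lambda r: r ** 2 * np.sqrt(max(2.0 * (psi(r) - E), 0.0)),
                  0.0, rmax, limit=200)
    return 16.0 * np.pi ** 2 * val

E = 0.3
rmax = brentq(lambda r: psi(r) - E, 1e-9, 1e5)
# maximum angular momentum at energy E (circular orbit)
Lc = np.sqrt(max(-minimize_scalar(lambda r: -2.0 * r ** 2 * (psi(r) - E),
                                  bounds=(1e-6, rmax), method="bounded").fun, 0.0))
Ls = np.linspace(1e-4 * Lc, Lc * (1.0 - 1e-4), 80)
vals = np.array([g_EL(E, L) for L in Ls])
total = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(Ls)))  # trapezoid over L
```

Here `total` should agree with `g_E(0.3)` to within the accuracy of the L-grid, which verifies the change of integration order used to pass from (A3) to (A5).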
Considering a system of finite size V(v, r) in phase space, one has to recalculate g(E, L) with the realistic hypersurface of constant E and L given by the Cartesian product S_{EL}(v, r) × V(v, r). In particular, for a spherical system with a boundary in the form of a sphere of radius r_v, the upper limits of the integrals in (A3) and (A5) must be replaced by min{r_a, r_v} and min{r_max(E), r_v}, respectively.
APPENDIX B: THE ENERGY PART OF THE DISTRIBUTION FUNCTION
The energy part of the DF f_E(E) introduced in section 4 is related to the density profile by

ρ(r) = ∫ f(E, L) d³v.   (B1)

Although the main part of this paper concerns the DF consistent with the NFW profile, we keep a general density ρ(r) within this appendix, so that the final inversion formulae can be applied to any potential-density pair. Changing variables in the integral (B1) into the energy and angular momentum one gets

ρ(r) = 2^{3/2−β_0} π r^{−1} L_0^{1−2β_0} ∫_0^{Ψ(r)} f_E(E) dE ∫_0^{x} λ^{−β_0} (1 + λ)^{β_0−β_∞} (x − λ)^{−1/2} dλ,   (B2)

where x = r²(Ψ − E)/L_0² and λ = L²/(2L_0²). The integral over the λ variable is evaluated analytically, so that (B2) can be rewritten in the form

ρ(r) r^{2β_0} = (2π)^{3/2} 2^{−β_0} [Γ(1 − β_0)/Γ(3/2 − β_0)] ∫_0^{Ψ(r)} f_E(E) (Ψ − E)^{1/2−β_0} K(Ψ, E) dE,   (B3)

with the kernel of the integral given by

K(Ψ, E) = (1 + x)^{−β_∞+β_0} ₂F₁(1/2, β_∞ − β_0; 3/2 − β_0; x/(1 + x)),   (B4)

where ₂F₁ stands for the hypergeometric function. Equation (B3) is a Volterra integral equation of the first kind. In the general case of models with varying anisotropy, when β_∞ ≠ β_0 and L_0 < ∞, it has no analytical solution for f_E(E) due to the complexity of expression (B4). However, as shown by Cuddeford & Louis (1995), this kind of integral equation can quite easily be inverted numerically. Below we adapt their method to our problem. For E → 0 and Ψ ≫ E the integral kernel can encounter a singularity, i.e. K(Ψ, E) ∝ E^{β_∞−β_0}. In order to avoid this behaviour, we define a smooth integral kernel K̂(Ψ, E) which is free of such a feature:

K̂(Ψ, E) = E^{−β_∞+β_0} K(Ψ, E).   (B5)
By analogy we introduce a smooth energy part of the DF which is a regular function for energy approaching 0:

f̂_E(E) = E^{3/2−ν} f_E(E).   (B6)
Using formulae (B5) and (B6) we can rewrite equation (B3) in the following form

ρ̂(r) = C_{β_0} ∫_0^{Ψ(r)} f̂_E(E) K̂(Ψ, E) E^{ν−3/2+β_∞−β_0} (Ψ − E)^{1/2−β_0} dE,   (B8)

where ρ̂ = ρ(r) r^{2β_0} and C_{β_0} stands for all the coefficients in front of the integral in (B3). Following Cuddeford & Louis (1995) we introduce discrete vectors of the potential Ψ_j = jε, radius r_j = r(Ψ_j) and density ρ̂_j = ρ(r_j) r_j^{2β_0}, where j is an integer and ε ≪ 1 is the grid spacing. For any ρ̂_j we can split the integral (B8) into a sum of integrals over the subintervals [(i − 1)ε, iε] with i = 1, …, j (B9). In order to apply a numerical algorithm inverting equation (B9) with respect to f̂_E(E), one has to assume that ε is sufficiently small so that the variations of f̂_E(E) and K̂(jε, E) within subsequent integration ranges are negligible. Then one can approximate both functions by their values at (i − 1/2)ε, i.e. at the middle points of each energy subinterval. This approach was used by Cuddeford & Louis (1995) and favoured over other methods involving higher order interpolation (see e.g. Saha 1992). Applying this approximation to equation (B9) we get

ρ̂_j = C_{β_0} Σ_{i=1}^{j} f̂_{E,i} K̂(jε, (i − 1/2)ε) I_{ij},   (B10)

with f̂_{E,i} = f̂_E((i − 1/2)ε) and the matrix I_{ij} defined in terms of the incomplete beta function B_z(x, y) (B11). As shown by Cuddeford & Louis (1995), the solution of (B10) for f̂_{E,i} can be obtained by evaluating iteratively the expression

f̂_{E,j} = [ρ̂_j/C_{β_0} − Σ_{i=1}^{j−1} f̂_{E,i} K̂(jε, (i − 1/2)ε) I_{ij}] / [K̂(jε, (j − 1/2)ε) I_{jj}],   (B12)

with the initial value f̂_{E,1} given by

f̂_{E,1} = ρ̂_1 / [C_{β_0} K̂(ε, ε/2) I_{11}].   (B13)
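The midpoint discretization of such a Volterra equation of the first kind yields a lower-triangular linear system, which is solved by forward substitution. A minimal sketch of the scheme (our own illustration, using a toy constant kernel rather than the hypergeometric kernel (B4), and a crude cell weight ε in place of the incomplete-beta integrals I_ij):

```python
import numpy as np

def invert_volterra(rho_vals, K, eps):
    """Invert rho(j*eps) = int_0^{j*eps} f(E) K(j*eps, E) dE for f,
    freezing f and K at the cell midpoints (Cuddeford & Louis style)."""
    n = len(rho_vals)
    f = np.zeros(n)
    for j in range(1, n + 1):
        # contribution of the already-determined cells i < j
        s = sum(f[i - 1] * K(j * eps, (i - 0.5) * eps) * eps for i in range(1, j))
        f[j - 1] = (rho_vals[j - 1] - s) / (K(j * eps, (j - 0.5) * eps) * eps)
    return f

# toy test: K = 1 and f(E) = E, so rho(Psi) = Psi^2/2; the midpoint rule is exact here
eps, n = 0.01, 100
rho_vals = np.array([(j * eps) ** 2 / 2.0 for j in range(1, n + 1)])
f = invert_volterra(rho_vals, lambda P, E: 1.0, eps)
```

In the actual inversion the cell integrals of the weight function are evaluated exactly via incomplete beta functions rather than the factor ε used in this toy version.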
Once f_E(E) is calculated in terms of the vector f̂_{E,i}, it is easy to evaluate numerically the integral (C2) and obtain the profile of any velocity moment.
An interesting property of the model is the ratio of any non-vanishing moment of the tangential velocity to the corresponding moment of the radial velocity in the limits of small and large radii. Introducing spherical coordinates in (C1) and performing the integral with the two asymptotes of f_L(L) given by (22), one can show that this ratio is the following function of β_0 or β_∞:

⟨v_θ^{2n}⟩/⟨v_r^{2n}⟩ = Γ(1 + n − β_0) / [Γ(1 − β_0) Γ(1 + n)]   for r → 0,
⟨v_θ^{2n}⟩/⟨v_r^{2n}⟩ = Γ(1 + n − β_∞) / [Γ(1 − β_∞) Γ(1 + n)]   for r → ∞.   (C5)
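As a quick sanity check of the asymptotic moment ratio (C5): for n = 1 the Γ-function expression must reduce to the familiar 1 − β, which defines the anisotropy parameter. A short script (our own check) confirms this:

```python
from math import gamma

def moment_ratio(n, beta):
    # asymptotic <v_theta^{2n}> / <v_r^{2n}> for constant anisotropy beta
    return gamma(1 + n - beta) / (gamma(1 - beta) * gamma(1 + n))

# n = 1 must give exactly 1 - beta
for beta in (-0.5, 0.0, 0.34):
    assert abs(moment_ratio(1, beta) - (1 - beta)) < 1e-12
```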
"year": 2008,
"sha1": "e1ba2d0bb6a11746721db50beab88c56177da6e4",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/388/2/815/3011086/mnras0388-0815.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "e1ba2d0bb6a11746721db50beab88c56177da6e4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Spin polynomial functors and representations of Schur superalgebras
We introduce categories of homogeneous strict polynomial functors, Pol^I_{d,k} and Pol^II_{d,k}, defined on vector superspaces over a field k of characteristic not equal to 2. These categories are related to polynomial representations of the supergroups GL(m|n) and Q(n), respectively. In particular, we prove an equivalence between Pol^I_{d,k}, Pol^II_{d,k} and the category of finite dimensional supermodules over the Schur superalgebra S(m|n, d), Q(n, d) respectively, provided m, n ≥ d. We also discuss some aspects of Sergeev duality from the viewpoint of the category Pol^II_{d,k}.
Introduction
Strict polynomial functors were introduced by Friedlander and Suslin in [FS] as a tool for use in their investigation of rational cohomology of finite group schemes over a field. Let us briefly recall the definition.
Suppose k is an arbitrary field, and let vec k denote the category of finite dimensional k-vector spaces. Also, let sch k be the category of all schemes over k. Then, by identifying each hom-space with its associated affine scheme, we obtain an sch k -enriched category vec k (in the sense of [Ke]) with the same objects as vec k . Although stated somewhat differently in [FS,Definition 2.1], a strict polynomial functor may be defined as an sch k -enriched functor from vec k to itself. From this perspective, it is clear that a strict polynomial functor T yields, by evaluation at any V ∈ vec k , a polynomial representation T (V ) of the affine group scheme GL(V ). Let us denote by pol d (GL(V )) the category of finite dimensional polynomial representations of GL(V ) which are homogeneous of degree d. Then a strict polynomial functor T is said to be homogeneous of degree d if T (V ) ∈ pol d (GL(V )) for all V ∈ vec k . We denote by P d the category of all such homogeneous strict polynomial functors. The morphisms in P d are sch k -enriched natural transformations.
Assume that n ≥ d. Then evaluation at V = k^n in fact gives an equivalence of categories P_d ≃ pol_d(GL_n). This follows from the definition of the Schur algebra S(n, d) in terms of the coordinate ring of GL_n (as in Green's monograph [G]) and [FS, Theorem 3.2], which provides an equivalence between P_d and the category of finite dimensional modules over S(n, d). We remark that there is an alternate definition of the category P_d which makes the relationship with S(n, d)-modules more transparent (see e.g. [Kr, P]). In this new definition, sch_k-enriched functors are replaced by k-linear functors defined on a category of divided powers.

Date: February 4, 2013. This work was supported by NRF grant #2011-0027952 and NRF grant #2012-005700.
The aim of this paper is to provide an analogue of [FS, Theorem 3.2] for Schur superalgebras. More specifically, suppose now that k is a field of characteristic p ≠ 2. In this context, the Schur superalgebras S(m|n, d) and Q(n, d) were studied by Donkin [D] and by Brundan and Kleshchev [BrK1], respectively. In both works a classification of the finite dimensional irreducible supermodules over the corresponding Schur superalgebra was obtained. (In [BrK1] the field k is assumed to be algebraically closed.) In this paper, we introduce categories of strict polynomial functors defined on vector superspaces, and we show that each such category is equivalent to the category of finite dimensional supermodules over one of the above Schur superalgebras. To define strict polynomial functors on superspaces, it is more convenient for us to follow the approach involving categories of divided powers. In the last section, however, we provide a definition of strict polynomial functors as "enriched functors" which is closer to Friedlander and Suslin's original definition.
The contents of the paper are as follows. In Section 2, we give necessary preliminary results concerning superalgebras and supermodules. In Section 3, we introduce the categories Pol^†_{d,k} († = I, II) of homogeneous strict polynomial functors, whose objects are k-linear functors defined on categories of vector superspaces. We also discuss some of the usual facets of polynomial functors, such as Kuhn duality and Yoneda's lemma, in this new context. (See [Kr, P, T2] for descriptions of the corresponding classical notions.)
In Section 4, we prove our main result, Theorem 4.2, which gives an equivalence between Pol^(I)_d, Pol^(II)_d and the category of finite dimensional supermodules over S(m|n, d), Q(n, d) respectively, for m, n ≥ d. We are then able to obtain a classification of irreducible objects in both categories using the classifications of [D] and [BrK1]. As another application of Theorem 4.2, we give an exact functor from the category Pol^(II)_d to the category of finite dimensional left supermodules over the Sergeev superalgebra W(d). This functor may be viewed as a categorical analogue of Sergeev duality, as described by Sergeev in [Ser] when p = 0 and by Brundan and Kleshchev [BrK1] in arbitrary characteristic. Since the representation theory of W(d) is closely related to that of the spin symmetric group algebra kS_d^- (cf. [BrK1]), we may refer to objects of Pol^(II)_d as spin polynomial functors. In Section 5, we conclude by describing categories Pol^(I)_d and Pol^(II)_d consisting of homogeneous ssch_k-enriched functors, where ssch_k denotes the category of all superschemes over k. This definition may be viewed as a "super analogue" of Friedlander and Suslin's original definition of strict polynomial functors. In Theorem 5.4 we show that our two definitions of strict polynomial functors are equivalent. One of the benefits of the classical approach is that the relationship between strict polynomial functors and polynomial representations of the supergroups GL(m|n) and Q(n) appears naturally from the definition of ssch_k-enriched functors.
Finally, let us mention our original motivation for considering categories of polynomial functors defined on vector superspaces. In [HTY], J. Hong, A. Touzé and O. Yacobi showed that the category of all classical polynomial functors defined over an infinite field k of characteristic p provides a categorification of level 1 Fock space representations (in the sense of Chuang and Rouquier) for an affine Kac-Moody algebra g of type A_∞ (if p = 0) or of type A^(1)_{p−1} (in case p > 0). We conjecture that the category of all spin polynomial functors defined over an algebraically closed field k of characteristic p ≠ 2 provides a categorification of certain level 1 Fock spaces for an affine Kac-Moody algebra.

Acknowledgements. The author wishes to thank Masaki Kashiwara and Myungho Kim for many helpful conversations and suggestions.
Superalgebras and supermodules
In this section, we give preliminary results on superalgebras and supermodules needed for the remainder of the paper. See [BrK1], [K], [L, Ch. 1] and [Man, Ch. 3] for more details, although our notation sometimes differs from these references.
2.1. Preliminaries. Let us fix a field k, which we assume is of characteristic p ≠ 2. A vector superspace is a Z_2-graded k-vector space M = M_0 ⊕ M_1. We denote the degree of a homogeneous vector v ∈ M by |v| ∈ Z_2. A subsuperspace of M is a subspace N of M such that N = (N ∩ M_0) ⊕ (N ∩ M_1). We may consider the underlying ordinary vector space of a given superspace M, and we write sdim(M) = (m, n) if dim(M_0) = m and dim(M_1) = n.
Given a pair of vector superspaces M, N we view the direct sum M ⊕ N and the tensor product M ⊗ N as vector superspaces by setting (M ⊕ N)_i = M_i ⊕ N_i and (M ⊗ N)_i = ⊕_{j+l=i} M_j ⊗ N_l for i ∈ Z_2. We also consider the vector space Hom(M, N) = Hom_k(M, N) of all k-linear maps of M into N as a superspace by letting Hom(M, N)_i consist of the homogeneous maps of degree i for i ∈ Z_2, i.e. the maps f : M → N such that f(M_j) ⊆ N_{i+j} for j ∈ Z_2. The elements of Hom(M, N)_0 are called even linear maps, and the elements of Hom(M, N)_1 are called odd. The k-linear dual M^∨ = Hom(M, k) is a superspace by viewing k as a vector superspace concentrated in degree 0. Let svec_k denote the category of all finite dimensional k-vector superspaces with arbitrary linear maps as morphisms.
If M ∈ svec_k, then for f ∈ M^∨ and v ∈ M, we write ⟨f, v⟩ to denote the pairing between M and M^∨. We identify M with (M^∨)^∨ as superspaces by setting ⟨v, f⟩ = (−1)^{|f||v|} ⟨f, v⟩ for homogeneous f ∈ M^∨ and v ∈ M. A superalgebra is a superspace A with the additional structure of an associative unital k-algebra such that A_i A_j ⊆ A_{i+j} for i, j ∈ Z_2. By forgetting the grading we may consider any superalgebra A as an ordinary algebra. A superalgebra homomorphism ϑ : A → B is an even linear map that is an algebra homomorphism in the usual sense; its kernel is a superideal, i.e., an ordinary two-sided ideal which is also a subsuperspace. An antiautomorphism τ : A → A of a superalgebra A is an even linear map which satisfies τ(ab) = τ(b)τ(a).
Given two superalgebras A and B, we view the tensor product of superspaces A ⊗ B as a superalgebra with multiplication defined by We note that A ⊗ B ∼ = B ⊗ A, an isomorphism being given by 2.2. Tensor powers. Let M be a vector superspace. The tensor superalgebra T * M is the tensor algebra regarded as a vector superspace. It is the free associative (Z-graded) superalgebra generated by M . The symmetric superalgebra S * M is the quotient of T * M by the super ideal I = x ⊗ y − (−1) |x||y| y ⊗ x; x, y ∈ M . Since I is a Z-graded homogeneous ideal, there exists a gradation S * M = d≥0 S d M . Now we may view the ordinary symmetric algebra Sym * M 0 as a superspace concentrated in degree zero. We may also view the ordinary exterior algebra Λ * M 1 as a superspace by reducing its Z-grading mod 2Z. In this way both Sym * M 0 and Λ * M 1 may be regarded as Z-graded superalgebras. One may check that we have a Z-graded superalgebra isomorphism: A superalgebra A is called commutative if ab = (−1) |a||b| ba for all a, b ∈ A. The superalgebra S * M is the free commutative (Z-graded) superalgebra generated by M .
2.3. Divided powers. There is a unique (even) right action of the symmetric group S_d on the tensor power M^⊗d such that each transposition (i, i+1) acts by

(v_1 ⊗ · · · ⊗ v_d) · (i, i+1) = (−1)^{|v_i||v_{i+1}|} v_1 ⊗ · · · ⊗ v_{i+1} ⊗ v_i ⊗ · · · ⊗ v_d

for any v_1, . . . , v_d ∈ M with v_i, v_{i+1} Z_2-homogeneous. Denote the invariant subsuperspace of this action by Γ^d M = (M^⊗d)^{S_d}.
Now the symmetric power S^d M is the coinvariant superspace (M^⊗d)_{S_d} of this action.
Hence, given arbitrary vector superspaces V, W there are natural even isomorphisms, where V and W are considered as trivial S_d-modules. There is also a right action of S_d on (M^⊗d)^∨, dual to the action on M^⊗d. Now let Γ*M be the Z-graded superspace ⊕_{d≥0} Γ^d M. Also let D*M_0 denote the ordinary divided powers algebra of the vector space M_0 (cf. [B]). Viewed as a vector superspace concentrated in degree zero, D*M_0 is a Z-graded superalgebra. Also note that we have a natural embedding of superspaces Λ^d M_1 ↪ (M_1)^⊗d. We then have an even isomorphism of Z-graded superspaces

Γ*M ≅ D*M_0 ⊗ Λ*M_1.   (4)

The isomorphism (4) defines a superalgebra structure on Γ*M which we call the divided power superalgebra.
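The signed right action of S_d on M^⊗d can be made concrete. The sketch below (our own illustration) represents a homogeneous simple tensor by a tuple of (label, parity) pairs and checks that adjacent transpositions square to the identity and satisfy the braid relation:

```python
def act(vec, i):
    # right action of s_{i+1} = (i+1, i+2) on a linear combination of simple
    # tensors: swap slots i, i+1 with sign (-1)^{|v_i||v_{i+1}|}
    out = {}
    for key, c in vec.items():
        a, b = key[i], key[i + 1]
        sign = -1 if (a[1] == 1 and b[1] == 1) else 1
        new = key[:i] + (b, a) + key[i + 2:]
        out[new] = out.get(new, 0) + sign * c
    return out

# an odd-odd-odd simple tensor v1 (x) v2 (x) v3
v = {((1, 1), (2, 1), (3, 1)): 1}
assert act(act(v, 0), 0) == v                                   # s_i^2 = 1
assert act(act(act(v, 0), 1), 0) == act(act(act(v, 1), 0), 1)   # braid relation
```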
2.4. Supermodules. Let A be a superalgebra. A left A-supermodule is a superspace M which is a left A-module in the usual sense and satisfies A_i M_j ⊆ M_{i+j} for i, j ∈ Z_2.
One may similarly define right A-supermodules. A homomorphism ϕ : V → W of left A-supermodules V and W is a (not necessarily homogeneous) linear map such that ϕ(av) = (−1)^{|ϕ||a|} a ϕ(v) for all homogeneous a ∈ A, v ∈ V. We denote by A smod the category of finite dimensional left A-supermodules with A-homomorphisms. A homomorphism ϕ : V → W of right A-supermodules V and W is a (not necessarily homogeneous) linear map such that ϕ(va) = ϕ(v)a for all a ∈ A, v ∈ V. Let smod A denote the category of finite dimensional right A-supermodules with A-homomorphisms.
2.5. Parity change functor. Suppose V is a left or right A-supermodule.
Then define a new supermodule ΠV which is the same vector space as V but with the opposite Z_2-grading. For right supermodules, the new right action is the same as in V. For left supermodules, the new left action of a ∈ A on v ∈ ΠV is defined in terms of the old one by a · v := (−1)^{|a|} av. On a morphism f, Πf is the same underlying linear map as f. Let us write k^{m|n} = k^m ⊕ (Πk)^n.
Examples 2.1. We have the following examples of finite dimensional associative superalgebras.
(i) If M is a superspace, then End(M) = Hom_k(M, M) is a superalgebra. In particular, we write M_{m,n} = End(k^{m|n}).
(ii) Let V ∈ svec_k, and suppose J is a degree one involution in End(V). This is possible if and only if dim V_0 = dim V_1. Let us consider the superalgebra Q(V, J) = {T ∈ End(V) : TJ = JT}. Suppose that sdim V = (n, n), and let {v_1, . . . , v_n} (resp. {v'_1, . . . , v'_n}) be a basis of V_0 (resp. V_1). Let J_V be the unique involution in End_k(V) such that J_V v_i = v'_i for 1 ≤ i ≤ n. Then we may write elements of Q(V, J_V) with respect to the basis {v_1, . . . , v_n, v'_1, . . . , v'_n} as matrices of the form

(A B; B A),   (5)

where A, B are n × n matrices, with A = 0 for odd endomorphisms and B = 0 for even ones. Suppose that k is algebraically closed. Recall (cf. [K, Ch. 12]) that all odd involutions J ∈ End(V) are then mutually conjugate (by an invertible element of End(V)_0). Hence, any superalgebra Q(V, J) is isomorphic to the superalgebra Q_n consisting of all matrices of the form (5).

(iii) The Clifford superalgebra C(d) is the superalgebra generated by odd elements c_1, . . . , c_d subject to the relations c_i² = 1 for i = 1, . . . , d and c_i c_j = −c_j c_i for all i ≠ j. There is an isomorphism C(d_1 + d_2) ≅ C(d_1) ⊗ C(d_2) defined by mapping c_i → c_i ⊗ 1 and c_{d_1+j} → 1 ⊗ c_j, for 1 ≤ i ≤ d_1 and 1 ≤ j ≤ d_2. Hence, we have C(d) ≅ C(1) ⊗ · · · ⊗ C(1) (d copies).
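A concrete matrix model of C(d) (a Jordan-Wigner-style construction, our own illustration, not taken from the paper) lets one verify the defining relations numerically:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # realizes the generator of a C(1) factor
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def clifford_generators(d):
    # c_i = Z^{(i-1)} (x) X (x) I^{(d-i)}: each squares to 1, pairwise anticommute
    gens = []
    for i in range(d):
        mats = [Z] * i + [X] + [I2] * (d - 1 - i)
        c = mats[0]
        for m in mats[1:]:
            c = np.kron(c, m)
        gens.append(c)
    return gens

cs = clifford_generators(3)
```

The Z factors to the left of X supply exactly the signs needed so that distinct generators anticommute, mirroring the sign rule in the super tensor product C(1) ⊗ · · · ⊗ C(1).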
2.6. Categories enriched over svec_k. We say a category V is an svec_k-enriched category if the hom-sets hom_V(V, W) (V, W ∈ V) are finite dimensional k-superspaces while composition is bilinear and even. I.e., if U, V, W ∈ V, then composition induces an even linear map hom_V(V, W) ⊗ hom_V(U, V) → hom_V(U, W). Let V_ev denote the subcategory of V consisting of the same objects but only even homomorphisms.
For a superalgebra A, the categories A smod and smod A are naturally svec_k-enriched categories. Furthermore, the subcategories (A smod)_ev and (smod A)_ev are abelian categories in the usual sense. This allows us to make use of the basic notions of homological algebra by restricting our attention to only even morphisms. For example, by a short exact sequence in A smod (resp. smod A), we mean a sequence with all the maps being even. All functors between the svec_k-enriched categories which we consider will send even morphisms to even morphisms, so they will give rise to corresponding functors between the underlying even subcategories. Now if V is an svec_k-enriched category, let V^− denote the category with the same objects and morphisms as V but with the modified composition law f ∘' g = (−1)^{|f||g|} f ∘ g. Given a superalgebra A, also define a new superalgebra A^−, with the same elements as A and the modified multiplication law a · b = (−1)^{|a||b|} ab. Notice that for any V ∈ V, the superspace end_{V^−}(V) is then the superalgebra end_V(V)^−.

2.7. Schur's lemma. It is possible that an irreducible A-supermodule may become reducible when considered as an A-module. We say that an irreducible A-supermodule V is of type M if it remains irreducible as an A-module, and otherwise we say that V is of type Q. We have the following criterion.
Lemma 2.3 (Schur's lemma). Suppose A is a superalgebra, and let V be a finite dimensional irreducible left A-supermodule. Then End_A(V) is one dimensional if V is of type M, and two dimensional if V is of type Q.

Example 2.4. The superspace k^{m|n} is naturally an irreducible left M_{m,n}-supermodule of type M. On the other hand, the superspace V = k^{n|n} is naturally an irreducible left Q_n-supermodule. Since dim End_{Q_n}(V) > 1, it follows that V is of type Q. This explains the given names for the types.
Given a finite dimensional superalgebra A and some V ∈ A smod (resp. smod A ), we have a natural isomorphism of vector superspaces. Let A be a superalgebra. A subsupermodule of a left (resp. right) Asupermodule is a left (resp. right) A-submodule, in the usual sense, which is also a subsuperspace. A left (resp. right) A-supermodule is irreducible if it is non-zero and has no non-zero proper subsupermodules. We say that a left (resp. right) A-supermodule is completely reducible if it can be decomposed as a direct sum of irreducible subsupermodules. Call A simple if A has no non-trivial superideals, and a semisimple superalgebra if A is completely reducible viewed as a left A-supermodule. Equivalently, A is semisimple if every left A-supermodule is completely reducible. We have: Theorem 2.5. Let A be a finite dimensional superalgebra. The following are equivalent: (i) A is semisimple; (ii) every left (resp. right) A-supermodule is completely reducible; (iii) A is a direct product of finitely many simple superalgebras.
Example 2.6. The Clifford superalgebra C(1) may be realized as the superalgebra of 2 × 2 matrices of the form

(a b; b a),   a, b ∈ k.

The generator c_1 of C(1) corresponds to the matrix J_1 = (0 1; 1 0). One may check that C(1) is a simple superalgebra with a unique irreducible right (resp. left) supermodule up to isomorphism. In fact, C(1) is an irreducible supermodule over itself with respect to right (resp. left) multiplication, and we denote this supermodule by U_r(1) (resp. U_l(1)). Now suppose V (resp. V') is a finite dimensional C(1)-supermodule. Then we have sdim(V) = (n, n) and sdim(V') = (n', n'), and there exists a basis of V (resp. V') such that c_1 ∈ C(1) acts on V (resp. V') via multiplication by the matrix

J_N = (0 I_N; I_N 0),

where I_N is the N × N unit matrix, for N = n, n' respectively. Now let V, W ∈ C(1) smod (resp. smod C(1)). As mentioned above, we may assume that sdim(V) = (m, m) (resp. sdim(W) = (n, n)) for some m, n ∈ Z_{≥0}. By equation (10), we may choose respective bases of V and W such that Hom_{C(1)}(V, W) consists of all matrices of the form (11), where A, B are n × m matrices over k, and A = 0 (resp. B = 0) for odd (resp. even) homomorphisms.
Remark 2.7. Notice that C(1) is commutative as an ordinary algebra even though C(1) is not a commutative superalgebra. Hence, the objects of C(1) smod can be identified with the objects of smod C(1). It can be checked using (11) that we have an equivalence (C(1) smod)^− ≃ smod C(1), given by mapping V → V and ϕ → ϕ^− for all V, W ∈ (C(1) smod)^− and ϕ ∈ Hom_{C(1)}(V, W).
Remark 2.8. Suppose that V ∈ C(1) smod and sdim(V) = (n, n). Then it is clear from (11) that we have a superalgebra isomorphism Q_n ≅ End_{C(1)}(V). Now suppose that there is a √−1 ∈ k. If V' ∈ smod C(1) and again sdim(V') = (n, n), then it is not difficult to check that we also have an isomorphism Q_n ≅ End_{C(1)}(V') of superalgebras.
2.9. Wreath products. Suppose A is an associative superalgebra. Notice that the right action of σ ∈ S_d on the tensor power A^⊗d is in fact a superalgebra automorphism. Denote by A ≀ S_d the vector superspace A^⊗d ⊗ kS_d (where the group algebra kS_d is viewed as a superspace concentrated in degree zero). We then consider A ≀ S_d as a superalgebra with multiplication determined by the S_d-action on A^⊗d.

2.10. Tensor products of supermodules. Given left supermodules V and W over arbitrary superalgebras A and B respectively, the tensor product V ⊗ W is naturally an A ⊗ B-supermodule, and the outer tensor product ϕ ⊠ ϕ' of homomorphisms ϕ : V → W and ϕ' : V' → W' is given by

(ϕ ⊠ ϕ')(v ⊗ v') = (−1)^{|ϕ'||v|} ϕ(v) ⊗ ϕ'(v').   (13)

(The previous statement holds also for right supermodules, i.e. the outer tensor product of right supermodule homomorphisms is given by the same formula (13).) As a particular example, if M, M', N, N' ∈ svec_k, then (13) gives a natural even isomorphism Hom(M, N) ⊗ Hom(M', N') ≅ Hom(M ⊗ M', N ⊗ N'). More generally, we have the following.
Proof. It suffices to consider d = 2. The map f ⊗ g → f ⊠ g is clearly injective. To check that it is surjective we may use Lemma 2.3 together with Theorem 2.5 and [K,Lemma 12.2.13].
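The sign in the outer tensor product can be checked on matrices. The sketch below (our own illustration) assumes the standard super sign convention (ϕ ⊠ ψ)(v ⊗ w) = (−1)^{|ψ||v|} ϕ(v) ⊗ ψ(w) for formula (13), and verifies the interchange law (ϕ ⊠ ψ)(ϕ' ⊠ ψ') = (−1)^{|ψ||ϕ'|} (ϕϕ') ⊠ (ψψ') on random homogeneous maps of k^{1|1}:

```python
import numpy as np

rng = np.random.default_rng(0)
par = np.array([0, 1])                  # basis parities of k^{1|1}

def rand_hom(p):
    # random homogeneous map of parity p on k^{1|1}
    m = rng.standard_normal((2, 2))
    mask = (par[:, None] == (par[None, :] + p) % 2)
    return m * mask

def boxtimes(phi, p_phi, psi, p_psi):
    # matrix of phi (x) psi with the super sign: entry ((i,j),(k,l)) is
    # (-1)^{p_psi * |v_k|} phi[i,k] psi[j,l]
    n = 2
    out = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    sign = (-1) ** (p_psi * par[k])
                    out[i * n + j, k * n + l] = sign * phi[i, k] * psi[j, l]
    return out

p1, p2, q1, q2 = 1, 1, 1, 0             # parities of phi, psi, phi', psi'
phi, psi = rand_hom(p1), rand_hom(p2)
phi2, psi2 = rand_hom(q1), rand_hom(q2)

lhs = boxtimes(phi, p1, psi, p2) @ boxtimes(phi2, q1, psi2, q2)
rhs = (-1) ** (p2 * q1) * boxtimes(phi @ phi2, (p1 + q1) % 2,
                                   psi @ psi2, (p2 + q2) % 2)
```

The parities p2 = q1 = 1 are chosen so the interchange sign (−1)^{|ψ||ϕ'|} = −1 is actually exercised.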
Strict polynomial functors of types I and II
We now introduce the categories Pol^†_{d,k} consisting of homogeneous strict polynomial functors. Such polynomial functors are realized as k-linear functors between an appropriate pair of svec_k-enriched categories.
3.1. Categories of divided powers. Suppose that B is a simple finite dimensional superalgebra, and let V = smod B. We then define a new category of divided powers Γ^d V with the same objects as V. In order to define the composition law, we make use of the following lemma.
Proof. By Lemma 2.10, V^⊗d ∈ smod B. One may check that for any σ ∈ S_d the action satisfies (17), where S_d acts on B^⊗d on the right as in the definition of B ≀ S_d. Now given a homomorphism ϕ ∈ Hom_B(V^⊗d, W^⊗d), it follows from (17) that ϕ(v.σ) = ϕ(v).σ for any v ∈ V^⊗d. It is also not difficult to check that the isomorphism (15) is in fact an isomorphism of S_d-modules. Hence we have a canonical isomorphism (18). Using the isomorphism in the previous lemma for any V, we obtain an identification with S(m|n, d), the Schur superalgebra defined in [D].
where Q(n, d) is the Schur superalgebra defined in [BrK1]. Given S, T ∈ Pol Notice that for any M ∈ svec k , we have a canonical isomorphism Let us identify smod C(1) as a subcategory of svec k . Since we may view kS d as a subsuperalgebra of W(d), there is a restriction functor from smod W(d) to smod kS d . This in turn yields an even k-linear functor, Res : Γ d Q → Γ d M , which acts as the identity on objects and by restriction on morphisms. Hence, composition yields a functor − • Res : Pol (i) We use the same notation, Id = Id • Res : smod C(1) → svec k , to denote the restriction of the identity functor. Clearly Id∈ Pol (II) 1 . Also, note that we have an even isomorphism to the same underlying map regarded as an element of Hom k (V ⊗d , W ⊗d ).
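The displayed endomorphism-algebra descriptions preceding the citations of [D] and [BrK1] were lost. For orientation, the standard definitions they refer to are (our reconstruction, hedged; here W(d) denotes the Sergeev superalgebra C(1) ≀ S d mentioned in the text):

```latex
S(m|n,d) \;:=\; \operatorname{End}_{kS_d}\!\big((k^{m|n})^{\otimes d}\big),
\qquad
Q(n,d) \;:=\; \operatorname{End}_{W(d)}\!\big((U(1)^{\oplus n})^{\otimes d}\big).
```

These are the type I and type II Schur superalgebras, respectively; the restriction functor Res : Γ d Q → Γ d M described above corresponds to viewing kS d inside W(d).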
Given V ∈ smod C(1) , notice that we have a canonical isomorphism.
3.4. Duality. Suppose τ is an antiautomorphism of a superalgebra B, and let V ∈ smod B . Then we can make the dual space V ∨ into a right B-supermodule by defining. We denote the resulting supermodule by V τ,∨ . If V, W ∈ smod B and ϕ ∈ Hom C(1) (V, W ), then let ϕ ∨ : W τ,∨ → V τ,∨ be defined by, and we furthermore have a natural isomorphism. Given any svec k -enriched category V, let us write V op,− = (V − ) op to denote the opposite category of V − . Now let V = smod B . Then (20) gives an equivalence of categories. An antiautomorphism τ of B induces an antiautomorphism τ 2 of B ⊗ B by setting τ 2 (a ⊗ b) = (−1) |a||b| τ (a) ⊗ τ (b). In general, this gives an antiautomorphism τ d of B ⊗d for all d ≥ 1. If V, W ∈ smod B , we have a canonical isomorphism of B ⊗ B-supermodules. Suppose now that B is a simple finite dimensional superalgebra. Let us fix generators s i = (i i + 1) ∈ S d for i = 1, . . . , d − 1. Then τ d extends uniquely to an antiautomorphism of B ≀ S d , also denoted τ d , such that τ d (s i ) = s i for i = 1, . . . , d − 1. Thus the equivalence (21), with respect to B ≀ S d and τ d , induces a corresponding equivalence. Example 3.4. If B = k or C(1), then τ (a) = a ( ∀ a ∈ B) defines an antiautomorphism of B. Hence we have equivalences for † = I or II, respectively.
As an example, for V ∈ Γ d M (resp. Γ d Q ), we define S d,V := (Γ d,V ) # . In particular, let us write S d,m|n = S d,k m|n and S d,n = S d,U (1) ⊕n . It then follows from equation (3) that we have canonical isomorphisms 3.5. Yoneda's lemma. We have the following analogue of Yoneda's lemma in our setting. 3.6. Tensor products. Given nonnegative integers d and e, we have an embedding S d × S e ֒→ S d+e . This induces an embedding for any M ∈ svec k , given by the composition of the following maps Now we may consider the categories Γ d M ⊗ Γ e M , Γ d Q ⊗ Γ e Q whose objects are the same as svec k , smod C(1) and whose morphisms are of the form for M, N ∈ svec k and V, W ∈ smod C(1) . Then, one may show that (24) yields embeddings of categories Now suppose S ∈ Pol
Strict polynomial functors, Schur superalgebras and Sergeeev duality
We show that the categories of strict polynomial functors of types I and II defined above are equivalent to categories of supermodules for the Schur superalgebras S(m|n, d) and Q(n, d), respectively. We then describe a functorial analogue of Sergeev duality for type II strict polynomial functors.
Equivalences of categories. Let
which belongs to (M ⊗t 1 ) St .
Proof. It follows from (4) that we have isomorphisms of superspaces is ; I) 0 ; |I| = k and 1 ≤ i 1 < · · · < i s ≤ m} is a basis of Γ k M 0 . It is also not difficult to verify that (1) j l ) 1 ; 1 ≤ j 1 < · · · < j l ≤ n} is a basis of Γ l M 1 . The lemma then follows from (26).
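The bases exhibited in the proof amount to the following decomposition of divided powers of a superspace M = M 0 ⊕ M 1 (divided powers on the even part, exterior powers on the odd part). We record it here, as a hedged reconstruction, since the displayed formulas were lost:

```latex
\Gamma^d M \;\cong\; \bigoplus_{k+l=d} \Gamma^k M_{\bar 0} \otimes \Lambda^l M_{\bar 1},
\qquad\text{so that}\qquad
\dim \Gamma^d\big(k^{m|n}\big) \;=\; \sum_{k+l=d} \binom{m+k-1}{k}\binom{n}{l}.
```

This matches the indexing in the proof: bases of Γ k M 0 are labelled by multisets drawn from {1, . . . , m}, and bases of Λ l M 1 by strictly increasing tuples 1 ≤ j 1 < · · · < j l ≤ n.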
We are now ready to prove the main theorem.
Theorem 4.2. Assume m, n ≥ d. Then evaluation on k m|n , U (1) ⊕n yields equivalences of categories: Proof. We prove only the second equivalence, since the proof of the first equivalence is similar. Recall that According to Proposition A.1, it suffices to show that the map induced by composition, is surjective for all V, W ∈ Γ d Q . From Example 2.6 in Section 2, it follows that for any r ∈ Z 2 there exist bases (x (r) (j, i)), (y (r) (k, j)) and (z (r) (k, i)) of Hom C(1) (V, U (1) ⊕n ) r , Hom C(1) (U (1) ⊕n , W ) r and Hom C(1) (V, W ) r respectively, such that: for r, r ′ ∈ Z 2 , where δ j 1 ,j 2 is the Kronecker delta.
From the previous theorem and the classifications given in [D], [BrK1] we obtain the following corollary. By a partition we mean an infinite nonincreasing sequence λ = (λ 1 , λ 2 , . . . ) of nonnegative integers such that the sum |λ| = Σ i λ i is finite. Let P denote the set of all partitions.
4.2. Spin polynomial functors and Sergeev duality. In this section we limit our attention to the objects T ∈ Pol (II) d . We may refer to such strict polynomial functors as spin polynomial functors. The explanation for this term is given by Theorem 4.4 below, which describes a relationship between Pol (II) d and finite dimensional representations of the Sergeev superalgebra, which is "super equivalent" to the spin symmetric group algebra k − S d (cf. [BrK1]).
Let us denote Pol (II) = ⊕ d≥0 Pol (II) d . There is a bifunctor Pol (II) × Pol (II) → Pol (II) given by the (external) tensor product − ⊗ − : Pol (II) d × Pol (II) e → Pol (II) d+e , defined in Section 2.5.
Suppose M, N ∈ svec k . Then Γ * ( ) satisfies the exponential property Γ * (M ⊕ N ) ≃ Γ * (M ) ⊗ Γ * (N ), i.e. Γ d (M ⊕ N ) ≃ ⊕ e+f =d Γ e M ⊗ Γ f N , which follows from (4) and the corresponding properties for D * ( ) and Λ * ( ). It follows from (26) and (29) that
Recall the objects Γ d,n ∈ Pol (II) d which are projective by Yoneda's lemma (see Section 3). It follows from (30) that we have a decomposition of strict polynomial functors. Now let Λ(n, d) denote the set of all tuples λ = (λ 1 , . . . , λ n ) ∈ (Z ≥0 ) n such that Σ i λ i = d. Given λ ∈ Λ(n, d), we will write Γ λ = Γ λ 1 ,1 ⊗ · · · ⊗ Γ λn,1 . By (31) and induction, we have a canonical isomorphism. It follows that the objects Γ λ are projective in Pol (II) d .
(ii) There is a canonical isomorphism of superalgebras: (iii) We have an exact functor
Remark 4.5. One may refer to the functor in Theorem 4.4.(iii) as the Sergeev duality functor. A similar functor related to classical Schur-Weyl duality was studied in [HY] in the context of g-categorification.
Categories of ssch k -enriched functors
In this section, we provide an alternate definition of strict polynomial functors which is a 'super analogue' of Friedlander and Suslin's original definition [FS,Definition 2.1]. We also introduce categories Pol (I) d and Pol (II) d whose objects are homogeneous ssch k -enriched functors between a pair of ssch k -enriched categories. Familiarity with the notation and material from Appendix B will be assumed throughout this section. 5.1. Definition of ssch k -enriched functors. Recall that we may identify ssch k as a full subcategory of the functor category Fct(salg k , sets). Given superschemes X, Y ∈ ssch k , the functor X×Y is again a superscheme. Let I 0 be a constant functor such that I 0 (A) = {0} for all A ∈ salg k . Then I 0 is an affine superscheme with k[I 0 ] = k. The monoidal structure on the category sets with respect to direct product induces a corresponding (symmetric) monoidal structure on ssch k , such that I 0 is an identity element.
Let X, Y ∈ ssch k , with X an affine superscheme. An analogue of [Jan, I.1.3] (Yoneda's lemma for ordinary schemes) gives a bijection. Let B be an associative superalgebra, and suppose U, V, W ∈ B smod. Then, there is a natural transformation given by the isomorphism (36) and composition of A-linear maps, for all A ∈ salg k . We also have for each V ∈ svec B a natural transformation which is the element of hom ssch k (I 0 , End B (V ) a ) mapped onto Id V ∈ End B (V ) 0 under the bijection (33). It then may be checked that we obtain an ssch k -enriched category B smod (in the sense of [Ke]) which has the same objects as B smod and hom-objects Hom B (V, W ) a .
Definition 5.1. Suppose B is an associative superalgebra. Let V = B smod, and let V denote the corresponding ssch k -enriched category. An ssch k -enriched functor (or ssch k -functor) consists of an assignment and a morphism of superschemes such that the following two diagrams commute for all U, V, W ∈ V: and with horizontal maps being given by composition in V and svec k , respectively.
The categories Pol (I) d and Pol (II) d
Notice that if f : M → N is an even linear map of vector superspaces, then f may be identified with the associated natural transformation η f : M a → N a which is given by the k-linear maps Definition 5.2. Let V = B smod, and let V = B smod. Suppose that S, T : V → svec k are both ssch k -functors. Then a ssch k -natural transformation, α : S → T , is defined to be a collection of even k-linear maps α V : S(V ) → T (V ) such that the following diagram commutes for all V, W ∈ V: where we have identified the even linear maps, α W • − and − • α V , with their corresponding natural transformations as described in the preceding paragraph. Denote by Fct ssch k (V, svec k ) the category of all ssch k -functors, T : V → svec k , and ssch k -natural transformations.
Let V = B smod, and suppose V ∈ B smod. Given T ∈ Fct ssch k (V, svec k ) consider the algebraic supergroup G = GL B,V and recall that End B,V = End B (V ) a . Then, by the definition of ssch k -functor, the induced natural transformation T V,V : End B,V → End k,T (V ) restricts to a natural transformation of supergroups, which preserves identity and products. Hence η T,V is a representation of the supergroup G.
Now T (V ) may also be considered as a G-supermodule with a corresponding structure map ∆ T,V : T (V ) → T (V ) ⊗ k [G].
Notice that for any M, N ∈ svec k , Yoneda's lemma gives a canonical isomorphism for the corresponding affine superschemes. Using (34), let us identify the natural transformation T V,V with an element of the set It is then not difficult to see how T V,V gives rise to the structure map ∆ T,V . Hence the image of ∆ T,V lies in T (V )⊗k[End B,V ], and T (V ) is a polynomial representation of G.
Definition 5.3. Let V = B smod. We define Fct ssch k (V, svec k ) (d) to be the full subcategory of Fct ssch k (V, svec k ) consisting of all ssch k -enriched functors T : V → svec k such that for all V, W ∈ B smod (where we have identified both sides of (34)). We write From Theorem 5.4 below, it follows that these categories are equivalent to Pol (†) d . We also write G I = GL(m|n) and G II = Q(n). If † = I, let V l = V r = k m|n ∈ svec k , and if † = II, let V l = U l (1) ⊕n ∈ C(1) smod and V r = U (1) ⊕n ∈ smod C(1) .
Theorem 5.4. Suppose m, n ≥ d. Then we have equivalences of categories:
Proof. Proof of (i). Let B = k, C(1) if † = I, II respectively. It suffices to show that we have an isomorphism of superalgebras. Using Proposition B.1.(iii), (8), (7) and (12), we have
Proof of (ii). Let V = B smod for B as above. Then we identify V − with svec k , smod C(1) respectively, using (7) and (12). Hence the objects of V are identical to the objects of either Γ d M or Γ d Q . Let Φ(T ) denote the image of T V,W under the above isomorphism. Then it may be checked that Φ(T ) ∈ Fct(Γ d (V − ), svec k ), and that this gives an equivalence of categories.
Corollary 5.5. Suppose m, n ≥ d, and let V l , V r be as above. Then we have a commutative diagram where the vertical arrow on the left is evaluation at V l and the vertical arrow on the right is evaluation at V r . In particular, evaluation at V l gives an equivalence Pol
Proof. We know that the vertical arrow on the left is an equivalence by Theorem 4.2. It is then not difficult to see from the definitions of the functors Φ and Ψ that the diagram is commutative. Hence, from Theorem 5.4 the commutativity implies that evaluation at V r also gives an equivalence.
Appendix A. Representations of svec k -enriched categories
Recall that k is a field of characteristic not equal to 2, and svec k denotes the category of finite dimensional vector superspaces over k. Suppose V is a category enriched over svec k . In this appendix we describe the relationship between the following two categories: (i) The category V-smod = Fct k (V, svec k ) of all k-linear representations of V. It consists of all even k-linear functors V → svec k .
(ii) If P ∈ V, then E = End V (P ) is an associative superalgebra with product given by composition. We may then consider the category E smod of finite dimensional left supermodules over E. The categories V-smod and E smod are both svec k -enriched categories. We denote by (V-smod) ev , ( E smod) ev the corresponding even subcategories. Recall from Section 2 that ( A smod) ev is an abelian category for any finite dimensional superalgebra A. In particular, ( E smod) ev and (svec k ) ev are both abelian categories. Now since direct sums, products, kernels and cokernels can be computed objectwise in (the even subcategory of) the target category svec k , we see that (V-smod) ev is also an abelian category.
The relationship between V-smod and E smod is given by evaluation on P . If F ∈ V-smod, the (even) functoriality of F makes the k-superspace F (P ) into a supermodule over E = end V (P ). We thus have an evaluation functor: There is another interpretation of this evaluation functor. Since the covariant hom-functor h P := hom V (P, −) is an even k-linear functor, it must belong to V-smod. In this situation, Yoneda's lemma takes the form of an even isomorphism for any F ∈ V-smod. In particular, Hence, Yoneda's lemma allows us to interpret "evaluation at P " as the functor hom V-smod (h P , −) : V-smod → E smod.
We would like to know whether there is some condition on P which ensures that evaluation is in fact an equivalence of categories. The next proposition, which is a super analogue of [T2, Prop. 7.1], provides such a criterion.
Proposition A.1. Let V be an svec k -enriched category. Assume that there exists an object P ∈ V such that for all X, Y ∈ V, the composition induces a surjective map Then the following hold.
(i) For all F ∈ V-smod and all Y ∈ V, the canonical map is an epimorphism. (ii) {h P , Πh P } is a generating set of V-smod. (iii) Let E = End V (P ). Then evaluation on P induces an equivalence of categories V-smod ≃ E smod.
Proof. Proof of (i). The canonical map is: Now suppose that y ∈ F (Y ). Then one may check that the element is sent onto y by the canonical map. Proof of (ii). The Yoneda isomorphism hom V-smod (h P , F ) ≃ F (P ) ensures that h P is projective. One may check that Πh P is then also a projective object of (V-smod) ev . Next, by the naturality of the canonical map, (i) yields an epimorphism h P ⊗ F (P ) ։ F . Now F (P ) is a finite dimensional superspace. By choosing a (Z 2 -homogeneous) basis of F (P ), we have F (P ) ≃ k m|n where sdim F (P ) = (m, n). Hence, there exists an epimorphism ϕ : (h P ) ⊕m ⊕ (Πh P ) ⊕n ։ F , and we may write ϕ = ϕ 1 + · · · + ϕ m + ϕ ′ 1 + · · · + ϕ ′ n for some ϕ i : h P → F (resp. ϕ ′ j : Πh P → F ), where i = 1, . . . , m (resp. j = 1, . . . , n). Then we may finally decompose ϕ. It then follows that {h P , Πh P } is a generating set.
Proof of (iii). We first verify that evaluation is fully faithful. For this purpose, it suffices to check for any F, G ∈ V-smod that we have an isomorphism: hom V-smod (G, F ) ≃ Hom E (G(P ), F (P )). Notice that there is a commutative triangle: where the horizontal arrow is the Yoneda isomorphism, and the diagonal arrow is the isomorphism (9) from Section 2. Hence the diagram induces an (even) isomorphism. By additivity of homs, we also have an isomorphism for any m, n ∈ N. Now by (ii) we may find (for any G ∈ V-smod) an exact sequence It then follows by the left exactness of hom V-smod (−, F ) and Hom E (−, F (P )) that evaluation on P is fully faithful.
Next, we verify that evaluation is essentially surjective. Suppose M ∈ E smod. It follows from (35) that one may find a presentation of the form Since evaluation on P is fully faithful, there exists a natural transformation ϕ : h P ⊗ k m 2 |n 2 → h P ⊗ k m 1 |n 1 which coincides with ψ upon evaluation at P . Let us define a functor F M : V → svec k by F M (X) = coker(ϕ X ). Then F M ∈ V-smod is a functor whose evaluation at P is isomorphic to M . Thus, evaluation at P is essentially surjective.
Appendix B. Superschemes and supergroups
We briefly recall the definitions and some basic properties of cosuperalgebras, superschemes and supergroups. For more details, see [BrK1], [BrK2] and the references therein.
B.1. Cosuperalgebras. A cosuperalgebra is a superspace A which is a coalgebra in the usual sense such that the comultiplication ∆ A : A → A ⊗ A and the counit ǫ : A → k are even linear maps. The notions of bisuperalgebra and Hopf cosuperalgebra can be defined similarly.
If A is a cosuperalgebra, a right A-cosupermodule is a vector superspace M together with a structure map ∆ M : M → M ⊗ A which is an even linear map that makes M into an ordinary comodule. Denote by cosmod A the category of all right A-cosupermodules and A-cosupermodule homomorphisms (which are just ordinary A-comodule homomorphisms).
If B is a finite dimensional associative superalgebra, then multiplication in B gives an even linear map m : B ⊗ B → B. Taking the dual of this map we obtain a linear map ∆ = m ∨ : B ∨ → (B ⊗ B) ∨ ≃ B ∨ ⊗ B ∨ , which makes B ∨ into a cosuperalgebra. Conversely, suppose that A is a finite dimensional cosuperalgebra. Then we make A ∨ into a superalgebra by defining the product f g of Z 2 -homogeneous f, g ∈ A ∨ as ⟨f g, a⟩ := ⟨f ⊠ g, ∆ A (a)⟩, for all a ∈ A. Recall from [BrK1] that there is an equivalence (in fact isomorphism) of categories between cosmod A and A ∨ smod.
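The pairing used in the product on A ∨ involves the usual Koszul sign; we make it explicit here (standard convention; our addition):

```latex
\big\langle f \boxtimes g,\; a \otimes b \big\rangle
  \;=\; (-1)^{|g|\,|a|}\, f(a)\, g(b),
\qquad
(fg)(a) \;:=\; \big\langle f \boxtimes g,\; \Delta_A(a) \big\rangle .
```

With this sign convention the product on A ∨ is associative, as follows from coassociativity of ∆ A .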
Suppose B is an associative superalgebra. Then S d acts (on the right) on B ⊗d via superalgebra automorphisms. Hence, Γ d B = (B ⊗d ) S d is also a superalgebra.
Now let A be a cosuperalgebra. Since T * A is the free associative superalgebra generated by A (considered as a superspace), there is a unique superalgebra homomorphism ∆ : T * A → T * A ⊗ T * A such that ∆(a) = ∆ A (a) for all a ∈ A, and T * A is a cosuperalgebra with respect to this homomorphism. Similarly, since S * A is a free commutative superalgebra, there exists a unique superalgebra homomorphism ∆ : S * A → S * A ⊗ S * A such that ∆(a) = ∆ A (a) for all a ∈ A. (We note that a tensor product of commutative superalgebras is commutative.) The homomorphism ∆ makes S * A into a cosuperalgebra.
One may check that we have Hence, both T d A and S d A may be considered as cosuperalgebras by restricting ∆ and ∆ respectively.
Proposition B.1. Suppose that B (resp. A) is a finite dimensional associative superalgebra (resp. cosuperalgebra). Then we have the following isomorphisms of superalgebras.
Proof. For (i) and (ii), the isomorphisms are given by the canonical even linear isomorphisms (1) and (22), respectively. It is then straightforward to check from the definitions that they are indeed superalgebra isomorphisms. For (iii), one may check from parts (i) and (ii) that we have the following superalgebra isomorphisms:
B.2. Superschemes. Let salg k denote the category of all commutative superalgebras and even homomorphisms. Also, let ssch k be the category of superschemes as in [BrK2]. We may identify ssch k with a full subcategory of the category Fct(salg k , sets) consisting of all functors from salg k to sets. An affine superscheme is a representable functor X = hom salg k (k[X], −), for some k[X] ∈ salg k which is called the coordinate ring of X. Given M ∈ svec k , let M a : salg k → sets denote the functor defined by M a (A) = (M ⊗ A) 0 for all A ∈ salg k . Then M a is an affine superscheme with coordinate ring given as follows. Suppose N is an arbitrary superspace, not necessarily finite dimensional. Then we may identify M ∨ ⊗ N with Hom k (M, N ) by setting Then, for any A ∈ salg k , we have Hence M a is an affine superscheme with k[M a ] = S * (M ∨ ). Now suppose B is an associative superalgebra. Let V, W ∈ B smod and A ∈ salg k . Then it may be checked that formula (13) gives the following isomorphisms: where A is viewed as a supermodule over itself with respect to left multiplication. Let End B,V denote the functor in Fct(salg k , sets) such that End B,V (A) consists of the even B ⊗ A-linear endomorphisms from V ⊗ A to itself. Then, by identifying the left and right hand sides of (36), we see that End B,V = (End B (V )) a . Thus End B,V is an affine superscheme with k[End B,V ] = S * (End B (V ) ∨ ). Since End B (V ) is a superalgebra, we may regard k[End B,V ] as a cosuperalgebra via the map ∆ described above.
B.3. Supergroups. A supergroup is defined to be a functor G from the category salg k to the category of groups. An algebraic supergroup is a supergroup G which is also an affine superscheme, when viewed as a functor from salg k to sets, such that the coordinate ring k [G] is finitely generated. In this case, k [G] has a canonical structure of Hopf superalgebra. In particular, the comultiplication ∆ : k [G] → k [G] ⊗ k [G] and counit ǫ : k[G] → k are defined, respectively, as the comorphisms of the multiplication and the unit of G.
Suppose B is an associative superalgebra and V ∈ B smod. Let GL B,V denote the subfunctor of End B,V such that GL B,V (A) is the set of all even B ⊗ A-linear automorphisms of V ⊗ A. Then GL B,V is an algebraic supergroup, and k[End B,V ] = S * (End B (V ) ∨ ) is a subcoalgebra of k[GL B,V ] with respect to the comultiplication ∆ defined above.
Example B.2.
(i) Suppose m, n are nonnegative integers. We use the notation M at m|n = End k,k m|n and GL(m|n) = GL k,k m|n .
If A ∈ salg k , then M at m|n (A) may be identified with the set of all matrices of the form where: A is an A 0 -valued m × m-matrix, B is an A 1 -valued m × n-matrix, C is an A 1 -valued n × m-matrix, and D is an A 0 -valued n × n-matrix. The matrix (37) corresponds to an even (resp. odd) linear operator if B and C (resp. A and D) are both zero. From [L, Lemma 1.7.2], it follows that GL(m|n, A) consists of all matrices (37) such that det(A) det(D) ≠ 0. Let M = k m|n . If f ∈ End k (M ), we may decompose f = f 0 + f 1 , where f 0 is even and f 1 is odd. Let det ∈ S m+n (End k (M ) ∨ ) denote the element such that: for all f ∈ End k (M ), det(f ) = det(f 0 ), where the latter is the usual determinant of the induced linear operator f 0 : M → M of ordinary vector spaces. Then GL(m|n) is an affine subsuperscheme of M at m|n , and k[GL(m|n)] is the localization of the coordinate ring k[M at m|n ] at the element det.
(ii) Now let V = U (1) ⊕n . Then we write M at n = End C(1),V and Q(n) = GL C(1),V .
From Example 2.6, it follows that M at n (A) may be identified with the set of matrices of the form where S (resp. S ′ ) is an A 0 -valued (resp. A 1 -valued) n × n-matrix.
The matrix (38) corresponds to an even (resp. odd) linear operator if S ′ = 0 (resp. S = 0). Then Q(n, A) consists of all invertible matrices of the form (38). We may define an element det ∈ k[M at n ] = S * (End C(1) (V ) ∨ ) in a way analogous to the previous example. It follows from [BrK2] that k[Q(n)] is the localization of k[M at n ] at det.
A representation of an algebraic supergroup G is defined to be a natural transformation η : G → GL k,M for some M ∈ svec k such that η A : G(A) → GL k,M (A) is a group homomorphism for each A ∈ salg k . On the other hand, a G-supermodule is defined to be a right cosupermodule for the Hopf superalgebra k [G]. The two notions of supermodule and representation are equivalent (cf. [BrK2]). In particular, given a representation η : G → GL B,M , there is a corresponding structure map.
"year": 2013,
"sha1": "5eb4e7616157c398e1d35c5f4e59ce0ce159c57b",
"oa_license": null,
"oa_url": "https://www.ams.org/ert/2013-17-20/S1088-4165-2013-00445-4/S1088-4165-2013-00445-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "5eb4e7616157c398e1d35c5f4e59ce0ce159c57b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
In this paper we construct and approximate breathers in the DNLS model starting from the continuous limit: such periodic solutions are obtained as perturbations of the ground state of the NLS model in H 1 (R n ), with n = 1, 2. In both dimensions we recover the Sievers-Takeno (ST) and the Page (P) modes; furthermore, in R 2 also the two hybrid (H) modes are constructed. The proof is based on the interpolation of the lattice using the Finite Element Method (FEM).
Introduction
In this paper we study the problem of constructing breathers in the one and two dimensional discrete nonlinear Schrödinger (DNLS) equation starting from the continuous limit.
The breathers we construct are critical points of the Hamiltonian function constrained to the surface of constant ℓ 2 norm. Such critical points are obtained by continuation from the continuous model constituted by the nonlinear Schrödinger (NLS) equation. The connection between the discrete and the continuous system is obtained by using the finite elements (FEM). This allows to identify the phase space of the discrete system with a subspace of the phase space of the continuous system.
For example, consider the one dimensional case. The space of the finite elements is constructed as follows: first we associates to the j-th point of the discrete lattice a continuous piecewise linear function s j (x), whose value is 1 at x = j and which vanishes for |x − j| ≥ 1 (see Fig. 1). To a sequence ψ j , we associate the function ψ(x) := j ψ j s j (x/µ), where µ > 0 is a small parameter representing the mesh of the lattice. The space generated by the functions s j (x/µ) will be denoted by E µ .
Once this is done one can compare the functionals of the continuous system and those of the discrete one. In order to do this, denote by H c and N c the Hamiltonian and the norm of the continuous system, and consider the restriction of such functionals to the space of the discrete system E µ . By the standard theory of integration one can say that the restricted functionals are close to the Hamiltonian H d and the norm N d of the discrete system. So the idea is to consider a non-degenerate critical point of the functional of the continuous system, a critical point lying close to the manifold of the finite elements, and to continue such a critical point to a critical point of the discrete functional.
However there is a delicate point in the game: namely that the difference between the discrete functional and the continuous one should be small when the phase space is endowed with the energy norm. This turns out to be true thanks to a special property of the finite elements: the fact that, for ψ, φ ∈ E µ , one has ∫ R ψ ′ (x) φ ′ (x) dx = µ −1 Σ j (ψ j+1 − ψ j )(φ j+1 − φ j ) with no error. Due to this property the difference between the continuous and the discrete functional turns out to be a functional which is small and smooth on the energy space. This allows us to apply the implicit function theorem and to continue critical points of the continuous system to critical points of the discrete one. In order to be concrete we study in detail a one dimensional and a two dimensional model. We use known results on existence and non-degeneracy of the ground state of the continuous system in order to apply the above theory. In this paper we construct two (resp. four) kinds of discrete breathers in the 1-(resp. 2) dimensional case, which are the continuation of the continuous breather. In order to avoid problems related to the translational invariance of the continuous system we work here in spaces of reflection invariant sequences. Thus the breathers we find for the discrete system are reflection invariant too.
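The special property of piecewise-linear finite elements mentioned above, that the Dirichlet form of an interpolant equals the discrete difference form exactly and not just approximately, can be checked numerically. The sketch below is our own illustration (not from the paper); it compares the exact closed form with a brute-force quadrature of the interpolant.

```python
import numpy as np

mu = 0.5
psi = np.array([0.0, 1.0, 0.3, -0.2, 0.0])   # nodal values psi_j, compact support

# The interpolant's derivative is constant on each cell, (psi[j+1]-psi[j])/mu,
# so the Dirichlet integral int |psi'|^2 dx equals the discrete form exactly:
diffs = np.diff(psi)
dirichlet_exact = np.sum(diffs**2) / mu

# Compare against a fine trapezoidal quadrature of the piecewise-linear interpolant.
x = np.linspace(0.0, mu * (len(psi) - 1), 400001)
interp = np.interp(x, mu * np.arange(len(psi)), psi)
dpsi = np.gradient(interp, x)
dx = x[1] - x[0]
dirichlet_quad = np.sum(0.5 * (dpsi[:-1]**2 + dpsi[1:]**2)) * dx

print(dirichlet_exact, abs(dirichlet_exact - dirichlet_quad) < 1e-3)
```

The small residual difference comes entirely from the quadrature's treatment of the kinks at the nodes; the closed form itself is exact, which is the point of the identity above.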
In dimension one, the breather of the first kind is centered at a lattice site and corresponds to the so called Sievers-Takeno mode (ST), while the breather of the second kind is centered in the middle of a cell of the lattice and corresponds to the so called Page mode (P). In dimension two, besides the ST and P modes, we have two other localized solutions, usually called hybrid (H) modes since they are centered in the middle of one of the two face of the cell.
As far as we know, the result of the present paper is the first one in which the continuous approximation is used in order to construct exact breathers of a lattice model. In dimension 1 the method of spatial dynamics also allows one to construct and approximate breathers (see [Jam03]). However such a method is strictly one dimensional, while our method in principle applies to any dimension. Existence of breathers was also proved variationally in [Wei99] and in [AKK01], but such methods do not allow one to approximate the breathers and only allow one to find one breather for each model. Breathers in DNLS have also been widely studied numerically (see for example [KRB01, CJK+08, FW98]).
The main advantage of our method is that it is quite flexible and allows to directly deduce informations on the shape of the breather starting from the continuous limit.
We recall that the possibility of using the continuous limit in order to approximate the dynamics of discrete systems has been widely investigated, in particular we recall the papers [BCP02,Sch98,KSM92,SW00, BP06,BCP09] in which an approximation valid for long but finite times and the papers [FP99, FP02, FP04a, FP04b, HW08, MP08] where an infinite time approximation has been obtained.
The plan of the paper is the following. In Section 2 we present the result and motivate our continuum limit approach. In Section 3 we formulate in Theorem 3.1 the Implicit Function Theorem applied to our problem, and in Section 4 we construct the FEM to interpolate the discrete model and verify the hypotheses of Theorem 3.1.
Main result.
We study here the discrete focusing nonlinear Schrödinger equation (DNLS) in R n with n = 1, 2 where ∆ 1 is the n-dimensional discrete Laplacian defined by and µ is the lattice mesh. In particular we look for solutions of the form Then the sequence ψ l fulfils and thus it is a critical point of the Hamiltonian function constrained to a surface of constant value of the norm where the factors µ n have been inserted for future convenience. The main result of the present paper consists in showing that such a solution can be constructed and approximated starting from the continuous model constituted by the Nonlinear Schrödinger Equation (NLS), namely More precisely, consider the Hamiltonian H c and (the square of) the L 2 norm N c , given by then a periodic solution ψ(x, t) = e −iλt ψ(x) of (6) fulfils the following continuous approximation of (3) According to classical results on (8) (see [BL83,BLP81,CGM78]), there exists a unique real valued, positive, radially symmetric and exponentially decaying function ψ c which realizes the minimum of H c | Nc=1 . For example, in the case n = 1 and p = 1 it can be computed explicitly If we interpret the discrete functionals H d , N d as µ-perturbations of H c , N c and we restrict to a class of "even" functions in order to remove any possible degeneracy of the minimum ψ c , then we can continue the solution ψ c of (8) to a solution ψ(µ) of (3). In order to state the precise result we are going to prove, we first need to define the configuration space Q µ for ψ l : Definition 2.1. The space ℓ 2 (Z n , R) will be denoted by Q µ when endowed with the norm Theorem 2.1. For any µ small enough and 1 2 ≤ p < 2 n there exist 2 n distinct real valued sequences ψ i l (µ) which are solutions of (3). Such solutions are even sequences ψ −l = ψ l lying on the surface N d = 1. One has where Ψ i is defined by 2.1 Comments.
1. The first of (11) is not empty since by using (47) we get Moreover, we stress that by its definition the approximating sequence Ψ i l is bounded uniformly in µ.
2. The first of (11) immediately implies an estimate which is empty in the case n = 2. Lemma 4.7 in Section 4.3 is necessary to improve the above result. We do not know whether the exponent 3/2 − n/2 is optimal or not. We can fix the set where Ψ i l is localized as Ω :
3. We stress that the problem (3) is equivalent to the µ-independent one with the constraint This can be seen by the scaling and observing that

3 The Implicit Function Theorem.
The situation we will meet is summarized in the following abstract scheme. Let H be a Hilbert space, and for any µ, let E µ be a subspace of H. Let H c ∈ C 2 (H) and N c ∈ C ∞ (H) be two functionals, with N c being a submersion. Correspondingly we define Then we define the "discrete" objects: let H d := H ǫ 1 ,µ ∈ C 2 (E µ ) and N d := N ǫ 2 ,µ ∈ C ∞ (E µ ) be functionals depending smoothly on two additional parameters ǫ 1 , ǫ 2 . Define We make some assumptions.
i. There exists ψ_c ∈ H which is a coercive minimum of H_c restricted to S, namely it is a minimum and fulfills the coercivity estimate (13) for all µ small enough.
Let ψ_0 ∈ E_µ be such that ‖ψ_c - ψ_0‖ ≤ Cµ and let U ⊂ E_µ be an open neighborhood of ψ_0; then we assume ii.
for some large enough k.

Theorem 3.1. Under the above assumptions, for any ε1, ε2, µ small enough there exists a unique ψ_{ε1,ε2,µ} which is a coercive minimum of H_d restricted to N_d = 1. Moreover one has:

Proof. The result is local, so we restrict to a neighborhood of ψ_c. Define S_{0,µ} and take ψ_0 ∈ S_{0,µ}. Remark that, due to the smoothness of H_c, the second differential is controlled near ψ_c. By coercivity (13) and the Lax-Milgram lemma, the second differential defines an isomorphism, bounded together with its inverse uniformly with respect to all the parameters. From assumption ii, there exists a local isomorphism I_{ε2,µ}. The statement is then equivalent to the existence of a coercive minimum of H_{ε1,µ} ∘ I_{ε2,µ}. To get it, remark that, due to (18) and (13), the Implicit Function Theorem applies and gives the result.
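To make the discrete objects concrete, here is a minimal numerical sketch for n = 1. The displayed formulas for Δ_1, H_d and N_d are not legible in this copy, so the code assumes the standard three-point discrete Laplacian and the norm N_d = µ Σ_l |ψ_l|²; treat both as conventional choices rather than the paper's exact definitions.

```python
import numpy as np

def discrete_laplacian_1d(psi, mu):
    """Standard three-point discrete Laplacian with decay at infinity
    modeled by zero padding:
    (Delta_1 psi)_l = (psi_{l+1} - 2 psi_l + psi_{l-1}) / mu**2."""
    padded = np.pad(psi, 1)
    return (padded[2:] - 2.0 * padded[1:-1] + padded[:-2]) / mu**2

def norm_d(psi, mu):
    """Discrete analogue of the squared L^2 norm: N_d = mu * sum |psi_l|^2."""
    return mu * np.sum(np.abs(psi) ** 2)

# Sanity check: on a sampled Gaussian the discrete Laplacian converges,
# away from the artificial boundary, to the second derivative
# (4x^2 - 2) * exp(-x^2) as mu -> 0.
mu = 1e-3
x = mu * np.arange(-3000, 3001)
psi = np.exp(-x**2)
exact = (4 * x**2 - 2) * np.exp(-x**2)
err = np.max(np.abs(discrete_laplacian_1d(psi, mu)[10:-10] - exact[10:-10]))
```

The O(µ²) consistency of the three-point stencil is what makes the discrete functionals behave as µ-perturbations of the continuous ones.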
Applications to breathers
In order to avoid the gauge and translational invariances of the problem, and in particular of the continuous system, we will work in a space of real-valued functions "invariant" under the involution (20). More precisely, in H^1(R^2, R) we will consider functions fulfilling (21), which is equivalent to (20) almost everywhere and is a condition well defined in H^1(R^2, R).
Lemma 4.1. Let ψ_c be a solution of (8) with p < 2/n; then assumption (13) of Theorem 3.1 holds.
Proof. This lemma directly follows from Proposition D.1 of [FGJS04] by remarking that T_{ψ_c}S ⊂ X, with X defined in the statement of Prop. D.1.
Remark 4.1. We stress that the constraint (21) is "natural" for the problem (3), since (20) is a symmetry of both the Hamiltonian (4) and the norm (5). Hence, a critical point for the restricted problem is also a critical point for the original problem.
In the following subsections we construct the linear manifold E µ of the finite elements, and prove the estimates (14) and (15) for the two considered applications. We deal with the ST-breather, since the other ones follow by small changes in the definition of E µ .
The case n = 1
Let l = j and define the sequence of functions s_j(x) by (22); to a sequence ψ_j ∈ Q_µ we associate the function (23). On the interval T_j := [µj, µ(j + 1)) the above function is affine.

Definition 4.1. We denote by E_µ the linear space composed of the functions of the form (23) with ψ_j ∈ Q_µ.
The following Lemma gives the equivalence between the function space E µ and the sequence space Q µ .
Proof. Let us first decompose R = ∪_{j∈Z} T_j. The weak derivative of Ψ is constant on each interval T_j. If we plug (24) into the integral ‖Ψ‖²_{L²}, a direct computation gives the estimate (26).
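The telescoping computation in this proof can be checked numerically. Since the display formulas (24)-(26) are missing from this copy, the sketch below mirrors only the standard mechanism: the weak derivative of the piecewise-linear interpolant equals the constant (ψ_{j+1} - ψ_j)/µ on each cell T_j, so ‖Ψ'‖²_{L²} reduces to a sum of squared differences.

```python
import numpy as np

def grad_sq_norm_exact(psi, mu):
    """||Psi'||_{L^2}^2 for the piecewise-linear interpolant: the weak
    derivative is constant on each cell T_j = [mu*j, mu*(j+1)), equal to
    (psi_{j+1} - psi_j)/mu, so the integral telescopes into a sum."""
    return np.sum(np.diff(psi) ** 2) / mu

def grad_sq_norm_quadrature(psi, mu, refine=64):
    """Brute-force check: sample Psi on a fine grid with np.interp and
    integrate |Psi'|^2 with a left-endpoint rule (exact here, because the
    integrand is piecewise constant)."""
    n = len(psi)
    nodes = mu * np.arange(n)
    fine = np.linspace(nodes[0], nodes[-1], refine * (n - 1) + 1)
    vals = np.interp(fine, nodes, psi)
    h = fine[1] - fine[0]
    dvals = np.diff(vals) / h
    return np.sum(dvals ** 2) * h
```

Both routines agree to machine precision, which is exactly the norm-equivalence mechanism between E_µ and Q_µ used in the lemma.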
Proposition 4.1. Let Ψ ∈ E_µ be as in (23) and let R_G be defined correspondingly; if q ≥ 1 then R_G ∈ C^2(E_µ) and, for any bounded open set U ⊂ E_µ, there exists C(U) such that (29) holds.

Proof. The term R_G can be represented through the Euler-Maclaurin formula. Indeed, if we set f(y) = |Ψ(µy)|^{q+2}, we obtain the claimed representation. A direct computation of the first and second differentials shows that the smallness is encoded in the prefactor µ: Sobolev embedding theorems and |P_1(x/µ)| ≤ 1/2 then yield (29).
We have thus verified the assumptions of Theorem 3.1 which implies the existence and the estimate of the ST-mode for the case n = 1. The same statement for the P-mode follows by a translation of the basis of E µ with s j defined in (22).
The case n = 2
Let us take ψ_{j,k} ∈ Q_µ. For each multi-index l = (j, k), let us consider the function s_{j,k}(x, y) which represents the hexagonal pyramid of height one centered at (j, k), whose support is the union of the six triangles of figure 2. More precisely, we define T^+_{j,k} as the triangle whose vertices are (j, k), (j + 1, k), (j, k + 1), and T^-_{j,k} as the one whose vertices are (j, k), (j - 1, k), (j, k - 1). Hence, for example, on T^+_{j,k} the function s_{j,k} represents a plane in R^3. The set of functions {s_{j,k}(x/µ, y/µ)}_{(j,k)∈Z²} is a basis which generates a piecewise linear function Ψ(x, y) interpolating ψ_{j,k}, as in (32). Notice that on the triangle T^±_{j,k} the function Ψ is the plane (33).

Definition 4.2. We denote by E_µ the linear space composed of the functions of the form (32) with ψ_{j,k} ∈ Q_µ.
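Since formulas (31)-(33) are not legible in this copy, the following sketch spells out the (assumed) affine interpolant on the upper triangle T^+_{j,k}; the interpolation property at the three vertices pins the plane down uniquely.

```python
def plane_on_upper_triangle(psi, j, k, x, y, mu):
    """Affine interpolant on the triangle with vertices (mu*j, mu*k),
    (mu*(j+1), mu*k) and (mu*j, mu*(k+1)). Its gradient components are
    the forward differences (psi[j+1,k]-psi[j,k])/mu and
    (psi[j,k+1]-psi[j,k])/mu, as used in the norm estimates below.
    `psi` is a dict mapping multi-indices to toy values (illustrative only)."""
    return (psi[(j, k)]
            + (psi[(j + 1, k)] - psi[(j, k)]) * (x / mu - j)
            + (psi[(j, k + 1)] - psi[(j, k)]) * (y / mu - k))

# Toy data: check that the plane matches the sequence at the three vertices.
psi = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): -2.0}
mu = 0.5
```

The lower triangle T^-_{j,k} is handled symmetrically, with backward differences replacing the forward ones.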
The following lemma gives the equivalence between the function space E_µ ⊂ H^1 and the sequence space Q_µ.

Lemma 4.3. Let Ψ ∈ E_µ; then (34) holds true. Moreover, (35) holds.

Proof. From (33) we have an explicit expression for Ψ on each triangle T^±_{j,k}; formula (35) then follows from a direct computation as in Lemma 4.2.
The next three lemmas provide the proof of the following main proposition.

Proposition 4.2. Let Ψ ∈ E_µ be as in (32) and let R_G be defined correspondingly; if q ≥ 1 then R_G ∈ C^2(E_µ) and in any open set U ⊂ E_µ the analogous estimates hold.

Let us set f(x, y) = |Ψ(x, y)|^{q+2} and let us take (x, y) ∈ T^±_{j,k}; then we can use a Taylor expansion with integral remainder along the segment connecting (µj, µk) with (x, y), which lies in the triangle T^±_{j,k}. Hence we obtain the bound (37). By the definition of f(x, y) one has |∂_y f| = (q + 2)|Ψ|^{q+1} |Ψ_y| = (q + 2)|Ψ|^{q+1} |ψ_{j,k±1} - ψ_{j,k}|/µ.
Collecting the above estimates we get the thesis.
Lemma 4.6. Under the assumptions of Proposition 4.2 one also has the corresponding bound.

Proof. Also in this case a direct computation easily gives, for any h ∈ E_µ, the stated expression. In order to estimate it we proceed as in the previous lemmas, defining f so that |f_x| + |f_y| ≤ q|Ψ|^{q-1} |∇Ψ| h² + 2|Ψ|^q |h∇h|.
The case q > 1. The steps are the same as usual; the only difference is that we have to deal with ∫_{R²} |Ψ|^{2σ} |∇Ψ|², σ > 0, but it is enough to notice again that this integral is, up to a constant factor, the squared L² norm of ∇|Ψ|^{1+σ}. This concludes the case related to the construction and approximation of the ST-mode. The other three modes (the P-mode and the two H-modes) are obtained by translation of the basis s_{j,k} either in one or in both of the two directions.
Proof of Theorem 2.1.
We begin with the following definition.

Definition 4.3. Let n = 1, 2 and consider ψ ∈ H²(R^n) ֒→ C⁰. We define Π_µ : ψ → Π_µψ = Σ_{l∈Z^n} ψ(µl) s_l(x/µ), x ∈ R^n (46), the projection of H²(R^n) onto E_µ.

By classical results on polynomial approximation in Sobolev spaces (Chapter 4 of [BS08]) one has (47). We also need a simple lemma to obtain the second estimate of (11).

Lemma 4.7. For any l ∈ Z^n we have the following bound.

Proof. We write the proof for the case n = 2; the case n = 1 is simpler. Denote l = (j, k); one then obtains the stated estimate for ψ²_{j,k}.

Now we easily verify the hypotheses of the abstract Theorem 3.1. First, we define ψ_c as the (smooth) solution of (8) and ψ_0 = νΠ_µψ_c, with ν such that N_{0,µ}(ψ_0) = 1. Then condition (13) follows from Lemma 4.1, while condition (14) comes from (47) above. Finally, requirement ii is given by Lemmas 4.2 and 4.3 and by Propositions 4.1 and 4.2. This directly gives the first of (11); the second of (11) is a byproduct of the first together with Lemma 4.7.
Similarities and differences in the microbial structure of surface soils of different vegetation types
Background Soil microbial community diversity serves as a highly sensitive indicator for assessing the response of terrestrial ecosystems to various changes, and it holds significant ecological relevance in terms of indicating ecological alterations. At the global scale, vegetation type acts as a major driving force behind the diversity of soil microbial communities, encompassing both bacterial and fungal components. Modifications in vegetation type not only induce transformations in the visual appearance of land, but also influence the soil ecosystem's material cycle and energy flow, resulting in substantial impacts on the composition and performance of soil microbes. Methods In order to examine the disparities in the structure and diversity of soil microbial communities across distinct vegetation types, we opted to utilize sample plots representing four specific vegetation types: a woodland with the dominant tree species Drypetes perreticulata, a woodland with the dominant tree species Horsfieldia hainanensis, a Zea mays farmland, and a Citrus reticulata field. Through the application of high-throughput sequencing, the 16S V3_V4 region of soil bacteria and the ITS region of fungi were sequenced, and the structure and dissimilarities of the soil bacterial and fungal communities of the four vegetation types were analyzed comparatively. Results Our findings indicated that woodland soils exhibit a higher richness of microbial diversity than farmland soils, and that woodland and farmland soil microbial community compositions differ significantly. The dominant fungal phylum was Ascomycota across all four vegetation types, whereas the dominant bacterial phyla differed; the two farmland soil microbial communities showed the highest similarity.
Furthermore, we established significant correlations between the soil nutrient content of the different vegetation types and the relative abundance of soil microorganisms at both the phylum and genus levels. This experiment serves as a crucial step towards unraveling the intricate relationships between plants, soil microbes, and soil, as well as understanding the underlying driving mechanisms.
INTRODUCTION
Soil is a vital resource that sustains the livelihoods of the global population, impacts various ecosystem functions, and directly and indirectly affects human health and well-being (Bach et al., 2020; Lehmann et al., 2020). Among the soil components, microorganisms play a crucial role in nearly all ecological processes and exhibit the highest abundance, diversity and metabolic activity, serving as a critical "link" that maintains ecosystem services (Rampelotto et al., 2013; Mendes et al., 2015; Wang et al., 2019a; Wang et al., 2019b). They are actively involved in the material cycle and energy transformation within ecosystems (Van der Heijden, Bardgett & Van Straalen, 2008; Cardinale et al., 2011; Jing et al., 2015; Delgado-Baquerizo et al., 2016). The major groups comprising soil microorganisms include mainly archaea, bacteria, fungi and protozoa (Bach et al., 2020; Fierer, Wood & Bueno de Mesquita, 2021). The diversity of soil microorganisms serves as a sensitive indicator of changes in the terrestrial ecosystem and holds significant ecological significance in assessing alterations in the ecological environment (Bardgett & Van der Putten, 2014; Chen et al., 2020; Schloter et al., 2018). Studies have demonstrated that the loss of soil microbial diversity and simplification of soil microbial community composition can potentially compromise multiple ecological services, such as plant diversity, litter decomposition, nutrient utilization and nutrient cycling, thereby posing threats to ecosystem sustainability (Bardgett & Van der Putten, 2014; Bahram et al., 2018). Therefore, it is crucial to investigate the composition and diversity of soil microbial communities under different vegetation types, particularly in the context of China's extensive vegetation construction and the complexity of vegetation cover types.
Biodiversity plays a crucial role in maintaining the proper functioning of ecosystems, and the conversion of land for human use has resulted in a substantial reduction of biodiversity in primary habitats, estimated at 13.6% (Newbold, 2018). Human activities have had a significant impact on terrestrial ecosystems, and it is projected that global biodiversity will decrease by 3.4% by the end of the 21st century, which will have a detrimental effect on ecosystem function in many parts of the terrestrial biosphere (Newbold et al., 2015). Given the importance of biodiversity in ecosystem function, there has been a growing body of research focusing on biodiversity and community structure at local scales in recent years (Newbold, 2018). Soil microbes power all biogeochemical cycles on Earth and are an important basis for ecosystem function, influencing the planet's biodiversity (Loreau et al., 2001; Santillan, Constancias & Wuertz, 2020).
Land use change is a significant environmental factor that can profoundly impact soil environmental factors, nutrient conditions and biological interactions (Engelhardt et al., 2018; Wang et al., 2019a; Wang et al., 2019b; Fang et al., 2020). Consequently, it has a substantial influence on soil microbial community diversity and assembly processes (Cheng et al., 2021). It has been shown that land use type explained 97% of the variability in soil quality indices and that different vegetation measures had significant effects on the vegetation composition and structure, biomass, litter, soil moisture, soil nutrients and soil microorganisms of the ecosystem (Xu et al., 2014; Zhang et al., 2011; Liu et al., 2022a; Liu et al., 2022b). Changes in vegetation type alter not only the landscape appearance of the land, but also the material cycle and energy flow of the soil ecosystem, while having a profound impact on the structure and function of soil microorganisms (Wan & He, 2020; Zhang et al., 2013; Tian et al., 2017; Delgado-Baquerizo et al., 2018). Moreover, modifications in plant community composition can indirectly influence microbial diversity and activity by altering the input of carbon resources into the soil through the production of various apoplastic and root secretions (Zhong et al., 2020).
Soil microorganisms play a crucial role in the interaction with plants. Numerous studies have demonstrated that vegetation type is a key driver of soil microbial diversity, encompassing both bacterial and fungal communities, on a global scale (Delgado-Baquerizo & Eldridge, 2019; Chu et al., 2020). In temperate forests, plant diversity has been identified as a significant determinant of subsurface soil microbial community composition (Prober et al., 2015). Soil microorganisms are highly sensitive to environmental changes, and discrepancies in dominant species, microenvironmental improvement, material metabolism, and disturbance history between vegetation types can considerably shape the evolution of soil microbial communities (Ayres et al., 2009; Wan & He, 2020; Zhang et al., 2013; Tian et al., 2017; Delgado-Baquerizo et al., 2018). Above-ground vegetation and soil microorganisms are intricately connected, with the former profoundly influencing the composition of the latter by altering abiotic factors, while the latter responds to vegetation by modifying soil physicochemical properties (De Deyn & Van der Putten, 2005; Heerdt et al., 2017). The diversity of microbial communities serves as a pivotal indicator of soil microbial characteristics (Murphy et al., 2011) and is a vital bioindicator for evaluating soil fertility (Wei et al., 2018). Consequently, it has emerged as a prominent area of research in plant-soil ecosystems in recent years.
To assess the variation in soil microbial community structure and diversity across different vegetation types, we conducted a study in Longzhou County, Chongzuo City, Guangxi Zhuang Autonomous Region. Specifically, we selected four distinct vegetation types: a woodland dominated by Drypetes perreticulata, a woodland dominated by Horsfieldia hainanensis, a maize (Zea mays) farmland, and a citrus (Citrus reticulata) field. Prior to human intervention, all of these sites were natural forests. Our objective was to compare the structure and diversity of soil bacterial and fungal communities among these vegetation types. To achieve this, we employed PCR amplification and high-throughput sequencing techniques. This allowed us to analyze and characterize the changes in soil microbial communities and their functions in response to different vegetation types.
Site information
The sites were situated in Longzhou County, Chongzuo City, Guangxi Zhuang Autonomous Region, China (106°33′11″-107°12′43″E, 22°8′54″-22°44′42″N). The region has a southern subtropical monsoon climate, with high temperatures, abundant rainfall, and ample sunshine throughout the year, a consistent pattern of hot and dry seasons, approximately 350 frost-free days annually, and a frost period lasting 13 days. Four representative vegetation types were selected for the experiment, namely a woodland with the dominant tree species Horsfieldia hainanensis (HH), a woodland with the dominant tree species Drypetes perreticulata (DP), a Zea mays farmland (ZM) and a Citrus reticulata farmland (CR). Both farmland areas were natural forests that were deforested to establish agricultural land. The Zea mays farmland has been under cultivation for 20 years, while the Citrus reticulata farmland has been cultivated for 10 years. Both farmland areas have been regularly fertilized as part of their daily management.
Soil sampling
In December 2022, soil samples of HH, DP, ZM and CR were taken from 0 to 10 cm. Following the S-type sampling principle, eight 0-10 cm soil cores were randomly selected in each plot and mixed, giving a total of 16 samples. Each composite soil sample was carefully collected and placed in a plastic bag, which was labeled to ensure proper identification of the sample. The samples were then transported to the laboratory in an ice box to maintain their integrity. In the lab, stones and plant residues such as roots and litter were removed from the soil, and the soil material was passed through a 2 mm sieve to remove coarse particles. The sieved soil was immediately transferred into 2 mL centrifuge tubes and frozen at −80 °C to facilitate the later extraction of microorganisms from the soil.
Soil chemical properties determination
10 g of soil sample was weighed into a 50 mL conical flask, and double-distilled water was added according to the principle of a soil/water ratio of 2.5:1. After 2 min of high-speed shaking and half an hour of standing, the soil pH value was measured using a pH meter (Lei-ci PHS-3C). Dried soil samples of 10-13 mg were weighed and sealed in tin containers, and the total carbon and nitrogen contents of the soils were determined with an elemental analyzer (Elementar Vario EL III, Germany). The total phosphorus content of the soil was determined by the molybdenum-antimony anti-colorimetric method after HClO4-H2SO4 heating digestion: 1 g of sample was weighed into a cleaned, dried digestion tube, mixed with 8 mL of concentrated sulfuric acid (H2SO4), and soaked overnight; 10 drops of HClO4 were then added, and the digestion tube was heated on the digestion block until the boiling liquid clarified. After digestion and cooling, the sample was kept in a 100 mL volumetric flask. Colorimetric analysis was then performed on 5 mL of the filtrate in a 50 mL volumetric flask using a spectrophotometer (P4 UV-Visible, China) (Liu et al., 2022b).
DNA extraction and high-throughput sequencing
The OMEGA Soil DNA Kit (M5635-02) (Omega Bio-Tek, Norcross, GA, United States) was used to extract DNA from all the samples. A NanoDrop NC2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, United States) was used to measure the quantity and quality of the extracted DNA, together with agarose gel electrophoresis (agarose concentration of 1.2%). Using the primers 338F (5′-ACTCCTACGGGAGGCAGCA-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′), the 16S V3_V4 region of the soil bacteria was amplified (Claesson et al., 2009). Additionally, the fungal ITS region was amplified using primers ITS5 (5′-GGAAGTAAAAGTCGTAACAAGG-3′) and ITS2 (5′-GCTGCGTTCTTCATCGATGC-3′) (White et al., 1990). The PCR system had a total volume of 25 µL, consisting of the following components: 5 µL of 5× reaction buffer, 5 µL of 5× GC buffer, 2 µL of dNTP (2.5 mM), 1 µL of forward primer (10 µM), 1 µL of reverse primer (10 µM), 2 µL of DNA template, 8.75 µL of ddH2O, and 0.25 µL of Q5 DNA polymerase. The amplification process involved an initial denaturation at 98 °C for 2 min, followed by 25 cycles of 98 °C for 15 s, 55 °C for 30 s, and 72 °C for 30 s, then a final extension at 72 °C for 5 min and a hold at 10 °C. To purify the PCR amplicons, Vazyme VAHTSTM DNA Clean Beads (Vazyme, Nanjing, China) were utilized, and the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, United States) was employed for quantification. Amplicons were pooled in equal amounts following the individual quantification step, and 2 × 250 bp paired-end sequencing was carried out on the Illumina NovaSeq platform with the NovaSeq 6000 SP Reagent Kit (500 cycles). The above operations were completed at Shanghai Personal Biotechnology Co., Ltd.
Sequence analysis
Microbiome bioinformatics analysis was carried out using QIIME 2 (2019.4), with slight modifications based on the methodology described by Bolyen et al. (2019). After the raw sequence data were demultiplexed using the demux plugin, primers were removed using the cutadapt plugin (Martin, 2011). Subsequently, the DADA2 plugin was employed to perform quality filtering, denoising, merging, and chimera removal on the obtained sequences (Callahan et al., 2016). To construct a phylogenetic tree, non-singleton amplicon sequence variants (ASVs) were aligned with mafft and the tree was built with fasttree2, both implemented as part of the pipeline (Katoh et al., 2002; Price, Dehal & Arkin, 2009). Alpha-diversity metrics, including Chao1 (Chao, 1984), Shannon (1948), and Pielou's evenness (Pielou, 1966), were estimated with the diversity plugin, and beta-diversity metrics (Bray-Curtis dissimilarity) were calculated. The feature-classifier plugin was utilized to assign taxonomy to the ASVs using the naive Bayes classifier and two reference databases: the SILVA Release 132 database for bacteria and the UNITE Release 8.0 database for fungi, selected based on Kõljalg et al. (2013) and Bokulich et al. (2018), respectively.
Data analytics
The main tools used to analyze the sequence data were QIIME 2 (2019.4) and R packages (v3.2.0; R Core Team, 2015). Using the ASV table in QIIME 2 (2019.4), alpha-diversity indices at the ASV level, including the Chao1 richness index, Shannon index, and Pielou's evenness, were calculated and displayed as box plots. To visualize the differences in alpha diversity between the specimen groups, the data were plotted as box plots using QIIME 2 (2019.4) and the ggplot2 package in R (v3.2.0; R Core Team, 2015). The significance of the differences was confirmed using the Kruskal-Wallis rank-sum test with Dunn's test as a post hoc test (Wickham, 2016). Based on the occurrence of ASVs across samples/groups regardless of their relative abundance, a Venn diagram was created to visualize the shared and unique ASVs among samples or groups using the R package "VennDiagram" (Zaura et al., 2009). PERMANOVA (permutational multivariate analysis of variance) was used to evaluate the significance of the differences in microbiota structure between groups (Anderson & Willis, 2003). Using abundance information from the top 20 orders by average abundance, a heatmap was produced with R's pheatmap package. Principal coordinates analysis (PCoA) is a classical multidimensional scaling (cMDScale) technique (Ramette, 2007); it was carried out by expanding the sample distance matrix in low-dimensional space after projection while preserving the distance relationships between the original samples. The number of permutations in the "permanova" analysis of variance between groups was set to 999, and the analysis was performed using the scikit-bio package in Python. After removing singletons from the feature list, the QIIME 2 (2019.4) "qiime taxa barplot" command was used to visualize the compositional distribution of each sample at the phylum and genus taxonomic levels. In order to better understand the
relationship between soil nutrient content and soil microbial community composition (at the phylum and genus levels), we performed correlation heatmap analysis based on the Spearman rank correlation coefficient. This analysis was performed with the genescloud tools, a free online platform for data analysis (https://www.genescloud.cn) (Liu et al., 2022a). The differences between the chemical characteristics of the soils of the various vegetation types were examined using a one-way ANOVA test, Waller-Duncan post hoc multiple comparisons were performed, and IBM SPSS Statistics 26 was used to process the data.
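The Spearman analysis above reduces to computing a rank correlation between each nutrient variable and each taxon's relative abundance. A minimal NumPy sketch follows; the values are made up for illustration (not the study's data), and ties are ignored, unlike a full implementation such as the genescloud tool or scipy.stats.spearmanr.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    No tie handling; real implementations average ranks over ties."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical values for four samples: soil total carbon (g/kg) and one
# genus' relative abundance, chosen only to illustrate the computation.
total_carbon = np.array([208.9, 150.2, 60.4, 45.1])
genus_abundance = np.array([0.031, 0.024, 0.010, 0.006])
rho = spearman_rho(total_carbon, genus_abundance)
```

Because the two toy vectors are perfectly co-monotone, their rank correlation is 1; the heatmap in the study is simply this quantity computed over every (nutrient, taxon) pair, colored by sign and significance.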
Comparative analysis of the chemical properties of soils of different vegetation types
The soil chemical properties of the four vegetation types were significantly different (p < 0.01; Table 1). The pH value (mean 6.91), total soil carbon (mean 208.9 g/kg), total nitrogen (mean 13.69 g/kg), C/N (mean 15.27), N/P (mean 33.62) and C/P (mean 513.13) were all highest in DP and significantly higher than in the other three vegetation types. The mean total phosphorus content was 0.48 g/kg in ZM, significantly higher than in the others (p = 2.61 × 10^-3). HH and DP had significantly higher contents of each soil chemical property than CR and ZM.
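As a quick internal-consistency check on the reported DP means, the stoichiometric ratios can be recomputed from the element contents; the small residual discrepancies are rounding in the reported means.

```python
# Reported DP means from Table 1: total C and N in g/kg, plus the ratios.
total_c, total_n = 208.9, 13.69
c_to_n, n_to_p, c_to_p = 15.27, 33.62, 513.13

recomputed_cn = total_c / total_n   # ~15.26, matching the reported C/N
total_p = total_n / n_to_p          # implied DP total P, ~0.41 g/kg
recomputed_cp = total_c / total_p   # ~513.0, matching the reported C/P
```

The implied DP total phosphorus (~0.41 g/kg) is also consistent with the statement that ZM, at 0.48 g/kg, had the highest total phosphorus.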
Comparative analysis of soil microbial community diversity in different vegetation types
In total, 1,143,966 high-quality bacterial sequences and 1,040,630 high-quality fungal sequences were obtained from all samples, which provided an opportunity to delve deeper into the bacterial and fungal communities. On average, each sample contained 71,498 bacterial sequences and 65,039 fungal sequences. The numbers of bacterial ASVs were ,481, 10,518, 10,226 and 8,874, respectively (Fig. 1A). The numbers of ASVs in soil fungi were 1,410, 1,090, 563 and 384 (Fig. 1B).
The soil alpha-diversity analysis of the different vegetation types is shown in Fig. 3. Overall, the Chao 1 index, Shannon index and Pielou's evenness index differed significantly between the bacterial communities of the four soil samples (Fig. 3A). HH had the highest Chao 1 index (mean 4572.95), Shannon index (mean 10.88) and Pielou's evenness index (mean 0.91), while the lowest mean Chao 1 index, Shannon index and Pielou's evenness index were found in CR, at 3628.18, 10.22 and 0.88, respectively. Of the four soil samples, only HH and CR differed significantly in pairwise comparison (Chao 1 index; Table S1); the remaining pairwise comparisons were not significantly different (Table S1). Unlike the bacterial community, only the Chao 1 index and Shannon index differed significantly between the fungal communities of the four soil samples, while Pielou's evenness index did not (Fig. 3B). The mean values of the Chao 1 index and Shannon index followed the same pattern as in the bacterial community: HH (527.34; 6.87) > DP (489.56; 6.34) > ZM (231.67; 6.06) > CR (155.71; 5.54). The Chao 1 index and Shannon index of HH were significantly higher than those of CR (p = 0.0065; Table S1), and the Chao 1 index of CR was significantly lower than that of DP (p = 0.038; Table S1). In contrast to the bacterial community, ZM had the highest mean Pielou's evenness index and DP the lowest; the mean Pielou's evenness index of HH was only 0.0004 higher than that of CR. Soil bacterial communities had higher Chao 1, Shannon and Pielou's evenness indices than soil fungal communities.
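The three alpha-diversity indices compared above can be reproduced from an ASV count vector. The sketch below uses base-2 logarithms (QIIME 2's convention, consistent with Shannon values near 10-11 for several thousand bacterial ASVs) and the bias-corrected form of the Chao1 estimator, which may differ slightly from the exact QIIME 2 output.

```python
import numpy as np

def shannon(counts, base=2.0):
    """Shannon diversity H = -sum p_i * log_base(p_i) over observed taxa."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(base))

def pielou(counts, base=2.0):
    """Pielou's evenness J = H / log_base(S), S = number of observed taxa;
    J = 1 when all taxa are equally abundant."""
    s = np.count_nonzero(np.asarray(counts))
    return shannon(counts, base) * np.log(base) / np.log(s)

def chao1(counts):
    """Bias-corrected Chao (1984) richness estimator, built from the
    numbers of singletons (f1) and doubletons (f2)."""
    counts = np.asarray(counts)
    s_obs = int(np.count_nonzero(counts))
    f1 = int(np.sum(counts == 1))
    f2 = int(np.sum(counts == 2))
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))
```

For a perfectly even community the Shannon index equals log2 of the richness and Pielou's evenness equals 1, which is why the bacterial evenness values near 0.9 above indicate highly even communities.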
Comparative analysis of soil microbial community composition in different vegetation types
The relative abundance of the soil microbial (bacterial and fungal) communities in each of the four vegetation types was analyzed at the phylum and genus levels, and communities with relative abundances greater than 1% were selected for comparative analysis (Table S2). The eight bacterial phyla with relative abundance greater than 1% were Actinobacteria, Proteobacteria, Acidobacteria, Chloroflexi, Gemmatimonadetes, Rokubacteria, Planctomycetes and Bacteroidetes (Fig. 4A). Among the four vegetation types, the phyla Actinobacteria, Proteobacteria, Acidobacteria and Chloroflexi had average relative abundances greater than 10%, and they were also among the top 5 phyla in terms of the number of ASVs shared by all soil bacterial communities (Fig. 2A, Table S2). HH had the highest relative abundance of Proteobacteria and Acidobacteria of all samples, at 36.31% and 15.38%, while its relative abundance of Actinobacteria was the lowest, at 24.81%. CR had the highest relative abundance of Actinobacteria (42.99%) and Chloroflexi (18.57%) and the lowest of Proteobacteria (17.39%). Acidobacteria had its lowest relative abundance, 9.63%, in ZM, while Chloroflexi had its lowest relative abundance, 3.28%, in DP, the lowest of all.
At the fungal phylum level, the phyla with relative abundances greater than 1% were Ascomycota, Basidiomycota and Mortierellomycota (Table S2). The only soil fungal phyla with average relative abundances greater than 10% were Ascomycota and Basidiomycota, which were also the phyla in which all the sampled fungal communities shared ASVs (Fig. 2B, Table S2). The relative abundance of each of the remaining eight phyla, with the exception of Mortierellomycota, was not higher than 0.01%. CR had the highest relative abundance of Ascomycota (85.46%) and the lowest relative abundance of Basidiomycota (12.44%). In DP, by contrast, Ascomycota had its lowest relative abundance, at 55.17%, and Basidiomycota its highest, at 31.91%.
Based on the soil microbial community order level, the four sample sites were clustered using the average-linkage algorithm. At the bacterial order level, the HH and DP communities were combined into a single branch, and the ZM and CR communities were in the same branch (Fig. 4A); the bacterial communities of the two sample sites in the same branch, i.e., HH and DP, and ZM and CR, were more similar to each other. At the fungal order level, the ZM and CR communities merged into one branch and then coalesced with DP, while the HH community formed a separate branch (Fig. 4B). This indicated that the CR and ZM communities were most similar to each other and least similar to HH. This clustering result reflects that the soil microbial community characteristics of the four sample sites were closely related to the plant community composition of each site.
Further, a PERMANOVA test was conducted to examine the differences in soil microbial community composition between the four vegetation types (Table 2). The differences in soil microbial community composition between the four samples reached a significant level (p < 0.05); among them, DP and ZM had the most significant differences in microbial community composition. The study showed that the type of above-ground vegetation was an important driver of structural differences in soil microbial community composition, and that differences in soil microbial community composition were more pronounced between vegetation types.
Conversely, a significant positive correlation was observed with the relative abundance of Hygrocybe (r > 0, p < 0.05) (Fig. 6B).The relative abundances of Penicillium, Humicola and Saitozyma were negatively correlated with soil total carbon, total nitrogen, N/P and C/P (r < 0, p < 0.05).Additionally, the relative abundance of Saitozyma displayed a negative correlation with soil C/N (r < 0, p < 0.01).The relative abundances of Aspergillus and Mortierella were negatively correlated solely with soil total phosphorus content (r < 0, p < 0.05).On the other hand, the relative abundance of Geastrum exhibited significant positive correlations with soil total carbon, total nitrogen, C/N, N/P and C/P (r > 0, p < 0.05), while displaying a negative correlation with soil total phosphorus content (r < 0, p < 0.05).
DISCUSSION
Soil is widely recognized as a crucial habitat for microbial communities, making it a key component of the Earth's ecosystem (De Vrieze, 2015). In agriculture and forestry, soil microbial communities, composed of bacteria and fungi, play a vital role in the cycling of materials within the ecosystem (Štursová et al., 2012). The activities of soil microorganisms are intricately linked to their environment, and alterations in environmental factors, such as human disturbances and variations in vegetation types, can have a significant impact on microbial composition and cause changes in their distribution patterns (Nakamura et al., 2003). We observed that topsoil from agricultural land (ZM and CR) possessed a lower number of ASVs compared to woodland areas (HH and DP). Furthermore, microbial community diversity, richness and evenness were lower in farmland than in woodland. This trend may be attributed to the frequent tillage practices in farmland, which disturb the soil and subsequently reduce microbial community richness (Zhang et al., 2018). Additionally, the alpha diversity index of the soil microbial community under Zea mays was higher than that under Citrus reticulata, despite both being agricultural crops. This difference could be explained by the fact that Zea mays is an annual herb, while Citrus reticulata is a perennial tree. Repeated tillage in Zea mays cultivation may moderately disturb the microbial community, while the addition of a substantial amount of litter during the wilt period of Zea mays could enhance microbial community richness (Bressan et al., 2008; Araujo et al., 2023). The significant influence of different vegetation types on soil microbial community diversity can also be attributed to variations in plant characteristics and rooting systems. Perennial plants, with their extensive root systems and greater carbon and nitrogen availability through root deposition and turnover, provide a favorable environment for microbial colonization and nutrient cycling. In contrast, annual crops like maize have a shorter photosynthetic life cycle, whereas citrus trees typically have shallow rooting depths.
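The alpha-diversity comparison above rests on indices such as Shannon's H'. A minimal implementation (with made-up taxon counts) illustrates why an even community scores higher than one dominated by a single taxon:

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0]
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

even_community = [25, 25, 25, 25]        # four equally abundant taxa
dominated_community = [88, 4, 4, 4]      # one taxon dominates

h_even = shannon(even_community)         # equals ln(4), about 1.386
h_dom = shannon(dominated_community)     # strictly lower than h_even
```

The evenness and richness indices reported in such studies (e.g. Pielou, Chao1) build on the same abundance vectors; the counts here are illustrative only.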
The microbial communities in soil exhibit variations in structure and diversity across different vegetation types, as highlighted by Szoboszlay et al. (2017), George et al. (2019), and Santos et al. (2020). Our study also revealed significant disparities in soil microbial composition between woodland and agricultural land, with the highest similarity observed between the ZM and CR soil microbial communities. The dominant bacterial phyla in the topsoil were consistent across the different vegetation types and included Actinobacteria, Proteobacteria, Acidobacteria, and Chloroflexi. The prevalence of these phyla has been demonstrated in multiple previous studies (Peiffer et al., 2013; Byers et al., 2020; Gao et al., 2020), underscoring their ubiquity and importance. Acidothermus is an important bacterial genus within the Acidobacteria. Its relative abundance was higher in agricultural soils than in woodlands, likely because Acidothermus can decompose recalcitrant components, breaking them down into humus and converting them into soil organic matter, while growing crops increase the number of rhizosphere bacteria; together these effects yield larger bacterial populations in agricultural fields than in woodlands (Rajkumar et al., 2010). Fertilizer application also results in significant nutrient inputs, which could further increase the abundance and activity of Acidothermus, an organism essential for the soil carbon and nitrogen cycle (Kielak et al., 2016). Furthermore, the abundance of Gemmatimonadetes was higher in ZM and CR than in the other land use types. Members of this phylum play a role in essential nutrient recycling and the decomposition of cellulose and lignin, highlighting their significance in ecosystem functioning (Xu et al., 2019).
In terms of soil fungal community composition, Ascomycota emerged as the predominant phylum across the different vegetation types, exhibiting the highest relative abundance, with Basidiomycota the next most prevalent phylum, in line with findings from previous studies (Porras-Alfaro et al., 2011; Maestre et al., 2015; Prober et al., 2015). Moreover, our study revealed that the total abundance of the top 10 fungal genera in CR was considerably higher than in the other three soils, with agricultural soils exhibiting a higher overall abundance compared to woodland soils. This can be attributed to the influence of human activities, particularly agricultural management, which has a discernible impact on factors such as vegetation composition, soil water and temperature levels, and mineralization of soil organic matter. These factors subsequently lead to structural changes in soil fungal communities, resulting in variations in diversity and the emergence of new species (Arévalo-Gardini et al., 2020). At the genus level, Fusarium was relatively abundant in ZM and CR soils, which can be attributed to the fact that certain Fusarium species are major causal agents of Fusarium crown rot and Fusarium root rot in crops (Beccari, Covarelli & Nicholson, 2011).
The various vegetation types differed not only in how they regulate the microclimate of the habitat, but also in their processes of material cycling. These processes involve the input of plant-derived nutrients, as well as the decomposition, transformation and accumulation of nutrients, and in turn influence the collaborative evolution of the physico-chemical properties of the soil system and the microbial community (Zhang et al., 2013; Tian et al., 2017; Delgado-Baquerizo et al., 2018). The present study found significant correlations between the relative abundances of dominant genera in the soil microbial community and environmental factors, and the relationships between different taxa and individual environmental factors differed significantly. These findings may be attributed to the varying degrees of influence that different environmental factors exert on soil microorganisms, which is closely related to the ecological niche of each microorganism (Wan & He, 2020). Previous studies have reported significant variations in soil physicochemical factors among different vegetation communities (Xu et al., 2014; Chen et al., 2018). Moreover, research has shown that abiotic properties of the soil, such as soil total nitrogen and total carbon content, are the primary regulators of the structure and composition of soil bacteria and fungi (Thakur & Geisen, 2019). The correlation heat map analysis in this study confirmed this point: the relative abundances of soil bacteria and fungi differed significantly in their relationships with the various soil nutrient factors.
CONCLUSIONS
In conclusion, soil microorganisms play a crucial role in forest ecosystems by facilitating interactions between plants and soil. The composition and distribution of soil microorganisms are influenced by different vegetation types, making it essential to investigate the structural and functional diversity of soil microbial communities to understand plant-soil-microbial relationships and the underlying driving mechanisms. In this study, we employed 16S rRNA and ITS sequencing to analyze the characteristics of soil microbial communities in four distinct vegetation types in the subtropical region of Guangxi, China, yielding noteworthy findings. We observed significant variations in soil microbial structure and diversity among the different vegetation types, with the most pronounced differences occurring between woodland and agricultural land. Overall, woodland soils exhibited greater soil microbial community diversity compared to agricultural soils. The dominant phylum among soil fungi across all samples was Ascomycota, while the dominant bacterial phylum varied between Proteobacteria in woodland soils and Actinobacteria in agricultural soils. Moreover, we identified a significant correlation between the soil nutrient content of the different vegetation types and the relative abundance of dominant microbial taxa.
Figure 2
Figure 2 Soil microorganisms of different vegetation types share phyla and classes of microorganisms. (A) Bacteria; (B) fungi. The horizontal coordinates are phyla, which are also indicated in the legend. HH, a woodland with the dominant tree species Horsfieldia hainanensis; DP, a woodland with the dominant tree species Drypetes perreticulata; ZM, a Zea mays farmland; CR, a Citrus reticulata farmland. Full-size DOI: 10.7717/peerj.16260/fig-2
Figure 4
Figure 4 Heatmap and cluster analysis based on the relative abundance of the top 20 orders identified in the soil microbial communities. (A) Bacteria; (B) fungi. The samples are grouped according to their similarity to each other. Brown represents lower abundance of an order in the corresponding sample, green represents higher abundance, and the color gradient represents the level of abundance. HH, a woodland with the dominant tree species Horsfieldia hainanensis; DP, a woodland with the dominant tree species Drypetes perreticulata; ZM, a Zea mays farmland; CR, a Citrus reticulata farmland. Full-size DOI: 10.7717/peerj.16260/fig-4
Figure 5
Figure 5 Correlation heatmap between phyla with relative abundance greater than 1% and soil nutrient factors, based on Spearman's algorithm. HH, a woodland with the dominant tree species Horsfieldia hainanensis; DP, a woodland with the dominant tree species Drypetes perreticulata; ZM, a Zea mays farmland; CR, a Citrus reticulata farmland. Full-size DOI: 10.7717/peerj.16260/fig-5
Figure 6
Figure 6 Correlation heat maps between soil nutrient factors and bacteria (A) and fungi (B) with relative abundance greater than 1%, based on Spearman's algorithm. TC, total carbon; TN, total nitrogen; TP, total phosphorus; C/N, carbon to nitrogen ratio; N/P, nitrogen to phosphorus ratio; C/P, carbon to phosphorus ratio. Full-size DOI: 10.7717/peerj.16260/fig-6
Table 1 Soil chemical properties of different vegetation types.
Notes. Data are mean ± standard error. Different lowercase letters indicate significant differences at the 0.05 level. C/N, carbon to nitrogen ratio; N/P, nitrogen to phosphorus ratio; C/P, carbon to phosphorus ratio; HH, a woodland with the dominant tree species Horsfieldia hainanensis; DP, a woodland with the dominant tree species Drypetes perreticulata; ZM, a Zea mays farmland; CR, a Citrus reticulata farmland.
Table 2 Analysis of differences between groups.
Notes. HH, a woodland with the dominant tree species Horsfieldia hainanensis; DP, a woodland with the dominant tree species Drypetes perreticulata; ZM, a Zea mays farmland; CR, a Citrus reticulata farmland. | 2023-10-21T15:09:47.476Z | 2023-10-19T00:00:00.000 | {
"year": 2023,
"sha1": "060d90e302d365a046a2549d245cfe6bd2eb5bd2",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ebfffb16ebcc82189dcc318c2c163315e5a75db4",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": []
} |
215577701 | pes2o/s2orc | v3-fos-license | Increased mitochondrial ATP production capacity in brain of healthy mice and a mouse model of isolated complex I deficiency after isoflurane anesthesia
We reported before that the minimal alveolar concentration (MAC) of isoflurane is decreased in complex I-deficient mice lacking the NDUFS4 subunit of the respiratory chain (RC) (1.55 and 0.81 % at postnatal (PN) 22–25 days and 1.68 and 0.65 % at PN 31–34 days for wildtype (WT) and CI-deficient KO, respectively). A more severe respiratory depression was caused by 1.0 MAC isoflurane in KO mice (respiratory rate values of 86 and 45 at PN 22–25 days and 69 and 29 at PN 31–34 days for anesthetized WT and KO, respectively). Here, we address the idea that isoflurane anesthesia causes a much larger decrease in brain mitochondrial ATP production in KO mice thus explaining their increased sensitivity to this anesthetic. Brains from WT and KO mice of the above study were removed immediately after MAC determination at PN 31–34 days and a mitochondria-enriched fraction was prepared. Aliquots were used for measurement of maximal ATP production in the presence of pyruvate, malate, ADP and creatine and, after freeze-thawing, the maximal activity of the individual RC complexes in the presence of complex-specific substrates. CI activity was dramatically decreased in KO, whereas ATP production was decreased by only 26 % (p < 0.05). The activities of CII, CIII, and CIV were the same for WT and KO. Isoflurane anesthesia decreased the activity of CI by 30 % (p < 0.001) in WT. In sharp contrast, it increased the activity of CII by 37 % (p < 0.001) and 50 % (p < 0.001) and that of CIII by 37 % (p < 0.001) and 40 % (p < 0.001) in WT and KO, respectively, whereas it tended to increase that of CIV in both WT and KO. Isoflurane anesthesia increased ATP production by 52 and 69 % in WT (p < 0.05) and KO (p < 0.01), respectively. Together these findings indicate that isoflurane anesthesia interferes positively rather than negatively with the ability of CI-deficient mice brain mitochondria to convert their main substrate pyruvate into ATP.
Introduction
Inhaled anesthetics such as isoflurane and sevoflurane are extensively used in clinical practice, but much concern remains regarding their possible detrimental effects, particularly on the developing brain (Loepke and Soriano 2008; Hays and Deshpande 2011; Chiao and Zuo 2014). Recent work shows that these anesthetics can induce mitochondrial fission in the developing brain, suggesting a mitochondrial component in the process of anesthesia-induced brain damage (Boscolo et al 2013).
Further evidence for a putative role of mitochondria in the process of anesthesia-induced brain damage comes from the observation that children with a deficiency of complex I (CI), but not complex III (CIII), of the respiratory chain (RC), are hypersensitive to volatile anesthetics (Morgan et al 2002; Driessen et al 2007). However, it is still debated whether these anesthetics put CI-deficient children at increased risk of neurological complications (Niezgoda and Morgan 2013). Recent studies with mice genetically engineered to lack the NDUFS4 subunit of CI of the RC (referred to herein as "CI-deficient KO mice") corroborate the finding that the sensitivity to volatile anesthetics is increased in CI deficiency (Quintana et al 2012a, b; Roelofs et al 2014).
Regarding a putative role of CI in the mechanism of action of volatile anesthetics, studies investigating the effects of direct application of these anesthetics to intact and broken mitochondria conclude that low concentrations reversibly inhibit the oxidation of CI-, but not CII-, linked substrates (Miller and Hunter 1970; Harris et al 1971). Another line of evidence supporting a direct action of volatile anesthetics on CI comes from studies with C. elegans. This organism does not possess specialized respiratory systems and complex circulatory organs and relies entirely on the diffusion of gases across the gut lumen and the cuticle (Van Voorhies and Ward 2000). Loss-of-function mutations in CI, but not CII, genes render these worms hypersensitive to volatile anesthetics in terms of immobility induction (Kayser et al 1999; Falk et al 2006). Analysis of several CI-deficient worm strains with different oxidation rates of CI-linked substrates revealed that the anesthetic sensitivity increased with decreasing oxidation rate (Falk et al 2006). In agreement with these observations, isoflurane was demonstrated to bind to a site distal to the flavoprotein subcomplex of CI (Kayser et al 2011). Taken together, these studies indicate that volatile anesthetics bind to CI to reduce its activity and that this inhibitory effect is increased by loss-of-function mutations in CI.
The RC generates the proton motive force used by CV (F0F1-ATP synthase) to produce ATP. With the activity of CI being reduced in CI-deficient KO mice and CI being a direct target of volatile anesthetics, it has been speculated that these anesthetics reduce brain mitochondrial ATP production to a much larger extent in these KO mice than in WT mice. As a consequence, brain ATP levels would be much more decreased in anesthetized KO mice, thus explaining their increased sensitivity to volatile anesthetics (Kayser et al 2004).
To test this idea, we determined the maximal rate of ATP production and the maximal activity of the individual RC complexes in a whole brain mitochondria-enriched fraction from the WT and CI-deficient KO mice described in the previous study (Roelofs et al 2014).
We show that in vivo treatment of WT mice with isoflurane decreased the maximal activity of CI, while it increased maximal ATP production. Isoflurane anesthesia also increased maximal ATP production in CI-deficient KO mice. In both WT and KO mice, the effect of isoflurane on brain mitochondrial ATP production was accompanied by an increase in CII and CIII maximal activity and a tendency to increase for CIV. Our data show that isoflurane anesthesia improves rather than worsens the ATP generating ability of brain mitochondria in both WT and CI-deficient KO mice.
Materials and methods
All experiments were approved by the Regional Animal Ethics Committee (Nijmegen, The Netherlands) and performed under the guidelines of the Dutch Council for Animal Care. All efforts were made to reduce animal suffering and number of animals used in this study.
Animals
This study uses the brains from the WT (ndufs4 +/+ ) and KO (ndufs4 -/- ) mice included in our previous study on the anesthetic and respiratory depressant effects of isoflurane (Roelofs et al 2014). The genotype of the mice was confirmed by polymerase chain reaction, and both male and female mice were included. Mice were group-housed at the central animal facility (CDL) of the Radboud University at 22 °C on a 12 h day/night rhythm. The animals had ad libitum access to food and water and were fed a standard animal diet (Ssniff GmbH, Soest, Germany; V1534-300 R/M-H).
Isoflurane administration
WT (n=5) and KO (n=7) mice were subjected twice, i.e., at PN 22-25 and PN 31-34 days, to a well-established anesthesia protocol to determine the minimal alveolar concentration (MAC) of isoflurane (Roelofs et al 2014). Briefly, the isoflurane concentration was increased with steps of 0.2 % until the response to electrical stimulation of the hind paw was lost. When this point was reached the isoflurane concentration was decreased until return of the response. After the first MAC determination at PN 22-25 days, the animals were returned to their housing and determination of the MAC was repeated at PN 31-34 days.
Tissue harvesting for biochemical analyses
Animals were sacrificed at PN 31-34 days by cervical dislocation. Isoflurane-treated mice were sacrificed immediately after determination of the MAC at PN 31-34 days. Whole brains were transferred to ice-cold SEF buffer (0.25 mol/L sucrose, 2 mmol/L EDTA, 10 mmol/L potassium phosphate, pH 7.4), minced with a Sorvall TC2 tissue chopper and homogenized with a glass/Teflon Potter Elvehjem homogenizer within 1 h of harvest. The homogenate was centrifuged at 600 g and a portion of the supernatant was used for measurement of the maximal rates of pyruvate oxidation and ATP production. The remainder of the 600 g supernatant was frozen in 10 μl aliquots in liquid nitrogen and kept at −80°C for maximal enzymatic activity measurements. The protein concentration was measured according to Rodenburg (Rodenburg 2011).
Pyruvate oxidation and ATP production measurements
To determine the maximal rate of pyruvate oxidation, the freshly prepared 600 g supernatant was incubated with radiolabeled substrate ([1-14C]pyruvate). After 20 min the reaction was stopped and the amount of liberated radioactive CO2 (14CO2) was quantified (Janssen et al 2006). The assay medium (pH 7.4) contained K+-phosphate buffer (30 mM; source of Pi), KCl (75 mM), Tris (8 mM), K-EDTA (1.6 mM), P1,P5-di(adenosine-5′)pentaphosphate (Ap5A; 0.2 mM), MgCl2 (0.5 mM), ADP (2 mM), creatine (20 mM), malate (1 mM), and [1-14C]pyruvate (1 mM). Ap5A is a potent adenylate kinase inhibitor required to prevent interference of the adenylate kinase reaction with the levels of produced ATP, as well as with the excess of ADP required for this assay. For measurement of the maximal rate of ATP production, the same assay medium was used but with unlabeled pyruvate instead of [1-14C]pyruvate. After 20 min, the reaction was stopped by addition of 0.1 M HClO4. The reaction mixture was centrifuged at 14,000 g for 2 min at 2 °C. To the supernatant, 1.2 vol (v/v) of 0.333 M KHCO3 was added, and this mixture was diluted twofold. The amounts of ATP and phosphocreatine formed during the reaction were measured in the supernatant using a Konelab 20XT auto-analyzer (Thermo Scientific). The mitochondrial ATP production rate was corrected using a parallel assay in which residual glycolysis was blocked by arsenite (2 mM) (Janssen et al 2006).
Respiratory chain enzyme assays
The liquid nitrogen frozen portion of the 600 g supernatant was thawed and used for measurement of the maximal activity of the complexes I (CI), II (CII), III (CIII), and IV (CIV) and citrate synthase (CS), as described by Rodenburg (Rodenburg 2011).
Statistical analysis
Statistical analysis was performed using Prism 5 (GraphPad Software Inc., La Jolla, CA). Normal distribution of the datasets was confirmed using the Lilliefors test. Results are expressed as mean ± SD, and comparisons between groups were performed using two-way analysis of variance (ANOVA) and Bonferroni's post test. Statistical significance was set at p < 0.05.
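The Bonferroni adjustment used in the post test simply scales each pairwise p-value by the number of comparisons. Below is a minimal sketch of that correction with invented ATP-production values for the four groups (the paper's actual analysis was a two-way ANOVA in Prism, which is not reproduced here):

```python
from itertools import combinations
import numpy as np
from scipy import stats

# Hypothetical ATP-production rates (arbitrary units) for the four groups.
groups = {
    "WT":     np.array([30.1, 28.4, 31.2, 29.5, 30.8]),
    "WT+iso": np.array([44.9, 46.2, 43.1, 47.0, 45.5]),
    "KO":     np.array([21.7, 23.0, 22.4, 20.9, 22.8]),
    "KO+iso": np.array([37.2, 38.9, 36.4, 39.5, 37.8]),
}

pairs = list(combinations(groups, 2))
results = {}
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    # Bonferroni: multiply the raw p-value by the number of comparisons (cap at 1).
    results[(a, b)] = min(p * len(pairs), 1.0)
```

With these invented values the isoflurane effect in both genotypes survives the correction; real post-test p-values would be read off the ANOVA cell means.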
Results
In mitochondrial enzyme diagnostics, the activity of CS, which is an indicator of mitochondrial mass, is used for normalization between mitochondria-enriched preparations (Rodenburg 2011). The CS activity per mg protein was the same for WT and KO mice and did not change upon isoflurane anesthesia (Fig. 1). In the remainder of this paper all values are expressed per mg protein.
The maximal rate of ATP production was significantly decreased by 26 % in untreated KO as compared to untreated WT (Fig. 2a). Unexpectedly, isoflurane anesthesia significantly increased this rate in both WT and KO by 52 and 69 %, respectively. The maximal rate of pyruvate oxidation revealed a tendency to be lower in untreated KO as compared to untreated WT and isoflurane anesthesia tended to increase this rate in both WT and KO by 26 and 50 %, respectively (Fig. 2b). To evaluate the efficiency by which the oxidation of pyruvate was coupled to the production of ATP, we calculated the ratio of ATP production to pyruvate oxidation. The ratios obtained showed similar values for untreated and treated WT and KO mice (Fig. 2c).
Analysis of the maximal activity of CI revealed a significant decrease, by 30 %, in isoflurane-treated WT as compared to untreated WT (Fig. 3a). As expected, this activity was virtually absent in untreated KO, and isoflurane treatment did not lead to any alteration. The maximal activity of CII was similar between untreated WT and untreated KO (Fig. 3b). Isoflurane anesthesia significantly increased this activity by 37 and 50 % in WT and KO, respectively. The same results were obtained for CIII (Fig. 3c). Isoflurane anesthesia significantly increased the maximal activity of this complex by 37 and 40 % in WT and KO, respectively. For CIV, no difference in maximal activity was observed between untreated WT and untreated KO (Fig. 3d). Although isoflurane anesthesia tended to increase this activity in both WT (17 %) and KO (16 %), no statistical significance was reached.

Discussion

We previously reported that 1.0 MAC isoflurane caused a much more severe respiratory depression in KO than in WT mice (Roelofs et al 2014). This decrease in respiratory rate was paralleled by neurological complications that progressed with age (Quintana et al 2010). Although a decrease in respiratory rate suggests ATP shortage, other disease mechanisms remain to be considered, including increased production of reactive oxygen species (Koopman et al 2013) and triggering of innate immune responses (Yu et al 2015).
Here, we show that maximal ATP production from pyruvate and malate is decreased by 26 % in a mitochondria-enriched fraction from KO mice brain at PN 31-34 days. The same observation was made in another NDUFS4 KO mouse model (Leong et al 2012). At first glance, this relatively moderate decrease seems difficult to reconcile with the virtually complete absence of active CI in a freeze-thawed aliquot of this mitochondria-enriched fraction (see also Kruse et al 2008 and Calvaruso et al 2012). However, there is evidence that CIII stabilizes NDUFS4-lacking CI to provide partial activity (Calvaruso et al 2012). This stabilization may be lost upon freeze-thawing of the mitochondria-enriched fraction. Intriguingly, isoflurane anesthesia of KO mice restored brain mitochondrial ATP production to WT levels. This result indicates that the underlying process does not involve irreversible damage, as is thought to occur at increased levels of reactive oxygen species. It is tempting to speculate that isoflurane improves the stabilization of NDUFS4-lacking CI by CIII.
The most intriguing observation of the present study is that isoflurane anesthesia increased rather than decreased brain mitochondrial ATP production in both WT and CI-deficient KO mice. To the best of our knowledge, this is the first report describing such an effect of a volatile anesthetic (see also Miro et al 1999). A recent in vivo study showed that isoflurane anesthesia decreased ATP levels in mouse brain (Wang et al 2015). Together, these data lead us to postulate that isoflurane acts outside the mitochondrion to reduce the supply of pyruvate, which is the main mitochondrial substrate in brain.
The increase in brain mitochondrial ATP production observed in anesthetized WT and KO mice was paralleled by a tendency of the pyruvate oxidation rate to increase, whereas the CS activity remained unaltered. This may suggest that pyruvate dehydrogenase is not the rate-limiting enzyme in the untreated condition. Isoflurane anesthesia significantly increased the activities of CII and CIII and tended to increase the activity of CIV. This may suggest that the activity of the RC is rate-limiting in the untreated condition and that isoflurane can simultaneously increase their activities by a hitherto unknown mechanism, which may, however, be triggered by ATP shortage (see above). In sharp contrast, the activity of CI was significantly decreased in anesthetized WT mice, indicating that this enzyme is not rate-limiting in the untreated condition.

Fig. 2 Effect of isoflurane on ATP production. Untreated and isoflurane-treated WT and KO mice were sacrificed at PD 31-34 and a mitochondria-enriched fraction was prepared from total brain. The rates of ATP production and pyruvate oxidation were measured at non-rate-limiting concentrations of pyruvate, malate, ADP, and creatine and expressed per mg protein. a The ATP production rate measured under these conditions was significantly decreased in untreated KO as compared to untreated WT (indicated with a* as compared with a). Isoflurane anesthesia significantly increased this rate in both WT and KO. b The maximal pyruvate oxidation rate tended to be decreased in untreated KO and increased in isoflurane-treated WT and KO. However, none of these differences reached statistical significance. c The ratio of the rate of ATP production to that of pyruvate oxidation, reflecting the coupling efficiency, was not significantly different between the different experimental conditions. The data presented are the mean ± SD of the number of animals indicated in the caption to Fig. 1. Statistical significance is displayed as * (p<0.05) and ** (p<0.01)

Analysis of the ratio of ATP production to pyruvate oxidation revealed similar values for WT and CI-deficient KO mice, regardless of whether they were anesthetized or not. This result indicates, firstly, that the absence of the NDUFS4 subunit does not alter the coupling of pyruvate oxidation to ATP production and, secondly, that in vivo exposure to isoflurane does not alter this efficiency. Pyruvate oxidation is measured in the presence of an excess of pyruvate and malate, which is converted into oxaloacetate to trap acetyl-CoA, and an excess of ADP, which is converted into ATP. Under these conditions, a decreased efficiency of the coupling of pyruvate oxidation to ATP production would be indicative of an increased proton leak across the inner mitochondrial membrane. The present finding that in vivo exposure to isoflurane does not alter the coupling efficiency is of relevance since in vitro studies showed that direct addition of halothane to isolated mitochondria caused a limited uncoupling at concentrations between 0.5 and 2 % used clinically to achieve anesthesia (Miller and Hunter 1970).
Thus far, only inhibitory effects of volatile anesthetics on mitochondrial ATP production have been reported (Miller and Hunter 1970; Harris et al 1971). Available evidence indicates that volatile anesthetics act directly on CI (Kayser et al 2011) to decrease its activity (Miller and Hunter 1970; Harris et al 1971). Crucially, these studies employ direct application of the anesthetic to isolated mitochondria. For example, halothane was shown to dose-dependently inhibit the rate of ADP-stimulated oxygen consumption in the presence of CI-, but not CII-linked substrates (Miller and Hunter 1970). Direct application of halothane to deoxycholate-treated mitochondria confirmed that CI, and not CII, was the primary target of the anesthetic (Harris et al 1971). The present study shows that the in vitro activity of CI was also decreased after in vivo exposure. The inhibitory effect in direct application studies was reversed in less than 5 min (Miller and Hunter 1970; Harris et al 1971), indicating that the mechanism of inhibition must be different from that in the present study. Our finding of a sustained effect of volatile anesthetics is corroborated by a recent study showing that 4 h of anesthesia at the larval stage of C. elegans caused a marked reduction of the chemotactic response at day 4 of life (Gentry et al 2013). Also in this study, it was observed that the degree of reduction was significantly greater in worms with a loss-of-function mutation in a CI gene than in wild type worms.

Fig. 3 Effect of isoflurane on the enzymatic activities of the mitochondrial respiratory chain complexes. Untreated and isoflurane-treated WT and KO mice were sacrificed at PD 31-34 and a mitochondria-enriched fraction was prepared from total brain. The activities of the four respiratory chain complexes were measured under non-rate-limiting substrate conditions and expressed per mg protein. The values obtained reflect the maximum catalytic capacities of the complexes. a As expected, KO brain was virtually devoid of CI activity (indicated with a** as compared with a). Isoflurane anesthesia significantly decreased this activity in WT. b The activity of CII did not differ between WT and KO. Isoflurane anesthesia significantly increased this activity to the same extent in both WT and KO. c The activity of CIII was the same for WT and KO, and also in this case isoflurane anesthesia increased this activity to the same extent in both WT and KO. d Also the activity of CIV was the same for WT and KO. Isoflurane anesthesia tended to increase this activity in both cases but this effect did not reach statistical significance. The data presented are the mean ± SD of the number of animals indicated in the caption to Fig. 1. Statistical significance is displayed as ** (p<0.01) and *** (p<0.001)
Unfortunately, our anesthesia protocol does not allow us to draw conclusions on whether the effects of isoflurane anesthesia at PN 31-34 days are acute or whether, and, if so, to which extent, they are a consequence of processes triggered during the first period of isoflurane anesthesia at PN 22-25 days. Long-term effects of volatile anesthetics have been reported in the literature and can be neuroprotective, as observed in a variety of animal stroke models, reviewed in Burchell et al (2013). A limitation of our study is that we used a mitochondria-enriched fraction from whole brain homogenate. Since regional susceptibilities to mitochondrial dysfunction have been reported within the CNS (Leong et al 2012; Pinto et al 2012; Quintana et al 2012a, b), a further detailed investigation should involve specific regions of the brain. Thus, from the results presented in this study, we conclude that isoflurane exposure might preserve the ATP production capacity in brain mitochondria of CI-deficient KO mice and that the isoflurane hypersensitivity in these mice is not a consequence of ATP deficits in the brain. | 2016-05-12T22:15:10.714Z | 2015-08-27T00:00:00.000 | {
Friction Anomalies at First-Order Transition Spinodals: 1T-TaS$_2$
Revealing phase transitions of solids through mechanical anomalies in the friction of nanotips sliding on their surfaces is an unconventional and instructive tool for continuous transitions, unexplored for first-order ones. Owing to slow nucleation, first-order structural transformations generally do not occur at the precise crossing of free energies, but hysteretically, near the spinodal temperatures at which, below and above the thermodynamic transition temperature, one or the other metastable free-energy branch terminates. The spinodal transformation, a collective one-shot event with no heat-capacity anomaly, is easy to trigger by a weak external perturbation. Here we propose that even the gossamer mechanical action of an AFM tip may locally act as a surface trigger, narrowly preempting the spontaneous spinodal transformation and making it observable as a nanofrictional anomaly. Confirming this expectation, the CCDW-NCCDW first-order transition of the important layer compound 1T-TaS$_2$ is shown to provide a demonstration of this effect.
INTRODUCTION
The development of fresh theoretical and experimental tools aimed at revealing and understanding solid-state phase transitions through their surface nanomechanical and nanofrictional effects is an ongoing, unconventional, yet very useful approach. Friction of nanosized tips on dry solid surfaces has been proposed to represent what one might term "Braille spectroscopy": reading the physics underneath by touching [1]. For second-order, continuous phase transitions, a notable example has been the detection of displacive structural transformations as reflected by AFM dissipation anomalies caused by critical fluctuations, predicted [2] and observed in noncontact friction on SrTiO3 [3]. Another, non-structural example is the drop of electronic friction observed upon cooling a metal below the superconducting Tc, in correspondence with the opening of the BCS gap [4]. The injection of a 2π phase slip in the local order parameter of an incommensurate phase is an additional interesting event that can be triggered by an AFM tip [5]. The vast majority of solid-state structural and electronic phase transitions is, however, of discontinuous, first-order type. Should one expect a frictional anomaly at the surface of a solid which undergoes a first-order structural transition? Lacking critical fluctuations, might the frictional signature not just consist of some unpredictable and unremarkable jump? This scenario is, we propose, unduly pessimistic: not one but two frictional anomalies are to be expected at a first-order transition. They should occur at the hysteresis end-point temperatures, where both the heating and cooling transformations are close in character to spinodal, the point where the dissolution of a metastable state takes place.
At these two temperatures, on both sides of the thermodynamic transition temperature, an Atomic Force Microscope/Friction Force Microscope (AFM/FFM) dissipation peak is to be expected as the tip moves on, sweeping in the course of time newer and newer surface areas where the near-spinodal transformation can be "harvested". These predictions are first argued theoretically and then demonstrated experimentally in the important layer compound 1T-TaS 2 .
MEAN-FIELD THEORETICAL MODEL
Beginning with theory, we adopt the simplest mean-field Landau-Ginzburg-Wilson [6] or Cahn-arXiv:1709.02602v1 [cond-mat.mes-hall] 8 Sep 2017 Allen [7] approach, which works reasonably well for many structural transitions. Assume the schematic model bulk solid free energy density (where r, u, J are positive parameters) governing the evolution of a generic, non-conserved real order parameter Ψ supposed to represent collectively all mechanically relevant thermodynamic variables, as a function of spatial coordinate ρ (in this schematic outline, we provisionally ignore the distinction between surface and bulk). The external field h includes here a uniform term describing the free energy imbalance between the two minima at negative and positive Ψ (h thus represents here the temperature deviation from the first-order thermodynamic transition point) plus a localized mechanical perturbation representing the tip which, when moving in the course of time, will undergo mechanical dissipation, observable as friction. In the spatially uniform, field-free case (∇Ψ) 2 = 0, h = 0, two equivalent free energy minima F 0 = −r 2 /4u occur at Ψ ± 0 = ± r/u identifying the two phases. A first order transition occurs between them when a growing uniform h causes Ψ to switch from initially positive to negative or viceversa. For the transition to occur near h = 0 nucleation is required, allowing thermal crossing of the large free energy barrier between the two nearly equivalent states. Nucleation is a generally slow process so that a Ψ > 0 metastable state often persists up to large positive fields h. Upon reversing the field, the Ψ < 0 state can similarly persist for negative h, giving rise to hysteresis. The maximum theoretical width of the hysteresis cycle is determined by the two spinodal points h s = ±2r 3/2 /3 3/2 u 1/2 , where the transformation must necessarily take place because at each of them the metastable free energy minimumum disappears, as sketched in Fig.1(a). 
At the spinodal points, Ψ_s = ±√(r/3u) and f_s = (1/12) r²/u, the transformation occurs collectively rather than locally, as amply described in the literature and reviewed in a different context in [8].
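The spinodal values quoted above follow from a one-line calculation on the uniform part of the Landau free energy. As a sketch (assuming, consistently with the narrative, that the field enters as +hΨ, so that positive h destabilizes the Ψ > 0 phase):

```latex
% Stationarity and inflection of the uniform free energy
% f(\Psi) = -\tfrac{r}{2}\Psi^2 + \tfrac{u}{4}\Psi^4 + h\Psi :
f'(\Psi) = -r\Psi + u\Psi^3 + h = 0, \qquad
f''(\Psi) = -r + 3u\Psi^2 = 0 .
% The second condition gives the spinodal order parameter,
\Psi_s = \pm\sqrt{\frac{r}{3u}},
% and substituting u\Psi_s^3 = (r/3)\Psi_s into the first gives the spinodal field,
h_s = r\Psi_s - u\Psi_s^3 = \frac{2r}{3}\,\Psi_s
    = \pm\frac{2\,r^{3/2}}{3^{3/2}\,u^{1/2}} .
```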
A spinodal point is associated with the collective dynamics of all macroscopic variables accompanying the first-order transition, including structure, volume, conductivity, etc. As that point is approached, even a small perturbation can locally overcome the marginal free-energy barrier and trigger a large-scale transformation from the metastable to the stable state, as pictured in Fig. 1(a). If that perturbation is provided by a sliding tip, the small but finite triggering work will show up as a frictional dissipation burst. As the tip moves on, it can convert newer and newer patches from metastable to stable, Fig. 1(b). The pursuit of the frictional consequences of a first-order transition close to its spinodal points is our goal. Starting with Ψ > 0 and turning up the uniform field h → h_s (i.e., lowering the temperature towards the real system's cooling spinodal temperature from above), we observe that f[Ψ] has a local minimum, a metastable state, protected by a marginal barrier ∆ which disappears at the spinodal point h = h_s, Fig. 1(c).
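The vanishing of the barrier ∆ at the spinodal is easy to check numerically. The following is our own minimal sketch, not the paper's code; it assumes the field enters the uniform free energy as +hΨ, so that h > 0 destabilizes the Ψ > 0 minimum:

```python
import numpy as np

# Uniform Landau free energy f(psi) = -(r/2) psi^2 + (u/4) psi^4 + h psi.
# Stationary points solve the cubic u psi^3 - r psi + h = 0; the barrier
# protecting the metastable psi > 0 minimum shrinks to zero as h approaches
# the spinodal field h_s = 2 r^(3/2) / (3^(3/2) u^(1/2)).

def f(psi, r, u, h):
    return -0.5 * r * psi**2 + 0.25 * u * psi**4 + h * psi

def barrier(r, u, h):
    """Barrier Delta protecting the metastable psi > 0 minimum (0 past the spinodal)."""
    roots = np.roots([u, 0.0, -r, h])
    real = np.sort(roots[np.abs(roots.imag) < 1e-8].real)
    if len(real) < 3:
        return 0.0                            # metastable minimum is gone
    psi_meta, psi_top = real[-1], real[1]     # metastable minimum, local maximum
    return f(psi_top, r, u, h) - f(psi_meta, r, u, h)

r, u = 10.0, 10.0                             # same parameter values quoted for Fig. 2
h_s = 2.0 * r**1.5 / (3.0**1.5 * u**0.5)
for frac in (0.2, 0.6, 0.95):
    print(f"h = {frac:.2f} h_s  ->  barrier Delta = {barrier(r, u, frac * h_s):.4f}")
```

The printed barrier decreases monotonically towards zero as h approaches h_s, which is the mechanism exploited by the tip: near the spinodal, an arbitrarily weak perturbation suffices to push the system over.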
Before that point is reached, in the metastable state, a weak static local perturbation δh(ρ) imparts to the state variable Ψ a local modification whose effect, initially small, grows as the spinodal point is approached. If moreover the perturbing agent, in our case the nano-tip, moves in space with velocity v, so that h_tip(ρ, t) = h_0(ρ − vt), then it may or may not succeed in locally triggering the spinodal transformation. If it does, some mechanical work will be spent, and that expense will show up in the form of a burst in the tip's mechanical dissipation.
Four different frictional regimes are crossed as a function of h (i.e., of temperature), for example when evolving from a high-temperature metastable state Ψ_m to a low-temperature stable state Ψ_M on cooling. In regime (I), h is still far from the spinodal point h_s, the free-energy barrier protecting the metastable state is substantial, the tip perturbation is too weak to push the system over it, and the tip friction is unaffected. In a second regime (II), the tip may succeed in "wetting" its surroundings with a small converted nucleus Ψ_M of radius R_tip, yet still be unable to overcome the nucleation barrier if R_tip < R_c, the effective "inhomogeneous" nucleation critical radius. Depending on whether this nucleus does or does not reconvert back to Ψ_m as the tip moves on, there will or will not be frictional work. Assuming reconversion (for slow tip motion), the friction is again zero, because the transformed nucleus is carried along adiabatically by the tip. In regime (III), as h increases, the nucleation radius eventually becomes smaller than the tip perturbation radius, R_c < R_tip. The system suddenly overcomes the barrier as in Fig. 1(a), thus provoking the irreversible transformation Ψ_m → Ψ_M, extending in principle out to infinite distance; in practice, out to some macroscopically determined radius L defined by the sample quality, defects, and morphology. At this threshold temperature the mean tip frictional dissipation will suddenly jump from zero to finite, thereafter decreasing smoothly and eventually vanishing as the true spinodal point h → h_s is reached, regime (IV). Our model predicts a frictional dissipation burst of the shape shown in Fig. 2, a behaviour which we now describe before seeking an experimental demonstration.
Consider a configuration where, as in classical nucleation theory (CNT), the system is forced to evolve from Ψ = Ψ_M at ρ = 0 to Ψ = Ψ_m at ρ = ∞. A trial function that shows this behaviour is Ψ(ρ) = Ψ_m + [(Ψ_M − Ψ_m)/2][1 − tanh((ρ − R_0)/γ)], where R_0 is the radius of the droplet and γ its interface width. We consider the nucleation barrier F[Ψ; R_0, γ], which depends variationally on the two parameters R_0 and γ. Based on that we can numerically calculate the homogeneous nucleation radius R_c and its corresponding barrier. Now add to F[Ψ] the tip perturbation h_tip(ρ) = h_tip Θ(|ρ − x_tip| − R_tip), whose effect is to lower the local barrier as sketched in Fig. 1(a). At h = h_c the nucleation radius becomes smaller than the wetting radius, R_c < R_tip, the local nucleation barrier disappears, and the massive transformation is triggered. The tip will spend at that point the one-shot triggering work W = F_0. This work, as mentioned, is paid only once, because after conversion the stable phase Ψ_M extends macroscopically away, and the system subsequently becomes insensitive to the tip. It should be stressed here that, unlike second-order transitions between equilibrium states, which take place reversibly as the temperature is cycled across the critical point, the spinodal transformation takes place only once as the spinodal point is first crossed (unless the system is, as it were, "recharged"). In a real system, however, the size of the transformed region is limited by defects to some average radius L determined by, e.g., grain boundaries, steps, etc., so that newer and newer metastable surface areas can be "harvested" in the course of time, as sketched in Fig. 1(b). The tip, moving with velocity v, will explore fresh untransformed metastable regions at a rate µ ∼ v/L (Fig. 1(c)), therefore dissipating a frictional power P = Wµ = F_0 v/L, a quantity which is nonzero in the temperature range corresponding to h_c < h < h_s, as depicted in Fig. 2.
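The variational calculation just described is not reproduced here. As a rough stand-in, the threshold h_c can be illustrated with a two-dimensional classical-nucleation sketch of our own: for a circular droplet, ΔF(R) = 2πRσ − πR²δf gives a critical radius R_c = σ/δf, and the tip triggers the transformation once R_c drops below R_tip. The wall tension σ and the tip radius R_tip are assumed inputs, and the +hΨ convention (h > 0 destabilizing Ψ > 0) is assumed:

```python
import numpy as np

def f(psi, r, u, h):
    # Uniform Landau free energy with the field entering as +h*psi.
    return -0.5 * r * psi**2 + 0.25 * u * psi**4 + h * psi

def critical_radius(r, u, h, sigma):
    """2D CNT: F(R) = 2 pi R sigma - pi R^2 df is maximal at R_c = sigma / df."""
    roots = np.roots([u, 0.0, -r, h])
    real = np.sort(roots[np.abs(roots.imag) < 1e-8].real)
    psi_stable, psi_meta = real[0], real[-1]            # valid for 0 < h < h_s
    df = f(psi_meta, r, u, h) - f(psi_stable, r, u, h)  # bulk gain per unit area
    return sigma / df

# Hypothetical parameters: sigma and R_tip are illustrative, not from the paper.
r, u, sigma = 10.0, 10.0, 1.0
R_tip = 1.1 * np.sqrt(r / u)
h_s = 2.0 * r**1.5 / (3.0**1.5 * u**0.5)
fields = np.linspace(0.05 * h_s, 0.95 * h_s, 400)
Rc = np.array([critical_radius(r, u, h, sigma) for h in fields])
h_c = fields[np.argmax(Rc <= R_tip)]   # first field at which the tip can trigger
print(f"h_c/h_s = {h_c / h_s:.2f}")
```

Since R_c diverges as h → 0 and shrinks monotonically with h, the crossing R_c = R_tip always occurs at some h_c strictly below h_s, which is why the tip preempts the spontaneous spinodal transformation.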
FIG. 2: Predicted tip dissipation W as a function of h (red area). In the cooling example, increasing h stands for decreasing temperature, and h = h_s is the spinodal point. The sharp dissipation peak occurs at the threshold temperature where the tip perturbation succeeds in triggering the metastable state's demise, thus preempting the spontaneous spinodal transformation before h_s is reached. The threshold h_c depends on details including the tip nature, radius, and load. In this figure the tip radius was R_tip ∼ 1.1 √(r/u); the other parameters were u = 10, r = 10 and J = 1. The regions I, II, III and IV are defined in the text.
EXPERIMENTAL VERIFICATION IN 1T-TaS2
Thus far the theory. To verify its predictions in a well defined, physically interesting case, we choose a thermally driven structural transition with an established hysteresis cycle between phases that do not differ too strongly from one another. Such is the case of the transitions in the celebrated layer compound 1T-TaS2 between a low-temperature Commensurate Charge-Density-Wave (CCDW) √13 × √13 phase [9,10], believed to be Mott insulating [11,12], and a Nearly Commensurate Charge-Density-Wave (NCCDW) phase, metallic and even superconducting under pressure [10]. This first-order transition takes place reproducibly in a single stage near TNC ∼ 173 K upon cooling, and in two stages, at TCT ∼ 223 K and TTN ∼ 280 K, upon heating. That very reproducible hysteresis pattern, partly reproduced from Ref. [10] in Fig. 4, suggests that TNC and TCT are, to a good approximation, spinodal points of 1T-TaS2. That is strongly confirmed by very recent heat-capacity data by Kratochvilova et al. [13], showing no anomaly at TNC and TCT, where at the same time large electrical and structural bulk transformations take place. A spinodal transformation occurs, upon temperature cycling ±∆T, only once, thus averaging all internal-energy effects to zero upon repeated passage.
It should also be mentioned that 1T-TaS2 and its phases have been the subject of very intense study over the last five years, in connection especially with transient or hidden metastable phases under high excitation [14][15][16] and/or with the unusual substrate, thickness, and disorder dependence of its phase transitions [17][18][19][20]. To begin with, we restrict ourselves here to bulk 1T-TaS2 in equilibrium, focusing for definiteness on the NCCDW ↔ CCDW transition upon cooling, and consider the phenomena which we might expect in AFM/FFM friction measurements as the temperature crosses that transition. First, frictional heat dissipation into the substrate (phononic friction) could in principle differ in the two phases, because their structures, phonon spectra and mechanical compliances are, even if mildly, different; for example, the NCCDW structure possesses a network of "soliton" defects, absent in the CCDW. Second, electronic friction due to the creation of electron-hole pairs could be present in the NCCDW phase, which is metallic, and not in the CCDW phase, which is insulating. Both mechanisms suggest a higher noncontact friction in the NCCDW phase above TNC ∼ 173 K than in the CCDW phase below it. Our experiment, however, measures hard-contact friction, where these contributions turn out to be undetectable. The third, central dissipation route described earlier is the main frictional feature which we observe near the spinodal points.
In our friction force microscopy experiments we used 1T-TaS2 flakes with a size of approximately 4 × 4 mm² and a thickness before cleaving of about 50 µm. To obtain clean surface conditions, the samples were freshly cleaved directly before transfer to the UHV chamber of a commercial Omicron VT-AFM/STM system. Inside the UHV chamber the samples were additionally heated to 100 °C for 1 h to remove residual adsorbates from the surface. To ensure that the sample is suitable for detecting the anticipated effects, we first used high-resolution STM imaging to identify the most characteristic feature related to the NCCDW and CCDW phases, namely the √13 × √13 superstructure formed by 13 Ta atoms arranged in a star shape around a central atom [9]. This superstructure is revealed in Fig. 3a, measured at 296 K. At this temperature, the superstructure forms separate hexagonal domains, which coalesce during the phase transformation on cooling, leading to CCDW 1T-TaS2 [10].
Subsequently we use high-resolution friction force microscopy (FFM) for a first analysis of the sample with respect to tribological properties. Atomically resolved stick-slip is regularly observed, and Fig. 3b shows an example, measured in the CCDW phase at 173 K using a standard Si cantilever (Nanosensors LFMR, normal force constant k = 0.2 N/m). Discerning the superstructure from the lateral force data in Fig. 3b is difficult but can be achieved by calculating the Fourier transform, where the superstructure leads to characteristic bright spots, as shown in the inset of Fig. 3b.
In the first experimental run, which serves as a coarse-scale reference, we measure the temperature dependence of friction over a wide temperature range from room temperature down to 160 K. Fig. 4 (bottom) shows that the friction remains constant within errors in the relevant range from 160 K to 260 K. In particular, there are no discontinuities across the transition temperatures TNC ∼ 173 ± 2 K and TCT ∼ 223 K indicated by the dashed lines. Also there is no hysteresis between the cooling and heating cycles. One can see that incommensurability and metallization do not impact friction on this coarse scale. A much more detailed study is necessary to discern the influence of the hysteretic (spinodal) transformations, present in structure and in conductivity, on the tip friction.
FIG. 4: Top) Bulk resistivity of 1T-TaS2 versus temperature across the CCDW to NCCDW transitions, reproduced from Ref. [10]. Black dashed lines mark TNC ∼ 173 ± 2 K and TCT ∼ 223 K on cooling and on heating, respectively, transitions that are spinodal in character. Bottom) Coarse-scale temperature dependence of FFM friction relative to the average room-temperature value, measured during cooling and heating. No direct correlation between friction and the change of electrical character, or of structurally commensurate or incommensurate character, is found across TNC and TCT within experimental error.

We therefore focus on the friction signal in a narrow temperature window around the anticipated spinodal transition points. We use a specific experimental protocol to measure lateral forces while crossing the transitions. First, the NCCDW to CCDW transformation is analyzed during cool-down. For this the sample temperature was set to a constant value slightly above the transition point (approx. 195 K). Once a stable sample temperature is established, continuous scanning of FFM images with a size of 50 × 50 nm² at a normal-force set-point of F_N = 14 nN and a scan speed of v_scan = 250 nm/s is started. Then the sample temperature is slowly reduced at a rate of approx. 0.2 K/min until the minimum temperature of 170 K is reached, while the scanning runs continuously with the normal-force feedback enabled. The temperature change induces a z-drift of the sample, and therefore only a small temperature window of about 10-20 K is accessible with this method. Once the sample has been cooled down to the CCDW phase, the same procedure was used to analyse the transition from CCDW to NCCDW during heating. Here, 215 K was chosen as a starting point and the temperature was increased at a similar rate up to 225 K, thereby spanning the full phase transition.
FIG. 5: Measured nanoscale friction on 1T-TaS2 as a function of sample temperature across the two spinodal transformations. The cooling sequence (blue, upper part) shows the NCCDW to CCDW transformation, while the heating sequence (red, lower part) crosses the CCDW to trigonal NCCDW transition. There is no appreciable difference between stable- and metastable-state friction. Between the two, friction shows clear peaks at 186 ± 2 K and at 220 ± 2 K, indicating the tip-induced preempting of the bulk spinodal transformations at TNC and TCT of 173 K and 223 K, respectively (dashed lines).
For both cooling and heating sequences, the average friction force is calculated from each pair of lateral force images recorded for forward and backward scanning. Fig. 5 shows the resulting friction during cooling and heating as a function of the simultaneously recorded temperature. In both cases we see a clear peak in the average friction signal at the transition temperature. The peak height is approx. 1.5 to 2 times the average friction signal away from the transition point, while the peak width is about 2 K to 5 K. Results from further experiments reproduce these values. In contrast to published contact friction versus temperature results [22,23], our result shows a very sharp and distinct transformation behaviour, as is indeed expected from the spinodal theory.
Other details also fall qualitatively into place. The frictional peak on cooling occurs near 186 K, more than ten degrees above 173 ± 2 K, the tip-free bulk transformation temperature, assumed to coincide with the spinodal temperature. This is precisely what our theory predicts, the temperature difference corresponding to h_c − h_s, a quantity in principle dependent on details including tip size and applied load. Moreover, comparison of the heating and cooling frictional peaks shows that the heating peak is lower in magnitude and deviates less from the bulk temperature TCT ∼ 223 K. This agrees with the smaller difference expected in this case between Ψ_m and Ψ_M, reflecting the weaker character known for the transformation on heating relative to cooling [24]. The finite domain size L which limits the tip-triggered transformation could in 1T-TaS2 be determined, besides the omnipresent defects shown in Fig. 3c, also by the recently discovered interplanar mosaic structure of this material [25].

CONCLUSIONS

We have proposed theoretically a mechanism predicting frictional anomalies connected with the spinodal points which end the hysteresis cycles of first-order phase transitions. Direct experimental demonstration of the anomaly is provided by FFM nanofriction measured at the two transformations which occur upon cooling (173 K) and upon heating (223 K) across the NCCDW ↔ CCDW transition of layered 1T-TaS2, transformations which we argue are to a good approximation spinodal in character. Near the spinodal temperature the free-energy barrier protecting the metastable phase decreases enough that the small mechanical perturbation provided by the pressing and sliding tip is sufficient to locally trigger the transformation, thus preempting its spontaneous occurrence. The predicted frictional anomaly is transient, but can nonetheless be measured in steady sliding as the tip explores newer and newer untransformed areas.
These results show that nanoscale friction, easy to interpret as it is, is as sensitive as resistivity or as structural tools such as X-rays, unlike thermodynamic quantities such as heat capacity, which are totally insensitive at the spinodal points of first-order phase transitions. In the specific case of 1T-TaS2, a possible interplay between the known electrical and structural characters of the transformations, characters which apparently do not impact the contact friction, and their spinodal nature, which we exploit here for the first time, will deserve renewed attention in the future. Of special interest appears to be, for example, the possibility to trigger tip-induced frictional transformations from hidden states [15], and/or in the ultrathin material, where the spinodal temperature is strongly thickness dependent [17,18].
Kazinol B protects H9c2 cardiomyocytes from hypoxia/reoxygenation-induced cardiac injury by modulating the AKT/AMPK/Nrf2 signalling pathway
Abstract Context Kazinol B (KB), an isoprenylated flavan derived from Broussonetia kazinoki Sieb. (Moraceae) root, has long been used in folk medicine. Objective This study examines the protective effects of KB and its underlying mechanisms in hypoxia and reoxygenation (H/R)-induced cardiac injury in H9c2 rat cardiac myoblasts. Materials and methods H9c2 cells were incubated with various concentrations of KB (0, 0.3, 1, 3, 10 and 30 μM) for 2 h and then subjected to H/R insults. The protective effects of KB and its underlying mechanisms were explored. Results KB significantly elevated cell viability (1 μM, 1.21-fold; 3 μM, 1.36-fold; and 10 μM, 1.47-fold) and suppressed LDH release (1 μM, 0.77-fold; 3 μM, 0.68-fold; and 10 μM, 0.59-fold) in H/R-induced H9c2 cells. Further, 10 μM KB blocked apoptotic cascades, as shown by the Annexin-V/PI (0.41-fold), DNA fragmentation (0.51-fold), caspase-3 (0.52-fold), PARP activation (0.27-fold) and Bax/Bcl-2 expression (0.28-fold) assays. KB (10 μM) downregulated reactive oxygen species production (0.51-fold) and lipid peroxidation (0.48-fold); it upregulated the activities of GSH-Px (2.08-fold) and SOD (1.72-fold). KB (10 μM) induced Nrf2 nuclear accumulation (1.94-fold) and increased ARE promoter activity (2.15-fold), HO-1 expression (3.07-fold), AKT (3.07-fold) and AMPK (3.07-fold) phosphorylation. Nrf2 knockdown using Nrf2 siRNA abrogated the KB-mediated protective effects against H/R insults. Moreover, pharmacological inhibitors of AKT and AMPK also abrogated KB-induced Nrf2 activation and its protective function. Discussion and conclusions KB prevented H/R-induced cardiomyocyte injury by modulating AKT- and AMPK-mediated Nrf2 induction. KB might be a promising drug candidate for managing ischemic cardiac disorders.
Introduction
Coronary heart disease is one of the most common causes of mortality and morbidity worldwide (Moran et al. 2014;Thomas et al. 2018). Myocardial ischemia injury is the primary factor causing cardiovascular dysfunctions in patients suffering from coronary heart diseases (Moran et al. 2014; Thomas et al. 2018). Extreme intracellular reactive oxygen species (ROS) production and cellular injury directly affect cellular structures and functions in myocardial tissues with ischemic and reperfusion insults (Kalogeris et al. 2012;Chatauret et al. 2014). Thus, preventing oxidative stress and cardiomyocyte injury is one of the effective strategies for treating myocardial ischemia and coronary heart disease (Kalogeris et al. 2012;Chatauret et al. 2014).
Nuclear factor erythroid 2-related factor 2 (Nrf2) is a redox-sensitive transcription factor that modulates cellular antioxidant defence and maintains redox homeostasis (Bubb et al. 2017; Strom and Chen 2017; Chen G et al. 2019). Under stress or stimulation, Nrf2 dissociates, translocates into the nucleus, and binds to the antioxidant response element (ARE) in the promoters of genes encoding antioxidant and detoxification proteins that protect against oxidative stress damage, including heme oxygenase-1 (HO-1), glutathione peroxidase (GSH-Px) and superoxide dismutase (SOD) (Bubb et al. 2017; Strom and Chen 2017; Chen G et al. 2019). The phosphatidylinositol 3-hydroxy kinase/protein kinase B (PI3K/AKT) axis is an essential signalling pathway involved in myocardial ischemia-reperfusion injury (Huang J et al. 2019). The AMP-activated protein kinase (AMPK) signalling pathway, a critical regulator of energetic stress, controls glucose uptake and glycolysis and protects myocardial tissue from ischemic injury (Shibata et al. 2005; Penumathsa et al. 2009; Feng et al. 2018). The phosphorylation of AKT and AMPK induces the activation of Nrf2 pathways (Chen X et al. 2018; Nudelman et al. 2020). Thus, Nrf2, AKT and AMPK are targets for developing novel agents against myocardial ischemia and coronary heart disease.
Kazinol B (KB, Figure 1(A)) is an isoprenylated flavan derived from the root of Broussonetia kazinoki Sieb. (Moraceae) (Ryu et al. 2003; Lee H et al. 2016). The B. kazinoki plant is widely distributed in East Asia, especially in China, Korea and Japan, and has a long history of use as a folk medicine (Zhang PC et al. 2001; Lee DY et al. 2010). Previous studies demonstrate that KB suppresses oxidative stress and the inflammatory response in lipopolysaccharide-induced macrophages (Ryu et al. 2003). Moreover, KB confers antidiabetic effects by modulating the AKT and AMPK pathways in mouse 3T3-L1 preadipocytes (Lee H et al. 2016). These cytoprotective effects, together with the activation of the AKT and AMPK pathways, suggest that KB might protect against myocardial ischemia. Hence, in this study, we investigate whether KB could protect against hypoxia and reoxygenation (H/R)-induced ischemic injury in H9c2 rat cardiac myoblasts. We also explored how KB modulates the Nrf2, AKT and AMPK signalling pathways to mediate its protective effect.
Chemicals and reagents
Kazinol B (purity by HPLC ≥ 98%; the chemical structure is shown in Figure 1(A)) was purchased from Weikeqi Biotechnology (Chengdu, China). LY294002 and Compound C were purchased from Selleck (Houston, TX). DMEM and FBS were obtained from Gibco (Grand Island, NY).
Cell culture and treatment
The H9c2 rat cardiac myoblasts were obtained from the American Type Culture Collection (ATCC, Manassas, VA) and maintained in DMEM with 10% FBS in a humidified atmosphere at 37 °C with 5% CO2 and 95% air. KB was first dissolved in dimethyl sulphoxide (DMSO) and then added to the medium. The vehicle control received DMSO alone (final concentration 0.1%, v/v).
Hypoxia and reoxygenation
Hypoxia and reoxygenation were induced according to previous studies (Peng et al. 2019;Jia et al. 2022). Briefly, the H9c2 cells were incubated with the serum-free and glucose-free medium and cultured in a hypoxic chamber (95% N 2 and 5% CO 2 , STEMCELL Technologies, Vancouver, Canada) for 6 h. Next, the cells were subjected to reoxygenation with a normal culture medium in a normal incubator (95% air and 5% CO 2 ) for 24 h.
Cell viability and cytotoxicity
Cell viability and cytotoxicity were measured using the CCK-8 (Dojindo, Kumamoto, Japan) and LDH assay kits (Roche, Mannheim, Germany). In brief, cells were seeded into culture plates for 24 h and treated with KB or specific inhibitors, followed by the H/R insults. Subsequently, the supernatant was removed for the LDH assay, while the cells were subjected to the CCK-8 assay. The absorbances were measured at 450 nm and 490 nm for the CCK-8 and LDH assays, respectively.
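The text does not spell out how these absorbances are converted into the relative viability and LDH-release values reported in the Results; a generic sketch of the normalization conventionally used for such assays (all formulas and readings here are illustrative assumptions, not taken from the study):

```python
def percent_of_control(a_sample, a_control, a_blank):
    """CCK-8 viability as a percentage of the vehicle control,
    after subtracting a cell-free blank from both readings."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

def percent_ldh_release(a_sample, a_max_lysis, a_blank):
    """LDH release as a percentage of a full-lysis (maximum release) control."""
    return 100.0 * (a_sample - a_blank) / (a_max_lysis - a_blank)

# Hypothetical OD450 / OD490 readings for one H/R-treated well:
viability = percent_of_control(0.62, 0.95, 0.05)
ldh = percent_ldh_release(0.48, 1.05, 0.05)
print(f"viability ~ {viability:.1f}%, LDH release ~ {ldh:.1f}%")
```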
Annexin V-FITC and PI apoptosis analysis
Cell apoptosis and death were measured using the Annexin V-FITC and PI apoptosis detection kits (Beyotime, Shanghai, China). Quantitative analysis was performed using a C6 flowcytometer (BD Biosciences, San Jose, CA).
Determination of DNA fragmentation and caspase-3 activity

DNA fragmentation was measured using the DNA Fragmentation ELISA kit (Roche, Mannheim, Germany). The absorbance (490 nm) was detected and reported as DNA fragmentation. Caspase-3 activity was analysed using the caspase-3 activity assay kit (Beyotime, Shanghai, China). The absorbance at 405 nm was measured, and the data were normalized to the protein concentrations.
Measurement of cytochrome c release
The cytoplasm and mitochondria of cells were separated using the mitochondria isolation kit (Beyotime, Shanghai, China). Cytochrome c release in the cytoplasm and mitochondria was measured using the cytochrome C Quantikine ELISA kit (R&D Systems, Minneapolis, MN). The result was normalized to the protein concentration.
Measurement of ATP synthesis
ATP synthesis was measured using the luciferin-luciferasebased enhanced ATP assay kit (Beyotime, Shanghai, China). In brief, cells were collected and prepared with lysis buffers, centrifuged, and the supernatants were collected for ATP synthesis assay. The luminescence was monitored for 3 min using a FlexStation3 multi-mode microplate-reader (Molecular Devices, Sunnyvale, CA).
Measurements of ΔΨm, ROS, lipid peroxidation, GSH-Px and SOD
The mitochondrial membrane potential (ΔΨm) was monitored using the fluorescent probe JC-1 (Invitrogen, Carlsbad, CA). The fluorescence intensity was detected, and ΔΨm was represented by the ratio of JC-1 red/green intensity. ROS production was measured using the fluorescent probe CM-H2DCFDA (Invitrogen, Carlsbad, CA). Lipid peroxidation was determined from malondialdehyde (MDA) levels using the MDA assay kit (Beyotime, Shanghai, China).
ARE luciferase activity assay
H9c2 cells were transfected with the pARE-Luc reporter plasmid (SABiosciences, Frederick, MD) for 24 h and then treated with KB followed by the H/R insults. The cell samples were prepared, and the luciferase activity of the samples was measured using the Dual-Luciferase reporter assay systems (Promega, Madison, WI).
Nrf2 siRNA transfection
The cells were transfected with the Nrf2 siRNA (100 nM) or scrambled control siRNA (Santa Cruz, Santa Cruz, CA) for 36 h using Lipofectamine 3000 (Invitrogen, Carlsbad, CA), after which the cells were collected for further tests.
Quantitative PCR assay
Total RNA and cDNA were prepared using the HighPure RNA isolation and Transcriptor cDNA synthesis kits (Roche, Mannheim, Germany), respectively, according to the manufacturer's protocol. qPCR was performed using the FastStart Universal SYBR Green Master reagents in the 7900 HT System (Applied Biosystems, Foster City, CA). The relative fold changes were normalized to Gapdh and calculated as fold of the control group using the 2^(−ΔΔCt) method. The primer sequences used were as follows: HO-1, forward primer: CTGGAAGAGGAGATAGAGCGAA, reverse primer: TCTTAGCCTCTTCTGTCACCCT; Gapdh, forward primer: GACATGCCGCCTGGAGAAAC, reverse primer: AGCCCAGGATGCCCTTTAGT.
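As a brief illustration of the 2^(−ΔΔCt) relative quantification described above, the sketch below computes a fold change from cycle-threshold (Ct) values. The Ct numbers are hypothetical placeholders, not data from this study:

```python
# Illustration of the 2^(-DDCt) relative quantification method.
# All Ct values below are hypothetical, not measurements from this study.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression of a target gene (e.g. HO-1) normalized
    to a reference gene (e.g. Gapdh), expressed as fold of control."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # delta-Ct, treated
    d_ct_control = ct_target_control - ct_ref_control   # delta-Ct, control
    dd_ct = d_ct_treated - d_ct_control                 # delta-delta-Ct
    return 2 ** (-dd_ct)

# Hypothetical example: the target Ct drops by 2 cycles relative to the
# reference after treatment, i.e. a 4-fold up-regulation.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```

Lower Ct means earlier detection, hence more transcript; the negative sign in the exponent converts a Ct decrease into a fold increase.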
Western blot assay
The cells were lysed using the RIPA lysis buffer. The protein concentration was measured by the Pierce BCA protein assay kit (Thermo Scientific, San Diego, CA). Equal amounts of protein were electrophoresed on SDS-PAGE and then transferred onto a PVDF membrane (Bio-Rad, Hercules, CA). The membrane was then incubated with different primary and secondary antibodies at room temperature for 1 h. The primary antibodies, including cleaved-caspase 3, cleaved-PARP, Bax, Bcl-2, p-AKT, t-AKT, p-AMPKα, t-AMPKα, Lamin B1, GAPDH and β-actin, were obtained from Cell Signaling Technology (Boston, MA). The Nrf2 and HO-1 antibodies were obtained from Abcam (Waltham, MA). The western blot band was visualized using the ECL assay kit (GE Healthcare, Milwaukee, WI). Finally, the blot was analysed and quantified using the Image Lab software (Bio-Rad, Hercules, CA).
Statistical analysis
Statistical analyses were performed with one-way ANOVA followed by Bonferroni's multiple comparisons test using GraphPad Prism 7.00 software (GraphPad Software, La Jolla, CA). All experiments were performed three times in triplicate. Data are presented as mean ± SEM. A p value < 0.05 was considered statistically significant.
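The analysis pipeline above (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched with SciPy. The group measurements below are made-up placeholder values, not study data:

```python
# Sketch of one-way ANOVA + Bonferroni-corrected pairwise t-tests.
# The viability values are hypothetical placeholders.
from itertools import combinations
from scipy import stats

groups = {
    "control": [100, 98, 102],
    "H/R":     [55, 60, 58],
    "H/R+KB":  [80, 85, 82],
}

# Omnibus test: is there any difference among the group means?
f_stat, p_anova = stats.f_oneway(*groups.values())

# Pairwise follow-up tests with a Bonferroni-corrected threshold.
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p={p:.4g}, significant={p < alpha}")
```

Bonferroni simply divides the significance threshold by the number of comparisons, which controls the family-wise error rate at the cost of some power.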
Effects of KB on cell viability and cytotoxicity in H9c2 cells
We measured the cytotoxic effects of KB on H9c2 cells using the CCK8 and LDH assays. As shown in Figure 1(B,C), no cytotoxicity was observed in cells treated with KB up to 30 μM for 24 h. However, we observed cytotoxicity at a KB dose of 100 μM. Hence, we chose KB doses up to 30 μM (0.3, 1, 3, 10 and 30 μM) for further experiments.
KB exerted protective effects against H/R-induced cell death and apoptosis in H9c2 cells
Next, we investigated the effects of KB on cell death and apoptosis in H9c2 cells subjected to H/R insults. Based on previous reports and our preliminary experiments, we induced cardiac injury in H9c2 cells by subjecting the cells to hypoxia for 6 h, followed by reoxygenation for 24 h (Peng et al. 2019; Jia et al. 2022). First, KB (1, 3, 10 and 30 μM) inhibited the H/R-induced decline in cell viability and H/R-induced LDH release in a dose-dependent manner (Figure 2(A,B), respectively). The inhibitory effects of KB against cell injury were detected by Annexin-V/PI and DNA fragmentation assays. As shown in Figure 2(C), the apoptotic and dead populations decreased upon KB treatment in a dose-dependent manner (1, 3, 10 and 30 μM, all p< 0.05 compared with the H/R-treated group). KB (1, 3, 10 and 30 μM) also alleviated H/R-induced DNA fragmentation in a dose-dependent manner in H9c2 cells (Figure 2(D)). However, 10 and 30 μM of KB exerted almost the same protective effects on H9c2 cells with H/R insults. Hence, we selected a KB dose of 10 μM for our further experiments.
KB blocked apoptotic cascades in H/R-induced H9c2 cells
As shown in Figure 3(A), the H/R insult significantly increased caspase-3 activity in H9c2 cells, whereas KB significantly suppressed the caspase-3 activity. KB treatment also significantly down-regulated the protein level of cleaved PARP in cells with H/R insults (Figure 3(B,C)). Further, KB reversed the H/R-induced decline in the ratio of Bcl-2/Bax (Figure 3(B,D)).
KB alleviated H/R-induced mitochondrial dysfunction in H9c2 cells
The mitochondrial membrane potential (ΔΨm) collapses during an apoptotic cascade (Mansingh et al. 2018). As shown in Figure 4(A), KB significantly attenuated ΔΨm disruption by H/R insults in H9c2 cells, as evidenced by the increase in the ratio of red to green fluorescence. Further, KB significantly inhibited H/R-induced cytochrome c release from mitochondria into the cytoplasm in H9c2 cells (Figure 4(B,C)). KB also rescued the H/R-induced decrease in ATP production in H9c2 cells (Figure 4(D)).
KB attenuated H/R-induced oxidative stress in H9c2 cells
Oxidative stress is critical in H/R-induced cellular damage (Zhou P et al. 2022). Therefore, we further investigated the antioxidant capability of KB. KB significantly suppressed ROS production and H/R-aroused lipid peroxidation (MDA production) in H9c2 cells subjected to H/R insults (Figure 5(A,B), respectively). Furthermore, KB improved GSH-Px and SOD activity in H/R-induced H9c2 cells (Figure 5(C,D)).
KB promoted Nrf2 nuclear translocation and activated the Nrf2/ARE/HO-1 pathway in H/R-induced H9c2 cells
The Nrf2/ARE pathway is crucial for the protective effects of various natural products in cardiomyocyte injury models (Lu et al. 2022). Thus, we examined the effects of KB on the Nrf2/ARE axis. KB significantly facilitated Nrf2 nuclear accumulation (Figure 6(A,B)) and improved ARE promoter activity (Figure 6).
KB-induced Nrf2/ARE activation protected the cells against H/R-induced cardiac injury
Nrf2 siRNA was used to investigate the specific role of Nrf2 in KB-mediated protection against H/R-induced injury. H9c2 cells were transfected with Nrf2 siRNA, which significantly abrogated the Nrf2 expression in H9c2 cells (Figure 7(A)). As expected, Nrf2 siRNA abrogated the protective effect of KB against H/R-induced toxicity, as shown by the decreased cell viability and increased LDH release in the Nrf2 siRNA-treated groups (Figure 7(B,C)).
KB up-regulated phosphorylation of AKT and AMPK in H/R-induced H9c2 cells
KB confers beneficial functions via modulating several pathways, including AKT and AMPK (Mansingh et al. 2018; Huang J et al. 2019). Thus, we explored the effect of KB on AKT and AMPK signalling to reveal its underlying mechanisms in H/R-induced cardiomyocyte injury. As shown in Figure 8(A,B), H/R insults decreased the phosphorylation of AKT and AMPKα in H9c2 cells. However, KB significantly restored AKT phosphorylation in H9c2 cells under the H/R lesioning condition. Further, KB up-regulated AMPKα phosphorylation in H/R-induced H9c2 cells (Figure 8(C,D)). Notably, our results showed that KB alone could promote and facilitate AKT and AMPKα phosphorylation in H9c2 cells without H/R insults (Figure 8(A,C)).
KB-induced AKT and AMPK were involved in KB-mediated Nrf2/ARE activation in H/R-induced H9c2 cells
Several protein kinases, including PI3K/AKT and AMPK, have been implicated in Nrf2 induction (Huang J et al. 2019). Therefore, we used specific inhibitors to block AKT and AMPK activity and determine whether AKT and AMPK were involved in KB-mediated Nrf2 activation. Interestingly, AKT and AMPK inhibitors abrogated Nrf2 nuclear translocation (Figure 9(A,B)) in H/R-induced H9c2 cells. Moreover, AKT and AMPK inhibitors abolished the protective effect of KB against H/R-induced decline in cell viability and the increase in LDH release (Figure 9(C,D)). Thus, these results indicated that AKT and AMPK activation by KB was associated with the induction of the Nrf2/ARE/HO-1 pathway, which contributed to the protective effect of KB on H/R-induced H9c2 cells.
Discussion
In this study, we explored the protective effect of KB on H/R-induced cardiac injury and its underlying mechanisms. We found that KB improved cell survival and decreased cell death against H/R damage. KB also reversed H/R-induced apoptotic cascades, as evidenced by the decreased apoptotic populations and reduced DNA fragmentation. Further, KB reversed the decline of ΔΨm in H/R-treated H9c2 cells. The collapse of ΔΨm reflects mitochondrial membrane depolarization, which usually serves as a marker of early apoptotic events (Chen Y et al. 2018; Mansingh et al. 2018). Thus, the effects of KB on the collapse of ΔΨm could also confirm its inhibitory actions on H/R-mediated cell apoptosis. H/R insult modulates other intrinsic apoptotic cascades, including caspase-3, PARP, Bcl-2 and Bax (Kaushal et al. 2004). Caspase-3 activation leads to the execution phase of apoptotic cascades and mediates cellular degradation. Moreover, caspase-3 also interacts with other apoptosis markers, including the PARP and Bcl-2 family proteins (Elmore 2007). KB also modulates Bcl-2 and Bax (Kaushal et al. 2004; Chen Y et al. 2018; Mansingh et al. 2018; Kalpage et al. 2019; Peng et al. 2019). Bcl-2 and Bax are two important members of the Bcl-2 protein family, which execute diverse functions in intrinsic apoptosis (Kaushal et al. 2004; Elmore 2007; Peng et al. 2019). Bcl-2 is anti-apoptotic while Bax is pro-apoptotic (Kaushal et al. 2004). Thus, in this study, the modulation of Bcl-2 and Bax by KB confirmed that KB could inhibit H/R-induced apoptotic cascades, which demonstrated the protective effect of KB on H/R-induced H9c2 cells.
Cardiac injury resulting from H/R-aroused cell damage is closely related to ROS production and oxidative stress in cardiomyocytes (Hayyan et al. 2016; Cadenas 2018; Zhou P et al. 2022). Myocardial cells are more susceptible to free radical damage because they have fewer antioxidants and antioxidant enzymes like GSH, SOD and catalase (Valko et al. 2007; Zhou P et al. 2022). KB also acts as an indirect antioxidant that induces endogenous antioxidants and enzymes. In this study, we confirmed that KB could suppress H/R-mediated ROS generation and lipid peroxidation in H9c2 cells. Further, up-regulation of antioxidant enzymes was also observed in KB-treated cells. Thus, based on these results, we suggested that KB could suppress H/R-induced oxidative damage in H9c2 cells.
H/R insults induce mitochondrial dysfunction in cardiomyocytes (Solaini and Harris 2005; Kang et al. 2017; Jia et al. 2022). Myocardial cells have abundant mitochondria, which are the principal source of ROS generation and are usually exposed to high oxygen tension (Solaini and Harris 2005; Kang et al. 2017; Jia et al. 2022). In this study, KB alleviated H/R-induced mitochondrial dysfunction. Accumulating evidence indicates that H/R could induce mitochondrial dysfunction followed by the prompt efflux of ROS, which leads to apoptosis (Huang CH et al. 2015; Quan et al. 2021). Hence, the inhibition of H/R-induced mitochondrial dysfunction by KB might be essential in protecting against H/R-induced apoptosis.
Enhancing Nrf2 transcriptional activity and promoting the transcription of genes encoding endogenous antioxidants is a promising strategy to attenuate H/R-induced oxidative stress and damage (Fan et al. 2018; Li CW et al. 2018; Qiu et al. 2018; Lv et al. 2019; Zhou F et al. 2019). Thus, in this study, we investigated the effects of KB on the Nrf2 pathway. The Nrf2/ARE pathway is pivotal in regulating cellular defences against oxidative damage, and the Nrf2/ARE pathway is also associated with H/R-induced cardiac injury (Bubb et al. 2017; Strom and Chen 2017; Chen G et al. 2019; Lu et al. 2022). Nrf2 is one of the redox-sensitive transcription factors that modulate cellular antioxidant defences and maintain redox homeostasis (Bubb et al. 2017; Strom and Chen 2017; Chen G et al. 2019; Lu et al. 2022). The expression of target genes in the Nrf2/ARE axis could inhibit intracellular ROS generation (Bubb et al. 2017; Strom and Chen 2017; Chen G et al. 2019). In this study, we found that KB led to significant Nrf2 nuclear accumulation and enhanced ARE promoter activity in H9c2 cells. Moreover, KB significantly increased HO-1 expression, which could directly protect against oxidative damage. Notably, the up-regulation of HO-1 promoted cellular resistance to H/R-induced damage. Therefore, our data demonstrated that KB could activate the Nrf2/ARE pathway, which might contribute to KB-mediated protective effects.

Figure 9. KB-mediated AKT and AMPK were involved in KB-induced Nrf2/ARE activation and its protective effect on H/R-induced H9c2 cells. H9c2 cells were pre-incubated with inhibitors for 1 h and treated with KB for 2 h, followed by H/R insults. (A, B) The nuclear protein was prepared, and the Nrf2 level was analysed using a western blot. (C, D) The cell viability and cytotoxicity were determined by the CCK8 assay and LDH release, respectively. Data are expressed as % of vehicle control. Results are shown as mean ± SEM (n = 8). The vehicle control group was treated with only DMSO. ##p < 0.01 vs. the vehicle group. **p < 0.01 vs. the H/R-treated group. &&p < 0.01 between two groups.

Next, we conducted Nrf2 gene silencing using a specific Nrf2-siRNA. Nrf2-siRNA transfection abolished KB-mediated Nrf2 nuclear accumulation and ARE promoter activity in H/R-treated H9c2 cells. As expected, we confirmed that Nrf2 knockdown could abrogate KB-mediated anti-apoptotic actions in cells with H/R treatment. Thus, the activation of the Nrf2/ARE/HO-1 pathway contributed to the KB-mediated protective effects against H/R insults.
KB is beneficial to the cell since it modulates several pathways, including AKT and AMPK (Mansingh et al. 2018;Huang J et al. 2019). The AKT pathway is involved in myocardial ischemia-reperfusion injury (Mansingh et al. 2018;Huang J et al. 2019). In addition, the AMPK signalling pathway serves as a critical regulator in modulating energetic stress, controls glucose uptake and glycolysis, and protects myocardial tissue from ischemic injury (Cui et al. 2013;Nagaoka et al. 2015;Thirunavukkarasu et al. 2015;Venardos et al. 2015;Kosuru et al. 2018;Potenza et al. 2019;Tian et al. 2019;Zhang BF et al. 2019). Therefore, we hypothesized that the cardioprotective actions of KB against H/R-induced injury were due to the AKT and AMPK axis. Notably, the activation of AKT and AMPK pathways was observed in KB-treated H9c2 cells with or without H/R insults. AKT or AMPK inhibitors also abrogated KB-induced protection against H/R injury. Therefore, we suggested the up-regulation of AKT and AMPK activities might be essential for the protective effect of KB. Further, several kinases, including AKT and AMPK, could modulate the activation of the Nrf2 pathway (Fan et al. 2018;Li CW et al. 2018;Lv et al. 2019;Zhou F et al. 2019). In this study, we not only found that KB promoted the phosphorylation of AKT and AMPK but also observed that blocking AKT and AMPK pathways via pharmacological inhibitors abolished the KB-mediated Nrf2 activation. Thus, we suggested that AKT and AMPK pathways were involved in Nrf2 activation and contributed to KB-induced protection against H/R-induced cardiac injury.
Conclusions
This study showed that KB alleviated cardiomyocyte injury in H/R-induced H9c2 rat cardiac myoblasts. Further, KB inhibited ROS production and lipid peroxidation but promoted antioxidant enzyme activity and the activation of the Nrf2/ARE/HO-1 pathway in H9c2 cells against H/R insults. Moreover, the KB-mediated activation of Nrf2 pathways was associated with the AKT and AMPK signal cascades. In conclusion, this study revealed that KB confers protection to cardiomyocytes against hypoxia/reoxygenation insult via the AKT and AMPK-mediated activation of the Nrf2/ARE/HO-1 signalling pathway. We believe that KB has the potential to become a promising drug candidate for managing ischemic cardiac disorders.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
This work was supported by grants from the Guangdong Basic and Applied Basic Research Foundation (2020A1515110899) and the Science and Technology Projects in Guangzhou (202102020972).
Data availability statement
The data in the current study are available from the corresponding author upon request.
Hematological Indices in Patients With Goiter: A Cross-Sectional Study in a Tertiary Care Hospital in South India
Introduction: A wide range of haematological abnormalities have been observed in patients with goitre. The aim of the study was to evaluate the abnormalities in haematological parameters among patients with goitre in a tertiary care hospital in south India.
Methods: This was a cross-sectional study carried out in the pathology department of Chengalpattu Medical College from April 1 to June 30, 2019. The lab reports, including the complete blood count (CBC) and serum thyroid profile that included thyroid-stimulating hormone (TSH), triiodothyronine (T3), and thyroxine (T4) of all the patients with goitre, were retrieved from the laboratory records. Results were tabulated and analysed.
Results: Out of a total of 200 patients with thyroid dysfunction, 12 (6%) were males and 188 (94%) were females, with the majority (51.5%) of them in the age group of 30-60 years. Serum TSH levels showed a statistically significant association with red cell distribution width (RDW) (P-value = 0.000), mean corpuscular volume (MCV) (P-value = 0.020), and total white blood cell (WBC) count (P-value = 0.003) among the patients with goitre. There was no significant association between TSH and other parameters like haemoglobin, packed cell volume (PCV), red blood cell (RBC) count, and platelet (PLT) count.
Conclusions: Red cell distribution width and mean corpuscular volume are useful haematological parameters that will help clinicians in the early diagnosis and proper treatment of haematological abnormalities seen in patients with goitre.
Introduction
The thyroid gland is one of the vital endocrine organs in the human body that secretes thyroid hormones. Thyroid hormones influence the normal development, physiological functions, and metabolic activity of almost all organ systems in our body [1]. The haematopoietic system is one of the primary systems influenced by thyroid hormones through several mechanisms. Thyroid hormones play an important role in regulating haematopoiesis in humans.

Thyroid-stimulating hormone (TSH), which is secreted by the anterior pituitary gland, mediates the output of thyroid hormones, namely triiodothyronine (T3) and thyroxine (T4), which in turn regulate erythropoiesis. All these functions are regulated by the binding of the T3 hormone to nuclear receptors [2]. Thyroid hormones stimulate erythropoiesis through their direct effect on bone marrow progenitor cells. In addition to this direct effect, they also play an indirect role by regulating iron absorption, vitamin B12 absorption, and modulating erythropoietin production [3].

Thyroid dysfunction results in the enlargement of the thyroid gland, referred to as goitre. Goitre can be associated with a wide variety of haematological abnormalities, the most common being anaemia of different morphological types, namely microcytic hypochromic, macrocytic, and normocytic normochromic anaemia. Anaemia has been identified in 20-60% of hypothyroid patients, and several mechanisms are involved in its pathogenesis [4]. In every case of anaemia with an uncertain aetiology, the possibility of hypothyroidism should be considered.

In the laboratory evaluation of anaemia, haematological indices such as mean corpuscular volume (MCV), packed cell volume (PCV), haemoglobin (HB), red cell distribution width (RDW), and red blood cell (RBC) count are useful for diagnosing and categorising the morphological type of anaemia. Other parameters, like the total white blood cell (WBC) count and platelet count (PLT), also show variations among patients with thyroid dysfunction. Most of these haematological parameters have been found to be altered with a decline in thyroid function, namely hypothyroidism [5].
Hence, this study is aimed at evaluating the alteration in haematological parameters among patients with goitre in a tertiary care hospital.
Setting
This was a cross-sectional study carried out in the pathology department of Chengalpattu Medical College, Chengalpattu district, Tamil Nadu. The study period was three months, from April 1 to June 30, 2019.
Inclusion Criteria
The study included all patients with goitrous enlargement of the thyroid gland attending the central clinical laboratory of Chengalpattu Medical College. Blood samples from patients of all age groups and genders with goitre were collected.
Exclusion Criteria
Patients with other haematological conditions like iron deficiency anaemia and thalassemia; patients with systemic diseases, chronic renal diseases, and pregnant females; and patients on drug intake were not included in the study.
Sample size and sampling
A purposive sampling technique was used for the selection of desired samples according to the inclusion criteria. All the blood samples received from patients with goitre in the central clinical laboratory during the three-month study period were used for the study.
Protocol
The study was approved by the Chengalpattu Medical College Institutional Ethical Committee for Human Studies (approval number: CMCH/19/PR/071). Informed consent was obtained from all patients during blood sample collection. This was a laboratory-based study where the reports, including the complete blood count (CBC) and serum thyroid profile that included TSH, T3, and T4, of all the blood samples from patients with goitre received from April 1 to June 30, 2019 in the central clinical laboratory were retrieved from the laboratory records. All the findings were tabulated and analysed. A TSH value of 0.4-5 IU/L was taken as the reference range for euthyroid status. Thus, patients with TSH values >5 IU/L were considered hypothyroid, while those with TSH values <0.4 IU/L were considered hyperthyroid cases. An RDW of 12-14%, HB of 12-14 g/dL, PCV of 34-36%, RBC count of 4.5-5.5 million/cu.mm, MCV of 80-100 fL, WBC count of 4,000-11,000/cu.mm, and platelet count of 1,50,000-4,00,000/cu.mm were taken as the normal reference ranges for the haematological parameters in this study.
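The TSH-based grouping described above can be expressed as a small classification rule. This is only a sketch of the study's cut-offs (euthyroid 0.4-5 IU/L, hypothyroid >5 IU/L, hyperthyroid <0.4 IU/L); the example values are hypothetical:

```python
# Sketch of the TSH-based grouping used in this study:
# euthyroid 0.4-5 IU/L (inclusive), hypothyroid > 5 IU/L,
# hyperthyroid < 0.4 IU/L.

def classify_tsh(tsh_iu_per_l):
    if tsh_iu_per_l < 0.4:
        return "hyperthyroid"
    if tsh_iu_per_l > 5:
        return "hypothyroid"
    return "euthyroid"

# Hypothetical TSH values, not patient data:
print(classify_tsh(2.5))   # euthyroid
print(classify_tsh(8.1))   # hypothyroid
print(classify_tsh(0.1))   # hyperthyroid
```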
Statistical analysis
Data were entered into Microsoft Excel (Microsoft® Corp., Redmond, WA) and analysed using Statistical Package for Social Sciences (SPSS) software version 22.0 (SPSS, Inc., Chicago, IL). Fisher's exact test and Chi-square test were used to evaluate the association between TSH and complete blood count parameters. A P-value of 0.05 was taken as the cut-off point to determine statistically significant results. Frequency and percentage were calculated for categorical variables like patient demographic data (age, gender, etc.).
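Although the study used SPSS, the same association tests can be reproduced with SciPy for readers who want to check the arithmetic. The contingency table below uses illustrative counts only loosely based on Table 2, not the exact study data:

```python
# Sketch of the chi-square and Fisher's exact tests named above (SciPy).
# Illustrative 2x3 table: rows = RDW normal / RDW raised,
# columns = euthyroid / hypothyroid / hyperthyroid (hypothetical counts).
from scipy.stats import chi2_contingency, fisher_exact

table = [[72, 12, 7],
         [28, 70, 11]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3g}")

# SciPy's fisher_exact handles 2x2 tables, e.g. after collapsing
# to euthyroid vs. hypothyroid only:
odds, p_fisher = fisher_exact([[72, 12], [28, 70]])
print(f"odds ratio={odds:.1f}, p={p_fisher:.3g}")
```

Fisher's exact test is preferred when expected cell counts are small; the chi-square test is the usual choice for larger tables.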
Results
In the present study, a total of 200 patients who presented with goitre were included, out of which 188 (94%) were females and 12 (6%) were males. The majority of the patients (51.5%) were in the age group of 30-60 years (Table 1). The thyroid profile of all 200 patients revealed that 100 (50%) were euthyroid, 82 (41%) were hypothyroid, and 18 (9%) were hyperthyroid (Table 1). Among the haematological indices, the majority of the patients with goitre showed an alteration in MCV (82%), followed by RDW (54.5%) and RBC count (45%). The least affected RBC parameters were PCV (8.5%) and haemoglobin (3.5%). Thus, anaemia was seen in only 8.5% of the patients based on the reduction in PCV, which is a more sensitive haematological parameter compared to haemoglobin in the diagnosis of anaemia. However, other haematological indices like MCV, RDW, and RBC count showed alteration in the majority of patients with goitre even in the absence of anaemia. The total WBC count was altered in 56 (28%) cases, with one (0.5%) of them showing leukopenia, while the platelet count was altered in 23 (11.5%) cases, with two (1%) of them showing thrombocytopenia (Table 1). In this study, all the patients, including both males and females, were categorised into three groups based on their TSH levels: euthyroid (0.4-5 IU/L), hypothyroid (>5 IU/L), and hyperthyroid (<0.4 IU/L). Fisher's exact test revealed a statistically insignificant association between the age groups and TSH levels among the three groups of patients in this study (P-value = 0.132). All the haematological parameters were compared with the TSH values among the three groups of patients and statistically analysed.
Among the 164 patients with altered MCV values, 54.3% were euthyroid, 36.6% were hypothyroid, and 9.1% were hyperthyroid. Among the total 36 patients with normal MCV values, the majority (61.1%) were hypothyroid. Fisher's exact test revealed a statistically significant association (P-value = 0.020) between MCV and TSH levels in this study (Table 2). Among the 109 patients with increased RDW values in the present study, the majority (62.4%) were hypothyroid, and among the total 91 patients with normal RDW values, the majority (79.1%) were euthyroid. The chi-square test revealed a highly significant association (P-value = 0.000) between increased RDW values and TSH levels in this study (Table 2).
In the present study, other RBC haematological parameters like haemoglobin (P-value = 0.611), PCV (P-value = 0.196), and RBC count (P-value = 0.912) showed a statistically insignificant association with TSH levels among the three groups of patients (Table 2).
In this study, among the 56 patients with an altered WBC count, 48.2% were hypothyroid, 33.9% were euthyroid, and 17.9% were hyperthyroid. Among the total 144 patients with a normal WBC count, the majority (56.3%) of them were euthyroid. The chi-square test revealed a statistically significant association (P-value = 0.003) between total WBC count and TSH levels in this study (Table 2).
In this study, among the 23 patients with an altered platelet count, 13% were hyperthyroid, 43.5% were euthyroid, and 43.5% were hypothyroid. Among the total 177 patients with a normal platelet count, the majority (50.8%) of them were euthyroid. There was no significant association (P-value = 0.601) between platelet count and TSH levels among the three groups of patients (Table 2).
Discussion
The prevalence of goitre resulting from thyroid dysfunction is constantly increasing worldwide, especially in women compared to men. Thyroid dysfunction in the form of hypothyroidism or hyperthyroidism is associated with a wide range of haematological abnormalities, including pancytopenia, in many untreated cases [6]. Thyroid dysfunction is identified by measuring serum TSH levels and is now considered the most sensitive test among patients with goitre [7].
In this study, females accounted for 94% of the total patients with goitre, and the common age group was 30-60 years (51.5%). Iddah et al. [8] reported that 95% of female patients with thyroid dysfunction had a median age of 41 years, similar to the present study.
In the present study, anaemia was seen in 8.5% of patients, leukopenia in 0.5% of cases, and thrombocytopenia in 1% of cases. In the study by Iddah et al. [8], anaemia was encountered in 28.4% of cases, leukopenia was seen in 12.2% of cases, and thrombocytopenia was seen in 4.7% of cases, unlike the present study.
There was no significant association between the age groups and TSH levels among the three groups of patients in this study (P-value = 0.132), similar to the study by Geetha and Srikrishna [7]. Geetha and Srikrishna [7] reported a positive association (P-value < 0.001) between MCV and serum levels of TSH, similar to the present study (P-value = 0.020). MCV reflects the size of RBCs. It is believed that thyroid dysfunction is associated with premature ageing of RBCs and an increased lipolytic tendency of RBCs, along with altered distribution of lipids in the RBC membrane, thereby altering the MCV values [7]. MCV is increased in hypothyroidism and decreased in hyperthyroid patients.
RDW, which is used for the quantitative measurement of variation in the size of RBCs, is calculated by dividing the standard deviation of RBC volume by the MCV and multiplying by 100. The RDW value is calculated routinely by the automated haematology analysers used to determine the CBC. In a study by Arundhathi et al. [9], RDW was significantly increased in hypothyroid patients, similar to the present study. Yu et al. [10] reported that RDW was significantly increased in patients with subclinical hypothyroidism and thus will help clinicians detect thyroid dysfunction at an early stage.
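The RDW-CV calculation described above can be illustrated with a few lines of Python. The single-cell volumes below are hypothetical, and the population standard deviation is used for simplicity; analysers may differ in the exact estimator:

```python
# Illustration of RDW-CV as described above: the standard deviation of
# red-cell volume divided by MCV, times 100. Volumes are hypothetical.
import statistics

def rdw_cv(rbc_volumes_fl):
    mcv = statistics.mean(rbc_volumes_fl)     # mean corpuscular volume
    sd = statistics.pstdev(rbc_volumes_fl)    # population SD of volumes
    return 100 * sd / mcv

volumes = [78, 85, 90, 95, 102]  # single-cell volumes in fL (hypothetical)
print(f"RDW-CV = {rdw_cv(volumes):.1f}%")
```

A wider spread of cell volumes (anisocytosis) raises the SD and hence the RDW, which is why RDW rises in several anaemias and, as discussed above, in hypothyroidism.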
In the present study, there was no significant association between TSH levels and other RBC parameters like haemoglobin (P-value = 0.611), PCV (P-value = 0.196), and RBC count (P-value = 0.912), in concordance with the studies by Shouree et al. [6] and Geetha and Srikrishna [7]. In contrast to these observations, Dorgalaleh et al. [11] reported a statistically significant association between TSH levels and RBC count, PCV, RDW, and haemoglobin.
Yu et al. [10] and Bashir et al. [12] reported a statistically significant association between TSH levels and total WBC count, similar to the present study (P-value = 0.003). However, Arundhathi et al. [9] reported no significant association between total WBC count and TSH levels. The mean WBC count was lower in hypothyroid patients compared to hyperthyroid patients in the study by Siddegowda et al. [13].
There was no significant association (P-value = 0.601) between platelet count and TSH levels in this study, similar to the studies by Shouree et al. [6] and Arundhathi et al. [9]. Platelet counts are less affected in thyroid dysfunction due to the fact that platelets are non-nucleated cells and have a shorter life span with rapid turnover [14].
Thus, among the various haematological parameters, the most significant and consistent association with derangement in TSH levels was shown by RDW and MCV in the majority of studies. However, RDW can be affected by other chronic medical conditions such as renal diseases, rheumatoid arthritis, and cardiac diseases [15].
Limitations
The present study was based on laboratory data. The clinical follow-up of the patients with goitre and the radiological features were not included in this study. Further studies incorporating clinical, radiological, and laboratory data will provide better insight into the relationship between the various haematological parameters and the clinical severity of thyroid dysfunction.
Conclusions
The clinical manifestations of thyroid dysfunction among patients with goitre typically develop slowly over a period of weeks to months. The most common manifestations of thyroid dysfunction include haematological abnormalities, which, if not diagnosed and treated appropriately, can lead to serious life-threatening complications. CBC should be routinely done in the clinical evaluation of all patients with goitre. Among the various CBC parameters, RDW and MCV can serve as simple and cost-effective parameters for the early diagnosis and appropriate management of patients with haematological abnormalities associated with sub-clinical hypothyroidism.
TABLE 1 : Frequency of various parameters among the patients with goiter
TSH: thyroid-stimulating hormone; HB: haemoglobin; MCV: mean corpuscular volume; RDW: red cell distribution width; PCV: packed cell volume; RBC: red blood cell; WBC: white blood cell
TABLE 2 : Association between TSH levels and other parameters
*Statistically significant (P-value is significant at <0.05). TSH: thyroid-stimulating hormone; RDW: red cell distribution width; MCV: mean corpuscular volume; HB: haemoglobin; RBC: red blood cell; PCV: packed cell volume; WBC: white blood cell; PLT: platelet
Use of various versions of Schwarz method for solving the problem of contact interaction of elastic bodies
A formulation of a fairly general problem of mechanical contact interaction in a system of elastic bodies is given. Various implementations of the Schwarz method for solving the contact problem numerically are described, and the results of the solution of a number of problems are presented. Special attention is paid to calculations in which the grids in the bodies differ significantly in step size.
Introduction
Taking into account the contact interaction between various structural elements of constructions is an important component of the assessment of the strain stress distribution (SSD) in bodies. Analytical solutions of contact problems have been obtained only for a very restricted number of contact interactions and shapes of the contacting surfaces. In almost all practically important situations, it is necessary to apply numerical methods, among which, for solving problems of deformable solid mechanics (DSM), the leading position is held by the finite element method (FEM) [1][2][3][4].
Definition of the problem of contact interaction in a system of thermo-elastic bodies
In the three-dimensional space R^3 with a Cartesian coordinate system Ox_1x_2x_3, we consider a group of bodies occupying the region G = ∪_α G^α with a piecewise smooth boundary ∂G = ∪_α ∂G^α. We assume that the coupling between temperature and deformation can be neglected. Therefore, the problem of heat conduction can be solved separately, and the obtained temperature profile can be applied in the solution of the quasi-static problem of equilibrium of the bodies.
We use the classical formulation of the heat conduction problem, taking into account boundary conditions of the second and third kinds [3,4]. To solve the contact problem, we assume that, on the part of the boundary where the heat exchange condition is set, the environment temperature equals the temperature at the corresponding points in the neighborhood of the contacting bodies.
The mathematical statement of the quasi-static uncoupled problem of DSM in the considered thermo-elastic setting includes the following relations (for each body G^α, i, j = 1, 3): the equilibrium equations
σ_ji,j(u, T) + Q_i(x, t) = 0, x ∈ G^α,   (1)
the kinematic boundary conditions
u(x, t)|_{S^α_1} = u_0(x, t),   (2)
the stress boundary conditions
σ_ji(u, T) n_j|_{S^α_2} = p_i(x, t),   (3)
and the governing equations (in this case, Hooke's law) for the stress tensor components
σ_ij = C_ijkl (ε_kl − ε^th_kl),   (4)
where u(x, t) is the displacement vector of the point determined by the radius vector x = x_i e_i, u_0(x, t) is the displacement vector of a point located on the surface S^α_1, Q(x, t) = Q_i(x, t) e_i is the vector of mass forces, p(x, t) = p_i(x, t) e_i is the vector of the external loading acting on the surface S^α_2, ε_kl are the components of the total strain tensor (given by the Cauchy relations), ε^th_kl are the components of the temperature strain tensor, and C_ijkl are the components of the tensor of elastic coefficients.
When solving the contact problem, it is necessary to satisfy on the contact surfaces of the bodies the contact conditions relating the displacements and stresses. For simplicity, we consider only the case of two bodies with one pair of contact surfaces. Let us consider two thermo-elastic contacting bodies A and B occupying the regions G^A and G^B with piecewise smooth boundaries ∂G^A and ∂G^B in the space R^3.
On the surface of contact S_k = S^A_k = S^B_k, the following conditions must be satisfied: on the displacements (kinematic condition)
u^A_n + u^B_n ≤ δ_n,   (5)
and on the stresses (stress condition)
σ^A_n = σ^B_n ≤ 0,   (6)
where u^A_n and u^B_n are the projections of the displacement vectors of boundary points on the direction of the external normal n^A to the boundary of body A, δ_n is the initial distance (gap) along the normal between boundary points of bodies A and B, and σ^A_n and σ^B_n are the projections of the stress vectors σ^A and σ^B on the external normals n^A and n^B, respectively. The tangential contact stresses σ^α_τ = σ^α · τ^α (τ^α is the tangent to the contact boundary of the corresponding body) are calculated by the formula (Coulomb's law)
|σ^α_τ| = μ |σ^α_n|,   (7)
where μ is the friction coefficient (sliding friction). If adhesion conditions are posed on the contact boundary, then not only the normal components of the displacement and stress vectors but all components of these vectors appear in formulas (5), (6).
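As an illustrative sketch (not from the paper), the pointwise contact conditions (5)-(7) at a matched pair of contact points can be checked numerically. The sign convention (compressive normal stress negative) and the tolerance are assumptions:

```python
def check_contact_conditions(u_n_A, u_n_B, s_n_A, s_n_B, s_tau,
                             delta_n, mu, tol=1e-8):
    """Verify conditions (5)-(7) at one pair of matched contact points.

    u_n_A, u_n_B : normal displacement projections of bodies A and B
    s_n_A, s_n_B : normal contact stresses (negative in compression)
    s_tau        : tangential contact stress on one of the bodies
    delta_n      : initial normal gap between the points
    mu           : sliding friction coefficient
    """
    # (5) kinematic condition: no interpenetration beyond the initial gap
    no_penetration = u_n_A + u_n_B <= delta_n + tol
    # (6) stress condition: equal, compressive normal stresses
    stress_balance = abs(s_n_A - s_n_B) <= tol and s_n_A <= tol
    # (7) Coulomb friction: tangential stress bounded by mu * |normal stress|
    coulomb = abs(s_tau) <= mu * abs(s_n_A) + tol
    return no_penetration and stress_balance and coulomb
```

In a numerical scheme, checks of this kind are applied after each iteration to decide whether the prescribed accuracy has been reached.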
The set of relations (1)-(7) constitutes the mathematical statement of the contact problem of DSM. It is supposed that all functions entering this formulation have sufficient smoothness.
The multi-contact character of the considered problem is determined, first of all, by the geometry of the interacting bodies. Two bodies can interact but have several disconnected surfaces of contact. On the other hand, several bodies can also participate in the interaction. These circumstances need to be taken into account when developing an algorithm for solving problem (1)-(7) numerically.
3. Application of the alternating Schwarz method for solving the contact problem numerically
The stated problem is solved numerically by FEM [1][2][3][4]. In particular, the applied version of FEM for this problem is described in [3,5]. As a result of the FEM discretization, the DSM problem (1)-(7) reduces to solving the linear matrix equation [2,4]
[K]{U} = {R}.   (8)
Here the following notation is used: [K] is the global stiffness matrix, {U} is the global displacement vector, {R} is the global loading vector. Various iterative methods are used to solve contact problems, for example, the penalty method, the Lagrange multiplier method, the combined penalty-Lagrange method, the pseudo-medium method, the alternating Schwarz method, and others [2,4,5]. In this work, the application of the Schwarz method (a version of the domain decomposition method; see [6, p. 412] and [7] for the general case) is considered.
The essence of the method is as follows. At the first step, an initial approximation for the components of the displacement vector is set on the contact surfaces of the bodies (the approximation is chosen from the range of expected values for the region of contact interaction). After solving this problem, kinematic condition (5) on the contact surface is satisfied, but the calculated contact pressures on the opposite contact surfaces of the interacting bodies are not equal (condition (6) is violated). At the next step, by means of a specially constructed correction, equal contact stresses can be obtained, but the resulting displacements no longer satisfy condition (5). At the following iteration step, corrected kinematic conditions are applied again (matching the contacting surfaces). The stress and kinematic iterations alternate until convergence is attained, i.e., until both the kinematic condition (5) and the stress condition (6) on the contact surface are satisfied with a prescribed accuracy. This method is described in more detail in [5,8,9]. Thus, the alternating Schwarz method is an iterative method, and within the finite element technology its essence is as follows. At even iterations, the components of the displacement vectors {U_k}_(A) and {U_k}_(B) of the contact nodes of bodies A and B are corrected; for body A, the correcting expressions have the form (9) for some initial displacement. At odd iterations, the vector of global loading is the sum of two parts: the vector {R_nk}, which describes the impact on a body of all forces except the contact ones, and the vector of nodal contact forces {R_k}, in which only the components corresponding to the nodes on the contact surface are nonzero.
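The alternation between kinematic and stress iterations can be illustrated on a deliberately simplified model (a hypothetical sketch, not the paper's algorithm or code): two elastic bars reduced to springs with stiffnesses kA and kB, where bar A is pushed by a force F across an initial gap delta into bar B. All names and numerical values are illustrative.

```python
def schwarz_contact_1d(kA=2.0, kB=3.0, F=10.0, delta=1.0,
                       alpha=0.5, n_iter=50):
    """Alternating Schwarz iteration for two springs in frictionless contact.

    Kinematic step: a common contact displacement u_c is imposed on both
    bodies, which yields unequal contact reactions P_A, P_B.
    Stress step: a corrected common contact force P is imposed, which
    yields displacements violating the kinematic condition; the next
    kinematic approximation averages the two predictions.
    """
    u_c = 0.0                          # initial approximation
    for _ in range(n_iter):
        # kinematic iteration: displacements match, reactions do not
        P_A = F - kA * u_c             # reaction at the contact node of A
        P_B = kB * (u_c - delta)       # reaction at the contact node of B
        # stress iteration: enforce a common corrected contact force
        P = alpha * P_A + (1.0 - alpha) * P_B
        u_A = (F - P) / kA             # body A solved under force P
        u_B = P / kB                   # body B solved under force P
        # corrected kinematic condition for the next sweep
        u_c = 0.5 * (u_A + (u_B + delta))
    return u_c, P

# converges to the exact contact solution u_c = 2.6, P = 4.8
u_c, P = schwarz_contact_1d()
```

In practice the fixed iteration count would be replaced by a convergence criterion such as the relative change of displacements between successive iterations, as used later in the paper.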
The calculation of the components of the nodal forces {R_k}_(A) and {R_k}_(B), arising at the contact nodes of bodies A and B, is also based on the assumption that, for any kinematic restriction imposed on a surface site, there is a stress loading that makes an equivalent impact on the body. Therefore, after carrying out the kinematic iteration with number 2n, the values {R_k}^{2n}_{(A),m} and {R_k}^{2n}_{(B),m} can be calculated by formula (11), where j is the global number of the variable corresponding to the node with number m on the contact interface and α^{2n} is an iteration parameter. The corrected nodal contact forces are then used to form the global vectors of nodal loading {R}_(A) and {R}_(B) for bodies A and B, after which matrix equation (8) is solved for each of the two considered bodies.
Convergence of the described iteration scheme is considered in [8,9]. The use of relations (11) for grids whose nodes on the contact surfaces of the two bodies coincide or are close to each other yields rather good results. But in the case of non-coincident nodes, especially when the grid steps of the two bodies differ significantly, the direct application of (11) considerably worsens the convergence of the iteration process (when second- or higher-order FE are used, the situation becomes even worse).
Since, in reality, the contacting sites of the surfaces of the two bodies are acted upon not by concentrated forces applied at grid nodes but by distributed contact forces operating over the entire contact boundary, the following expression (by analogy with the vector of surface forces) can be written for the j-th component of the vector {R_k}:
{R_k}_j = ∫_{S_k} p_k(x) N^S_j(x) dS,   (12)
where N^S_j is the basis function defined on the surface and corresponding to the node with number j, and p_k(x) is the contact pressure.
It is obvious that, when distinct grids are used, in the stress iterations it is necessary to correct not the values of the nodal forces {R_k}_(A) and {R_k}_(B), which depend not only on the pressure but also on the integrals of the basis functions, but the values of the contact pressure p_k(x) themselves. Below we consider various realizations of the Schwarz method, which differ only in the relations for the stress iterations (in the kinematic iterations, the displacements are calculated by formula (9)).
The first realization of the Schwarz method is based on the following steps:
1. We replace the unknown contact pressure in (12) with an approximation based on the basis functions N^S associated with the nodes of the surface grid (for first-order FE, these are piecewise linear functions):
p_k(x) ≈ Σ_{i=1}^{M} p_i N^S_i(x),   (13)
where M is the number of nodes on the considered contact surface.
2. After each kinematic iteration, we find the values of the components of the vector of nodal forces {R_k}_m, m = 1, ..., M.
3. Substituting (13) into (12), we obtain a system of M equations for the M unknown pressures:
Σ_{i=1}^{M} p_i ∫_{S_k} N^S_i(x) N^S_j(x) dS = {R_k}_j, j = 1, ..., M.   (14)
Solving (14), we obtain the values of the contact pressures after the kinematic iteration. These pressures do not satisfy condition (6).
4. We correct the obtained pressures by a formula similar to (11).
5. Knowing the corrected pressures, we calculate new values of the nodal forces by (12).
6. Including the obtained vector of nodal forces in the vector of global loading, we solve matrix equation (8) for each body.
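The pressure-recovery steps of the first method amount to solving a surface "mass matrix" system (14) for the nodal pressures. A minimal 1D sketch with piecewise-linear basis functions and assumed illustrative data (not the paper's code):

```python
import numpy as np

def surface_mass_matrix(n_elem, h):
    """Assemble M_ij = integral(N_i * N_j dS) on a uniform 1D surface grid
    of n_elem elements with step h and piecewise-linear basis functions."""
    M = np.zeros((n_elem + 1, n_elem + 1))
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # element matrix
    for e in range(n_elem):
        M[e:e + 2, e:e + 2] += Me
    return M

n_elem, h, p0 = 8, 0.25, 4.0
M = surface_mass_matrix(n_elem, h)
# consistent nodal forces for a constant contact pressure p0
R = M @ np.full(n_elem + 1, p0)
# recover the nodal pressures by solving M p = R
p = np.linalg.solve(M, R)
```

For a pressure that is exactly representable in the basis (here a constant), the recovery is exact; the point of the construction is that the correction is applied to the pressures p rather than to the force values R, which also contain the basis-function integrals.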
Further, the method defined by points 1-6 will be called the "first method." In [10], a generalization of this version of the Schwarz method to the case of multi-contact interaction is considered, and the influence of the choice of the initial approximation on the speed of convergence of the iteration process is investigated. For a particular class of problems, a modification of the Schwarz method is proposed that allows one to obtain an initial approximation by solving a number of auxiliary problems that directly take into account the various loadings acting on the bodies.
4. Application of the Schwarz method to the problem of contact of a large number of bodies
The numerical algorithms described above are realized as computational procedures which, as a component of the central computing core, enter the prototype of the integrated program platform TEMETOS for carrying out computing experiments in complex problems of mathematical modeling [12].
The authors carried out a number of calculations simulating some thermo-mechanical effects occurring in a fuel element (TVEL) [10,11].
For example, for calculating the SSD parameters in a TVEL as it reaches rated power, the following problem is considered: a thick-walled pipe (the TVEL cladding) contains a column of identical cylinders stacked on top of each other; the cylinders have an internal opening and flats on both end faces (fuel pellets). In the cylinders, uniform thermal emission obeying the following law is assumed: the power of thermal emission rises linearly in time up to the limit value q_l^max and then remains constant. The temperature of the external surface of the pipe is maintained at a constant value T_1. Between the external surfaces of the cylinders and the internal surface of the pipe, there is heat exchange. The lower end face of the pipe and the lower end face of the lowest cylinder are fixed. On the external surface of the pipe, a constant pressure p_1 is set; on the upper face of the top cylinder, a constant pressure p_2 is set.
The formulation of the problem allows one to use a modified version of the Schwarz method. In the course of solving the problem, the dynamic temperature problem is solved first, and then the obtained temperature fields are used in solving the quasi-stationary equilibrium equation.
In [10], a series of calculations in the axially symmetric statement is described, where the number of cylinders varies from several tens to several hundreds (as contact conditions, sliding conditions without friction are chosen). At the initial instant, there was a gap between the cylinders and the pipe; the contact between the cylinders and the pipe then arose as a result of the growth of the temperature strains of the bodies. The iterations were stopped when the maximal relative change of displacements (between two successive iterations) did not exceed 1%. To achieve this accuracy, 10-15 iterations per time step were required on average (for any number of cylinders). Let us note that, in these problems, the total system of linear equations for each body was solved independently, which significantly reduced the calculation time.
A similar problem was also solved in the three-dimensional statement (the number of cylinders varied from two to ten). In [11], the results of three-dimensional calculations for 4 pellets are given. The case of an axially symmetric loading is considered, where the calculation area corresponds to a 90° sector. As the contact condition, sliding with sliding friction is chosen.
The obtained results are compared with the results of two-dimensional (axisymmetric) calculations carried out with the ANSYS code using the Lagrange multiplier method. The comparison showed rather good qualitative and quantitative agreement between the considered quantities. Figure 1 shows the distribution of radial and axial stresses for the three-dimensional calculation area consisting of two cylinders and a pipe at one of the time instants.
At the moment, the used version of the Schwarz method has an essential drawback: it cannot be applied directly if there are cells of the surface grid which are only in partial contact (this will be called "partial contact of a cell"). In this case, for some nodes located on the boundaries of the contact surface, it is difficult to interpret the concept of a "corresponding point," which is of key significance for formulas (9) and (11). In the course of heating, considerable axial displacements of the cylinders are observed, and situations with partial contact of cells arise constantly. To resolve this problem, a local reorganization of the grid in the pipe near the boundaries of the surface of contact between the pipe and each cylinder was carried out at each iteration step. Owing to such reorganizations of the grid, problems with partial contact of cells were avoided, but situations arose in which the grid steps in particular subareas of the cylinders and of the pipe differed significantly (sometimes by more than a factor of two). At the same time, in all the carried-out calculations (three-dimensional and two-dimensional), there were stress concentrators near the contact between the corners of the cylinder facets and the pipe.
The numerical values of the contact pressure in these areas are very sensitive to the calculation grid used, and such differences in the grid steps therefore led to oscillations in the curves of the contact pressure and decelerated the convergence of the iteration process.
These problems made it clear that a more refined realization of the Schwarz method needs to be developed. The main ideas underlying the new version are explained below.
New version of the Schwarz method
The disadvantages of the previous version of the Schwarz method can be explained by the fact that, when the grid is too coarse to describe the contact pressure, the total force applied to the contact surface of the first body may differ from the total force applied to the contact surface of the second body, which slows the convergence of the iteration process.
To avoid this situation, the applied algorithm can be supplemented with the requirement that the forces acting on the contacting surfaces coincide. This restriction can be imposed as follows: we divide the contact surface of each body into non-intersecting sites and, in the stress iteration, require that equality hold not for the values of the pressure at grid nodes but for the values of the contact forces obtained by integrating the pressure over each such site of one body and over the corresponding site of the other body.
Let us note that a similar approach potentially allows one to solve a problem with partial contact of cells if, instead of the term "corresponding point," one uses the more universal concept of a "corresponding site." Let us describe one possible realization of this algorithm:
1. We replace the unknown contact pressure in (12) with an approximation based on basis functions χ^S defined on the surface grid, which do not necessarily coincide with the basis functions N^S:
p_k(x) ≈ Σ_{j=1}^{L} p_j χ^S_j(x),   (17)
where L is the number of introduced basis functions. The contact surface of the considered body thus consists of L non-intersecting sites: S_k = ∪_{j=1}^{L} S_k,j.
2. After each kinematic iteration with number 2n (n = 0, 1, 2, ...), we find the values of the components of the vector of nodal forces {R_k}^{2n}_m, m = 1, ..., M.
3. Substituting (17) into (12), we obtain a system of M equations for the L unknown pressures:
Σ_{s=1}^{L} p_s ∫_{S_k} χ^S_s(x) N^S_m(x) dS = {R_k}^{2n}_m, m = 1, ..., M.   (18)
Solving (18), we find the values of the contact pressure after the kinematic iteration (if L does not coincide with M, methods for systems with a rectangular matrix must be used). The obtained pressure does not satisfy stress condition (6).
4. We calculate the contact forces corresponding to each site S_k,j:
{P_k}_j = ∫_{S_k,j} p_k(x) dS, j = 1, ..., L.   (19)
5. We correct the obtained contact forces by a formula similar to (11). Here {P_k}^{2n}_{(B),s} is the contact force calculated on the site of the surface of body B that corresponds to the considered site S_k,j of body A.
6. After calculation of the corrected contact forces, we determine the contact pressure by solving the corresponding set of equations.
7. When the corrected pressures are known, we calculate new values of the nodal forces by (12).
8. Including the obtained vector of nodal forces in the vector of global loading, we solve matrix equation (8) for each body.
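The core of steps 4-6 — integrating the pressure over corresponding sites, averaging the two cell forces, and converting back to pressures — can be sketched for piecewise-constant pressures on two non-matching 1D surface grids. Data and names here are illustrative assumptions, not the paper's code:

```python
def overlap(a, b, c, d):
    """Length of the intersection of intervals [a, b] and [c, d]."""
    return max(0.0, min(b, d) - max(a, c))

def equalize_cell_forces(edges_A, p_A, edges_B, p_B, alpha=0.5):
    """Correct body A's cell pressures so that cell forces agree with B's."""
    p_new = []
    for i in range(len(p_A)):
        a, b = edges_A[i], edges_A[i + 1]
        P_A = p_A[i] * (b - a)                     # force on cell S_k,i of A
        # corresponding force on B: integrate B's pressure over [a, b]
        P_B = sum(p_B[j] * overlap(a, b, edges_B[j], edges_B[j + 1])
                  for j in range(len(p_B)))
        P = alpha * P_A + (1.0 - alpha) * P_B      # corrected cell force
        p_new.append(P / (b - a))                  # back to a pressure
    return p_new

edges_A = [0.0, 0.25, 0.5, 0.75, 1.0]   # fine surface grid on body A
edges_B = [0.0, 0.5, 1.0]               # coarse surface grid on body B
p_A = [10.0, 12.0, 12.0, 10.0]
p_B = [10.0, 12.0]
p_corr = equalize_cell_forces(edges_A, p_A, edges_B, p_B)
```

By construction, the total corrected force on A equals the average of the total forces of the two bodies, which is exactly the conservation property that the nodal formula (11) loses on strongly non-matching grids.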
One possible choice of the basis functions χ^S is functions that are piecewise constant on the surface elements. Then L is the number of surface elements in contact (L differs from M, which is the number of nodes in contact).
For further application, we consider the case L = M. As the non-intersecting sites S_k,j of the contact surface, we take the Dirichlet cells corresponding to the nodes of the surface grid. As the basis functions χ^S, we choose the functions N^S (for first-order FE, piecewise linear functions).
Further, the method given by points 1-8 is called the "second method."
Comparison of various versions of the Schwarz method for a test problem
Let us analyze the results of calculations applying the two realizations of the Schwarz method described above to the following two-dimensional test problem: the second body, of width 6 and height 2, rests on the first body, of width 9 and height 4. On the left side, the bodies are fixed in the horizontal direction, and the lower body is fixed at the bottom in the vertical direction. The upper body is subjected to the distributed load p(x) = p0[1 − cos(2πx/l)], where p0 = 10 and l = 3. As the condition on the contact surface, we choose adhesion, so the solution of the considered contact problem must coincide with the solution of the problem in which the structure is a single uniform body. To estimate the accuracy of the obtained results, it is therefore advisable to compare them with calculations for one body under the same loading.
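As a quick consistency check on the load (a side note, not from the paper): integrated over one period [0, l], the cosine term vanishes, so the total applied force is p0·l = 30.

```python
import math

p0, l = 10.0, 3.0

def p(x):
    """Distributed load of the test problem: p(x) = p0 * (1 - cos(2*pi*x/l))."""
    return p0 * (1.0 - math.cos(2.0 * math.pi * x / l))

# midpoint-rule integration of the total load over one period [0, l]
n = 1000
total = sum(p((i + 0.5) * l / n) * (l / n) for i in range(n))
```

The midpoint rule is effectively exact here because the cosine is sampled uniformly over a full period, so its contributions cancel.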
In all calculations, the FE of first order were applied. All values are given in dimensionless form.
The chosen initial approximation can affect the speed of convergence of the Schwarz method, but for the problems considered in this work this influence is rather limited; therefore, in all calculations, the following values of the displacements on all contact surfaces were chosen as the initial approximation: u_x = 0, u_y = −10^−4. To estimate the accuracy of the numerical solution of the contact problem, the following values were considered: the stresses (σ_xx, σ_yy) obtained near the contact boundary (at the centers of the cells located along the contact surface of the first body), and the displacements (u_x, u_y) at the nodes located on the contact boundary (for both bodies; the number of the body is given in brackets). These values were compared with the corresponding values obtained in calculations for one body with step h (for the corresponding section).
For calculations in which a uniform grid with an identical step was used in both bodies, the results obtained by the different methods are almost identical; they coincide with the values obtained in calculations for one body with a relative accuracy of about 10^−5. When the grid steps differ rather strongly, the results of the calculations become different. On such grids, all the considered quantities are calculated with significantly lower accuracy. Moreover, a larger discrepancy of the displacements on the contact surfaces of the two bodies (after the stress iteration) becomes essential: the previously obtained relative error of the displacements (when comparing the values on the surfaces of the first and second bodies) was approximately 10^−5, whereas the displacements now differ by several percent. At the same time, no further decrease in accuracy is observed: at the stress and kinematic iterations, we have two sufficiently steady solutions which do not coincide with each other. At the stress iterations, the nodal values of the contact pressure coincide, but there is an overlap of the displacements; at the kinematic iterations, the displacements coincide, but there is a jump in the contact pressure.
For illustration, we present the curves of the displacements and stresses after 50 iterations for the maximal difference of grid steps (h_1 = 0.125, h_2 = 0.375). Figure 2 shows the curves of the displacements u_x and u_y on the contact surfaces of both bodies calculated by the various methods (curves with markers) and calculated for one body (grid step h = 0.125, curves without markers). The curves show that, for the first method, on the given grids at the considered stress iteration there is a very large divergence of the displacements (for u_x, the discrepancy attains 25%, and for u_y, 7%). For the second method, the gap in the displacements at the stress iteration is significantly smaller. Figure 3 shows the curves of the stresses (σ_xx, σ_yy) near the contact boundary of the first body for calculations by the various contact methods (curves with markers) and for calculations for one body (grid step h = 0.125, curves without markers). For the considered problem, there is a singular point on the contact surface: the edge of the upper body. As the grid becomes finer, the contact pressure grows at this point without bound, so the stresses calculated in a neighborhood of this point are the most sensitive to the grid steps, and in cases where grids with different steps are constructed, the greatest divergence of the solutions is observed there.
From the above analysis of the obtained data, it can be concluded that, for the considered test problem with significantly different grids, the second method allows one to obtain more precise results. Its application to the multi-contact problem considered above is planned in the near future.
Conclusions
An algorithm for the numerical solution of the contact problem for a system of interacting elastic bodies by the alternating Schwarz method is described. The results of applying this method to a multidimensional problem of contact interaction between a large number of bodies are given. A new version of the realization of the Schwarz method, focused on the case where the grid steps in the different bodies are significantly different, is proposed. A two-dimensional test problem of contact is used to compare the two methods in a series of calculations with increasingly different grids. It is shown that the new method allows one to obtain better results than the first one.
Dual Role of ACBD6 in the Acylation Remodeling of Lipids and Proteins
The transfer of acyl chains to proteins and lipids from acyl-CoA donor molecules is achieved by the actions of diverse enzymes and proteins, including the acyl-CoA binding domain-containing protein ACBD6. N-myristoyl-transferase (NMT) enzymes catalyze the covalent attachment of a 14-carbon acyl chain from the relatively rare myristoyl-CoA to the N-terminal glycine residue of myr-proteins. The interaction of the ankyrin-repeat domain of ACBD6 with NMT produces an active enzymatic complex for the use of myristoyl-CoA protected from competitive inhibition by acyl donor competitors. The absence of the ACBD6/NMT complex in ACBD6.KO cells increased the sensitivity of the cells to competitors and significantly reduced myristoylation of proteins. Protein palmitoylation was not altered in those cells. The specific defect in myristoyl-transferase activity of the ACBD6.KO cells provided further evidence of the essential functional role of the interaction of ACBD6 with the NMT enzymes. Acyl-CoAs bound to the acyl-CoA binding domain of ACBD6 are acyl donors for the lysophospholipid acyl-transferase enzymes (LPLAT), which acylate single acyl-chain lipids, such as the bioactive molecules LPA and LPC. Whereas the formation of acyl-CoAs was not altered in ACBD6.KO cells, lipid acylation processes were significantly reduced. The defect in PC formation from LPC by the LPCAT enzymes resulted in reduced lipid droplet content. The diversity of the processes affected by ACBD6 highlights its dual function as a carrier and a regulator of acyl-CoA dependent reactions. The unique role of ACBD6 represents an essential common feature of (acyl-CoA)-dependent modification pathways controlling the lipid and protein composition of human cell membranes.
Introduction
The acyl-CoA binding protein ACBD6 interacts with the N-Myristoyltransferase (NMT) enzymes to form a dimeric enzymatic complex regulating and controlling the specificity of the myristoylation process [1][2][3]. N-myristoylation is an essential modification regulating the functions, stability, and membrane association of a diverse set of cytosolic proteins in cells [4][5][6][7][8][9][10][11][12]. Acylation of the N-terminal glycine with the 14-carbon acyl donor myristoyl-CoA (Myr-CoA) occurs mainly during translation. Interaction with NMT requires the C-terminal ankyrin-repeat domain of ACBD6 to form an enzymatic complex with enhanced activity that is protected from competitive inhibition by more abundant acyl-CoAs, such as palmitoyl-CoA [2,[13][14][15][16][17][18]. Myristate and myristate analogs must be esterified with CoA to access the acyl-CoA binding site of NMT [17,[19][20][21]. Upon thio-esterification by cellular acyl-CoA synthetases (ACSL), Myr analogs can compete with Myr-CoA binding and occupy the site until the analog chain can be transferred onto a polypeptide acyl-acceptor. The acyl chain of palmitoyl-CoA, which binds to NMT with high affinity, is not a substrate of the acyl-transferase reaction, and the NMT catalytic cycle is blocked [1,17,19]. Similarly, the 2-hydroxymyristate chain of the 2-OH Myr-CoA analog efficiently inhibits N-myristoylation in various cell types with an estimated in vitro Ki of 45 nM, which is ≈45,000-fold lower than the Ki of the unesterified form [3,[22][23][24][25][26][27][28][29][30][31][32], bringing into question the rationale for suggesting that the fatty acid 2-OH Myr itself could have been an inhibitor of NMT enzymes [33]. The commonly used myristoyl in vivo labeling probes 12-azidododecanoic acid (12-ADA) and 13-tetradecynoic acid (YnMyr) are also Myr analogs once converted to CoA ester derivatives by the cellular ACSL enzymes [5,7,[34][35][36].
These probes designed to monitor the myristoylation of proteins actually compete with the binding of the correct acyl-donor (Myr-CoA) to NMT, and the consequence of such competition on the membrane association and functions of the thousands of myr-proteins is often overlooked during prolonged in vivo labeling experiments [37][38][39].
We established that one of the functions of ACBD6 was to protect the NMT/ACBD6 complex from competition and provide enhanced activity under Myr-CoA limiting concentrations. It was argued that binding of Myr-CoA to ACBD6 in a complex with NMT could sequester the acyl-donor away from NMT, allowing access of other acyl-CoAs to the Myr-CoA binding site and promote lack of specificity of the myristoylation reaction [40]. However, the Myr-CoA bound to the ACB domain is channeled to NMT and the acyl-CoA binding (ACB) domain is not necessary to provide protection [2,3]. The ACB domain appears to positively regulate the function of the ANK module in the ACBD6/NMT complex. Ligands bound to the ACB domain act as positive effectors of the acyltransferase activity of the NMT/ACBD6 complex. Unique among the members of the ACBD family [41], the phosphorylation of two serine residues of the ACB domain regulates the binding activity of ACBD6 and further enhances the activity of the ACBD6/NMT complex [2].
In addition to its role in regulating the function of the NMT enzymes of human and other organisms, ACBD6 can regulate the availability of acyl-CoAs in partition between the cytosolic and membrane compartments of the cells. Acyl-CoAs bound to ACBD6 are acyl-donors for the lysophospholipid acyltransferase enzymes (LPLAT). The dynamic binding property of the ACB domain allows the controlled release of acyl-CoA to the membrane-bound enzymes and protects them from the detergent-like property of their substrates [42,43].
The importance of ACBD6 is underscored by the fact that genetic mutations of the ACBD6 gene preventing the production of a full-length protein are associated with neurodegenerative syndromes in humans [3]. In view of the variety of processes influenced by ACBD6, and the presence of several other acyl-CoA binding proteins with potentially overlapping function, as well as the independent role of the two functional modules (ACB and ANK), we investigated the effects of the disruption of the ACBD6 gene in human cells. HeLa ACBD6.KO cells that do not produce the ACBD6 protein were viable. The acylation of lipids was significantly reduced in the absence of ACBD6 and the defect in PC formation by the Lands' cycle led to a defect in formation of lipid droplets. The deficiency of the myristoylation reaction in the absence of an NMT/ACBD6 complex was evidenced by a decrease of the in vivo rate of protein myristoylation, and by the increased sensitivity of the cells to NMT inhibitors. These results established that ACBD6 supports two distinct acyl-CoA dependent acylation pathways essential for the remodeling of lipids in membranes by the LPLAT enzymes and of proteins by the NMT enzymes.
Construction and Characterization of the ACBD6.KO Cells
Deletion of the human ACBD6 gene was performed in HeLa cells with an ACBD6 CRISPR/Cas9 set (Santa Cruz; #sc-413630) designed to remove the entire ACB domain and generate a codon frameshift leading to disruption of the open reading frame. Transfected cells were selected in the presence of 1 µg/mL puromycin with medium changed every two days for a period of two weeks. Several surviving clones were obtained and were initially analyzed by RT-PCR to confirm disruption. The absence of production of an ACBD6 protein, full-length or truncated, was further confirmed by Western blotting. The ACBD6.KO cells do not display apparent growth defects and were maintained in culture as their parent cells. Total RNA was isolated with the PureLink RNA Mini Kit according to the manufacturer's instructions (Thermo Fisher Scientific, Pittsburgh, PA, USA). Purified RNAs were treated with RNase-free DNase I (TURBO DNase, Thermo Fisher Scientific). Synthesis of cDNA was performed with the RevertAid First Strand cDNA Synthesis kit in the presence of oligo(dT) primers (Thermo Fisher Scientific). End-point RT-PCRs were performed with the SuperScript™ One-Step RT-PCR System with Platinum™ Taq DNA Polymerase (Thermo Fisher Scientific). cDNAs of five clones were sequenced to confirm the identity of the deletion in the clones. Western blot detection of ACBD6 (#MA5-28990; Thermo Fisher Scientific), NMT2 (#sc-136005; Santa Cruz Biotech, Dallas, TX, USA) and ACTB (#sc-81178; Santa Cruz Biotech) was performed with mouse monoclonal antibodies.
Cell Culture and Growth Experiments
Cells were grown in high-glucose DMEM supplemented with 10% fetal bovine serum, 2 mM glutamine, 5 mM non-essential amino acids and 1% (v/v) MEM vitamins (Thermo Fisher Scientific). Growth measurements were performed in 96-well plates and quantified by staining with the sulforhodamine B (SRB) dye, as previously described [3]. Absorbance was measured at 560 nm with a microplate reader. As indicated in the figure legends, cells were grown in the presence of increasing concentrations of 2-hydroxymyristate (Cayman, Ann Arbor, MI, USA) and IMP-1088 (Cayman).
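The SRB growth readout above reduces to a simple normalization: background-corrected absorbance of treated wells expressed as a fraction of untreated controls. A minimal sketch of that calculation (the function name and all well readings are illustrative, not data from this study):

```python
def relative_growth(treated_a560, control_a560, blank_a560):
    """Mean background-corrected SRB absorbance (A560) of drug-treated
    wells, expressed as a fraction of the untreated control wells."""
    treated = sum(a - blank_a560 for a in treated_a560) / len(treated_a560)
    control = sum(a - blank_a560 for a in control_a560) / len(control_a560)
    return treated / control

# Illustrative A560 readings: three treated wells, three control wells,
# and a medium-only blank; the result is the fractional growth (≈0.49 here).
frac = relative_growth([0.42, 0.40, 0.44], [0.82, 0.80, 0.78], 0.05)
```

With these made-up readings, the treated wells reach roughly half of the control signal; dose-response curves are built by repeating this ratio across inhibitor concentrations.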
Lipid Droplet Quantification and Isolation
Lipid droplet (LD) detection and quantification were performed with the Cell Navigator Fluorimetric Lipid Droplet assay kit (em/ex 550/640; AAT Bioquest, Pleasanton, CA, USA), according to the manufacturer's instructions. Cells were grown in 96-well plates for 24 h to near confluency in the absence or presence of 200 µM oleic acid, made from a solution of 200 mM oleic acid/40 mM defatted BSA in PBS. LDs were quantified from sets of 7 wells in three independent experiments. Cells were then fixed with TCA and stained with SRB for total protein quantification, which was used to normalize the fluorescence value of each well. LDs were isolated from cells grown to near confluency in four T75 flasks in the presence of 100 µM oleic acid for 24 h, using the LD Isolation kit (Cell Biolabs, San Diego, CA, USA). LDs were collected in about 400 µL at a protein concentration of 0.1 µg/µL and stored at −80 °C.
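Each well's LD fluorescence is divided by that same well's SRB total-protein signal before averaging across replicate wells, as described above. A minimal sketch of that per-well normalization (function name and numbers are illustrative):

```python
def normalized_ld_signal(ld_fluorescence, srb_absorbance):
    """Normalize per-well lipid-droplet fluorescence by the SRB
    total-protein signal of the same well, then average the
    fluorescence/protein ratios across replicate wells."""
    if len(ld_fluorescence) != len(srb_absorbance):
        raise ValueError("one SRB value is required per well")
    ratios = [f / p for f, p in zip(ld_fluorescence, srb_absorbance)]
    return sum(ratios) / len(ratios)

# Illustrative values: two wells with proportional LD and protein signals
# give the same normalized LD content.
signal = normalized_ld_signal([1000.0, 1100.0], [0.5, 0.55])
```

Normalizing per well, rather than dividing the mean fluorescence by the mean protein, keeps unevenly seeded wells from skewing the comparison between HeLa and ACBD6.KO plates.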
N-myristoyltransferase Activity Measurements
The reactions were performed as previously described with a few modifications [2]. For the measurements performed in the presence of purified human ACSL6 (150 nM) [44], ATP (10 µM) and CoASH (0.3 µM) were added to the reactions. Some reactions were performed in the presence of the fatty acid precursor C14:0 (Myr; 20 µM), the fatty acid analog competitor 2-hydroxymyristate (2-OH Myr; 10, 50, 100, 1000 µM), or the peptide binding inhibitor IMP-1088 (10, 100, 1000 nM), as indicated in the figure legends. Fatty acids, dried from 100 mM stock solutions made in ethanol, were maintained in solution with Triton X-100 (final concentration in the reaction was 0.04%). Reactions were performed in triplicate in 200 µL at 37 °C with 250 nM purified human NMT2, unless otherwise indicated. Detection and quantification of the formation of the acyl-peptide were performed by reverse phase HPLC [2]. The measurement of the in vivo myristoyltransferase activity of the ACBD6.KO cells was performed by quantification of the incorporation of the analog 12-azidododecanoic acid (12-ADA) (Click-iT myristic acid kit #C10268; Thermo Fisher Scientific) into proteins. After protein extraction, the azido-myristoylated proteins were reacted with alkyne-biotin (Click-iT Biotin protein analysis detection kit #C33372; Thermo Fisher Scientific) and detected by Western blotting with HRP-streptavidin. Cells were grown in T75 flasks to about 70% confluency, and 5 µM 12-ADA (made 16.6 mM in DMSO) was added to the growth medium and incubated for 1-4-18 h, as indicated in the figure legend. The medium was removed, cells were washed three times with ice-cold water, and the cells were lysed in the flask with 1 mL of ice-cold 50 mM Tris-HCl pH 8.0, 1% SDS, and Halt™ Protease Inhibitor Cocktail (Thermo Fisher Scientific). After agitation at 4 °C for 20 min, the cell extract was collected and sonicated for 10 s on ice.
The solution was then cleared of debris and aggregates by centrifugation at 16,000× g for 5 min at 4 °C. The proteins were then precipitated with methanol/chloroform and the pellet was washed with methanol, dried, and suspended in 100 µL of 50 mM Tris-HCl pH 8.0 and 1% SDS. The protein concentrations were determined with the Detergent Compatible Bradford assay kit (Thermo Fisher Scientific) with BSA as reference. About 200 µg of proteins was then reacted with alkyne-biotin, according to the manufacturer's instructions. Proteins were precipitated with methanol/chloroform and carefully washed with methanol. Pellets were suspended in 50 µL of 50 mM Tris-HCl pH 8.0 and 1% SDS and the protein concentration was determined. Proteins (40 µg) were separated on denaturing SDS-PAGE (Any kD TGX gel; Bio-Rad, Hercules, CA, USA), transferred to a PVDF membrane and reacted with streptavidin-HRP (#STAR5B; Bio-Rad). Following detection, the membrane was stripped and blotted with a mouse monoclonal ACTB antibody (#sc-81178; Santa Cruz Biotech).
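The in vitro inhibition figures quoted in the Results (e.g., ≈80% inhibition by 2-OH Myr) come down to comparing the amount of acyl-peptide product formed, as quantified by HPLC, with and without the competitor. A minimal sketch of that arithmetic (function name and peak-area values are illustrative):

```python
def percent_inhibition(product_with_inhibitor, product_control):
    """Percent inhibition of NMT, computed from the acyl-peptide product
    quantity (e.g., an HPLC peak area) measured with the competitor
    present versus an uninhibited control reaction."""
    return 100.0 * (1.0 - product_with_inhibitor / product_control)

# Illustrative peak areas: 2.0 units of product with competitor vs.
# 10.0 units in the control corresponds to 80% inhibition.
inhibition = percent_inhibition(2.0, 10.0)
```

The same ratio applies whether the competitor is 2-OH Myr-CoA generated in situ by ACSL6 or a peptide-pocket drug such as IMP-1088.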
Protein Palmitoylation Quantification
Cells were grown and labeled with 10 µM 15-azido-pentadecanoic acid (Thermo Fisher Scientific) as described above. Protein extraction, biotin-alkyne reaction, and analysis were performed as described for the 12-ADA labeling experiments.
Fatty Acid Incorporation
Cells were grown in 96-well plates to about 70% confluence. The medium was removed and replaced by medium containing 5 µM [14C]C16:0 (made as a 250 µM solution with 0.02% defatted BSA). Cells were incubated for the indicated times (10 to 120 min) and were washed twice with 0.1% BSA made in PBS to remove unincorporated label. Fresh medium was added, and cells were incubated for one hour. For each time point, two sets of 8 wells were assayed. One set was fixed with TCA and stained with SRB for total protein quantification. Scintillation cocktail was added to the second set to dissolve the cells, which were transferred to vials and counted in a scintillation counter (LS6500 Beckman Coulter, Brea, CA, USA).
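For each time point above, the scintillation counts from one well set are normalized to the SRB protein signal of the parallel fixed set, so that differences in cell number do not masquerade as differences in uptake. A minimal sketch of this normalization (names and numbers are illustrative):

```python
def uptake_per_protein(cpm_wells, srb_wells):
    """Mean 14C-fatty-acid uptake (counts per minute) normalized to the
    mean SRB total-protein signal of the parallel set of fixed wells."""
    mean_cpm = sum(cpm_wells) / len(cpm_wells)
    mean_srb = sum(srb_wells) / len(srb_wells)
    return mean_cpm / mean_srb

# Illustrative values: counts from two assay wells and SRB absorbances
# from two parallel protein-quantification wells.
uptake = uptake_per_protein([100.0, 120.0], [0.5, 0.6])
```

Comparing these protein-normalized values across the 10-120 min time course gives the incorporation curves compared between HeLa and ACBD6.KO cells.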
Acyl-CoA Synthetase and lysoPL Acyltransferase Assays
Cells grown in T75 flasks were harvested by trypsinization, washed in PBS and suspended in ice-cold 20 mM sodium phosphate pH 8.0, 10 mM MgCl2, 5 mM DTT, 20% glycerol and Halt™ Protease Inhibitor Cocktail (Thermo Fisher Scientific). Cells were lysed with a glass grinder and debris was removed by centrifugation at 2000× g for 10 min at 4 °C. The protein concentration of the cleared protein extracts was determined, and the extracts were stored at −80 °C. Reactions were performed at 37 °C in 200 µL of 20 mM sodium phosphate pH 8.0, 2 mM DTT, 20 mM MgCl2, 10 mM ATP and 0.5 mM CoA, with 10 µM [14C]C16:0, 10 µM lysoPC (or lysoPA), and 8.5 µg to 20 µg of protein extract, as indicated in the figure legends. Sets of reactions were performed in the absence of lysoPC/lysoPA to assay the ACSL activity. LPCAT assays of isolated LDs were performed with 5 µM [14C]C16-CoA, 20 µM lysoPC and 0.8 µg LD proteins. For each assay, the incubation times are indicated in the figures. Three to four time points, in triplicate, were used to determine the rates of incorporation. Calculations and statistical analysis were performed with GraphPad Prism 9.
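The incorporation rates are obtained by fitting product amount against time over the three to four time points; the least-squares slope computed in GraphPad Prism can be sketched as follows (function name, units, and data points are illustrative):

```python
def incorporation_rate(times_min, product_pmol):
    """Least-squares slope (pmol/min) of a product-vs-time series, the
    linear-rate fit used to compare acyltransferase activities."""
    n = len(times_min)
    mean_t = sum(times_min) / n
    mean_p = sum(product_pmol) / n
    numerator = sum((t - mean_t) * (p - mean_p)
                    for t, p in zip(times_min, product_pmol))
    denominator = sum((t - mean_t) ** 2 for t in times_min)
    return numerator / denominator

# Illustrative time course: four time points yielding a rate of 0.5 pmol/min.
rate = incorporation_rate([0, 10, 20, 30], [0.0, 5.0, 10.0, 15.0])
```

Fitting a slope through all time points, rather than taking a single end-point ratio, is what makes the 60-70% LPLAT-activity reduction reported below a rate comparison rather than a yield comparison.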
Role of the Acyl-CoA Synthetase Enzymes in the Myristoylation Reaction
The apparent misunderstanding of the NMT requirement for thioesterification of the acyl-donors and acyl-competitors [33] led us to reassess the role of the long-chain acyl-CoA synthetase (ACSL) enzymes in the myristoylation reaction. In the cell, ACSL are responsible for the formation of the donor Myr-CoA from the fatty acid myristate, and of the competitor 2-OH Myr-CoA from a non-hydrolysable fatty acid analog of myristate (2-hydroxymyristate; 2-OH Myr) [22]. As expected, we confirmed that neither fatty acid was a donor or competitor of the myristoylation reaction. Even at the high concentration of 1 mM, the formation of the myr-peptide from Myr-CoA was unaffected by the presence of 2-OH Myr (Figure 1A), and no product was formed in the presence of Myr alone (Figure 1B inset). When ATP, CoASH, and the acyl-CoA synthetase enzyme ACSL6 [44] were added to the reaction, myristoylation by NMT proceeded from Myr (Figure 1B). Under these conditions, 2-OH Myr inhibited NMT activity even at the lower concentration of 0.1 mM (≈80% inhibition; Figure 1C). When myristoylation could proceed without a requirement for the ACSL enzyme, by providing Myr-CoA instead of the precursor Myr, 2-OH Myr could still inhibit NMT activity in the presence of ACSL (≈90% inhibition; Figure 1D). These findings confirmed the role of the ACSL enzymes in the myristoylation reaction, and the absolute requirement for thioesterification of the acyl chain. The absence of inhibition of purified NMT1 by 2-OH Myr, which was interpreted as evidence that this analog was inactive [33], was in fact expected since Myr analogs must be esterified with CoA to access the NMT binding site [17,[19][20][21]45]. Synthetic compounds designed to occupy the peptide binding pocket, such as IMP-1088, strongly inhibit myristoylation and do not require the activity of cellular enzymes for their action (Figure 1A).
The cellular ACSL enzymes are also responsible for the conversion of various synthetic myristate analogs, such as 12-ADA and YnMyr, into active labeling probes (see below; Figure 2).
Disruption of ACBD6 in Human Cells
Disruption of the ACBD6 gene was performed in HeLa cells using a CRISPR/Cas9 construct resulting in the out-of-frame deletion of the acyl-CoA binding domain (ACB) coding region. Several clones were selected and further analyzed for expression of an ACBD6 mRNA and protein (Figure 3). Reverse transcription of the full-length coding region in the clones produced a single, shorter cDNA (Figure 3A). Sequencing confirmed the out-of-frame deletion of exon 1 to exon 3, encoding the ACB domain (Figure 3B). The absence of an ACBD6 product in the ACBD6.KO clones was further confirmed by Western blotting (Figure 3C). Whereas disruption of ACBD6 is associated with profound neurological deficiencies in humans [3], the growth of these cells was similar to that of their parent cells, indicating that ACBD6 does not appear to be an essential protein under laboratory growth conditions.
NMT Activity Deficiency in the Absence of ACBD6
The availability of Myr-CoA is limiting in cells and can be out-competed by the more abundant Pal-CoA (C16-CoA). Formation of an ACBD6/NMT complex enhances myristoylation under acyl donor limiting conditions but also protects the Myr-CoA binding site from acyl competitors [1][2][3]. Cells not producing the ACBD6 protein provided an opportunity to assess the impact of substrate limitation and competition on the myristoylation reaction. Cells were challenged with NMT inhibitors blocking either the acyl-donor or the polypeptide binding site. The drug IMP-1088 prevents binding of the polypeptide to NMT [46,47], whereas the CoA thioester derivative of the Myr analog 2-OH Myr occupies the Myr-CoA binding site [22]. As shown in Figure 4, the absence of ACBD6 rendered the growth of the cells more sensitive to the drugs and resulted in their inability to grow even at concentrations that had little effect on the parent cells (20 nM IMP-1088 and 20 µM 2-OH Myr). The increased sensitivity of the cells to the two competitors suggested that the activity of NMT was reduced in the ACBD6.KO cells and could not overcome further reduction induced by the binding of the drugs. In addition, these cells were more sensitive to the competitor targeting the Myr-CoA binding site than to the peptide binding inhibitor (Figure 4A,B,D). These findings also rule out the suggestion that the growth inhibition observed in cells exposed to 2-OH Myr was the result of some non-specific toxic effect unrelated to NMT activity [33]. The absence of ACBD6 would not affect a broad metabolic defect induced by this fatty acid. In the absence of ACBD6, it appears that the NMT enzymes are no longer protected from competition and that the low abundance of Myr-CoA in the cells (≈0.1-1 µM) [48] becomes limiting when challenged with otherwise non-inhibitory acyl-donor competitor concentrations (≈10-20 µM).
Protein N-Myristoylation Deficiency in the Absence of ACBD6
To confirm the defect of the N-myristoyl-transferase reaction in the ACBD6.KO cells, the level of myristoylated proteins was quantified in vivo. Cells were grown in the presence of a Myr azide-derivative analog (12-ADA) for up to 18 h [39]. The azide-myristoylated proteins were detected by Click chemistry with an alkyne-biotin/HRP-streptavidin system (see Section 2) (Figure 5). The transfer of the 12-ADA chain from 12-ADA-CoA onto myr-proteins resulted in a protein labeling profile similar to the pattern observed during the in vivo incorporation of 14C-Myr [22] (Figure 5A). Two major bands were detected within one hour of labeling and additional bands accumulated over time. Compared to the parent HeLa cells, cells lacking ACBD6 had a significantly lower amount of myristoylated proteins within the first hour of labeling (≈60% of HeLa) (Figure 5B). This difference decreased during the growth of the cells, and the myristoylation levels of the ACBD6.KO cells were nearly as high as HeLa after 18 h (≈90%). Under laboratory growth conditions, the slower accumulation of mature myristoylated proteins from nascent polypeptides appears to be sufficient to support growth and could account for the lack of a significant growth defect of the ACBD6.KO cells (Figure 4C) [3].
To assess whether the absence of ACBD6 also slowed the activity of other protein acylation enzymes, the palmitoylation of proteins was monitored [6,10]. No significant difference in the amount and rate of protein labeling was detected in the ACBD6.KO cells compared to HeLa (Figure 5C,D). The normal level of protein acylation from Pal-CoA provided further evidence that, in the ACBD6.KO cells, the decreased transfer rate of the myristoyl chain was the result of reduced NMT activity rather than deficiency in the formation of acyl-CoAs, such as the acyl donors Myr-CoA and Pal-CoA (see below, Figure 6).
Figure 5. N-myristoylation of proteins is reduced in the absence of ACBD6. (A,B). HeLa and ACBD6.KO cells were grown in flasks and exposed to the labeling probe 12-ADA (azido-myristate) at a concentration of 5 µM for 1-4-18 h, as indicated. Cells were harvested, lysed, and azido-myristoylated proteins were detected by Click chemistry (see Methods). Proteins were separated on a denaturing SDS-gradient gel, transferred to a PVDF membrane, and detected with streptavidin-HRP. β-actin was used as a loading reference and was detected with a monoclonal antibody (A). Intensity of a major band (asterisk) and of all the visible bands was quantified in HeLa (circle) and ACBD6.KO (square) (B). Values are reported relative to the intensity of the β-actin signal detected in each sample as a function of time. The values obtained for the ACBD6.KO cells are also reported relative to the values obtained with HeLa cells (filled red square; data plotted on the right y axis). Error bars represent the standard deviations of values obtained from 3 measurements: **, p < 0.05; ***, p < 0.005. (C,D). HeLa and ACBD6.KO cells were exposed to the labeling probe 15-azido-pentadecanoic acid (azido-palmitate) at a concentration of 10 µM for 1-4-18 h, as indicated. Proteins were detected as indicated above. The protein ladder Blue Prestained Protein Standard (New England BioLabs, Ipswich, MA, USA) is indicated on the right in (A,C). The values obtained for the ACBD6.KO cells are reported relative to the values obtained with HeLa cells. Error bars represent the standard deviations of values obtained from 3 measurements.
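The quantification described above, band intensity normalized to the β-actin loading control and then expressed relative to the parent HeLa value, can be sketched as follows (function name and intensities are illustrative; a result of 0.6 would simply mirror the ≈60%-of-HeLa figure discussed in the text):

```python
def relative_myristoylation(ko_band, ko_actin, wt_band, wt_actin):
    """Band intensity normalized to the beta-actin loading control, with
    the ACBD6.KO value then expressed relative to the parent HeLa value."""
    return (ko_band / ko_actin) / (wt_band / wt_actin)

# Illustrative densitometry values: equal actin loading, KO band at 60%
# of the HeLa band intensity.
ratio = relative_myristoylation(60.0, 100.0, 100.0, 100.0)
```

Normalizing to β-actin first means that the HeLa-relative ratio is insensitive to loading differences between the two lanes.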
Role of ACBD6 in the Acyl-CoA Dependent Acylation of Lipids
Acyl-CoAs bound to ACBD6 can be channeled to the acyl-CoA dependent acyltransferase LPLAT enzymes, which acylate monoacylglycerophospholipids (lysophospholipids) to phospholipids [42,49,50]. The controlled release of acyl-CoA by ACBD6 appears to be essential for the protection of these membrane-bound proteins from the detergent-like property of their substrates. The in vivo requirement of those enzymes for ACBD6 was assessed in the ACBD6.KO cells. Compared to HeLa, the levels of incorporation of exogenously added fatty acid (14C16:0) and of esterification to 14C16-CoA by the ACSL enzymes were not affected in the ACBD6.KO cells (Figure 6A,B). However, a significant defect in the acylation of lysophospholipids was observed and was not restricted to a specific LPLAT enzyme. Acylation of the two lysophospholipids LPA and LPC by the LPAAT and LPCAT enzymes was reduced by 60-70% in the absence of ACBD6 (Figures 6C,D and 7).
Figure 7. Reaction products from the assays of Figures 6 and 8 were separated by thin-layer chromatography as previously described [51]. Formation of PC and of PA by lysates of ACBD6.KO and HeLa cells is shown in the top and middle panels, respectively. Formation of PC by lipid droplets (LDs) isolated from ACBD6.KO and HeLa cells is presented in the bottom panel.
A deficiency in the acylation of lysophospholipids, an essential step of the Kennedy and Lands' pathways, was expected to impact various processes in the cells [50,52]. The LPCAT1 and LPCAT2 enzymes are bound to the lipid monolayer surrounding the neutral lipid core of lipid droplets [53]. Synthesis of phosphatidylcholine (PC) from acyl-CoA and lysoPC by the LD-bound LPCAT enzymes is essential for LD formation, and the downregulation of either LPCAT1 or LPCAT2 reduced the cellular LD content [53][54][55]. Quantification of LDs produced in the ACBD6.KO cells showed a significant depletion compared to HeLa (−30%; Figure 8A). Addition of oleic acid to the culture was successful in stimulating LD production, and the LD content of the ACBD6.KO cells increased to a near normal level (Figure 8A). The ability of the cells to respond to metabolic stimulation established that the LD synthesis pathway was not irreversibly deficient and suggested that a step had become limiting in the absence of ACBD6. Since these cells were deficient in the essential lysoPC acylation reaction (Figure 6D), lipid droplets were isolated, and their ability to acylate lysoPC from acyl-CoA was determined in vitro. The rates and yields of PC formation of the LDs obtained from the ACBD6.KO cells were similar to those isolated from HeLa (Figures 7 and 8B). The normal level of LPCAT activity bound to the LDs indicated that the decreased LD content of the ACBD6.KO cells was not due to diminished levels of bound LPCAT enzymes. The absence of ACBD6 rendered the cells deficient in the Lands' acylation pathway, limiting the formation of PC and LDs. These findings established a new function for ACBD6 in controlling the formation of lipid vesicles in the cells.
Discussion
The formation of an ACBD6/NMT complex enhances the activity of the enzyme and protects the binding of the C14 acyl donor, Myr-CoA, from competition by abundant, high-affinity acyl-CoAs, such as C16-CoA or C12-CoA [2,14,17]. The defect in protein N-myristoylation and the increased growth sensitivity to NMT inhibitors of the ACBD6.KO cells confirmed that the absence of ACBD6 led to decreased NMT activity. Similarly, skin-derived fibroblasts of individuals carrying loss-of-function mutations of the ACBD6 gene were deficient in myristoylation and hyper-sensitive to NMT inhibitors [3]. As acyl-CoA carriers, other members of the ACBD family can weakly stimulate the NMT reaction in vitro, presumably via acyl-donor channeling, but only ANK-containing ACBD proteins can form an enzymatic complex with NMT [2]. This mechanism is not unique to human cells, and Plasmodium falciparum also has a PfACBD6/PfNMT system [2]. The finding that disruption of ACBD6 slowed but did not prevent myristoylation supports the interpretation that, as suggested, the selection of the donor is not limited to the diffusion of the correct acyl-CoA to the donor site in the first step of the rather unique kinetic mechanism of these enzymes [11]. However, the formation of an ACBD6/NMT complex is essential to provide full activity and support the co-translational acylation modification affecting the functions and localization of an estimated several thousand proteins in human cells [11,12].
The ability of the ACBD6.KO cells to incorporate and esterify fatty acids at levels similar to the parent cells confirmed that ACBD6 is not essential to the acyl-CoA synthetase reactions. ACBD1 (DBI, ACBP) protects those enzymes from feedback inhibition by the acyl-CoA products and appears sufficient to support the activation of fatty acids in the cells [56,57]. These findings provided further evidence that the N-myristoylation defect observed in the ACBD6.KO cells was not a consequence of a defect in Myr-CoA formation. The decreased acylation of lipids affecting the formation of PC by the Lands' pathway accounts for the lower production of LDs observed in the absence of ACBD6. Interestingly, downregulation of the Caenorhabditis elegans ACBD6 homolog, CeACBP-5, resulted in an opposite effect, with a 40% increase in the production of lipid droplets [58]. These findings provide examples of two distinct acyl-CoA-dependent cellular processes regulated by ACBD6, either via ACB-mediated substrate binding or via ANK-mediated complex formation.
The unexpected controversy surrounding the use of compounds other than the IMP drugs, which are designed to block the peptide-binding pocket, to inhibit the activity of NMT enzymes [33] was rather puzzling. Fatty acid analogs targeting the Myr-CoA binding site of NMT must be provided as CoA thioesters for binding [17,[19][20][21]45]. The fatty acid 2-OH Myr is the in vivo precursor of 2-OH Myr-CoA, which is a Myr-CoA analog and a potent inhibitor of NMT (Ki of 45 nM) [22]. Esterification of 2-OH Myr with CoASH by the cellular acyl-CoA synthetases is required for inhibition of the NMT acyl-transferase reaction. In vitro, 2-OH Myr-CoA but not 2-OH Myr inhibits the activity of NMT. The Myr analog labeling probes YnMyr and 12-ADA also require thioesterification to form YnMyr-CoA and 12-ADA-CoA in order to access the Myr-CoA binding site of NMT both in vitro and in vivo [5,7,34,36,38,59,60]. Surprisingly, the inhibitory property of the activated form, 2-OH Myr-CoA, was not tested, and the failure of 2-OH Myr to inhibit the activity of NMT1 in vitro was taken as evidence that it was not an inhibitor [33]. When added to the culture medium, 2-OH Myr has been shown to inhibit myristoylation, membrane association, and functions of diverse myr-proteins, as well as preventing their labeling with radio-labeled [3H or 14C] Myr-CoA, which is a direct measure of in vivo N-myristoylation [3,[22][23][24][25][26][27][28][29][30][31][32]. The conclusion that 2-OH Myr is not an in vitro inhibitor of NMT is accurate, but by the same logic it would be accurate to dismiss the use of YnMyr and 12-ADA as myristoylation labeling probes, since none of these unesterified compounds is a ligand for NMT. These fatty acid analogs can cross cellular membranes and become Myr-CoA analogs once esterified by the cellular acyl-CoA synthetases. All three compounds will then compete with the binding of Myr-CoA to NMT.
However, only acyl chains that cannot serve as donors in the acyl-transferase step, such as 2-hydroxymyristate, will block the catalytic cycle. The azido and alkynyl acyl chains of YnMyr-CoA and 12-ADA-CoA can be transferred onto an acceptor polypeptide, resulting in the labeling of myr-proteins and the release of the Myr-CoA site of the enzyme.
When assaying the in vivo inhibitory efficacy of the fatty acid 2-OH Myr and the IMP compounds on the incorporation of the YnMyr probe, cellular mechanisms interfering with those experiments should be considered. Once activated to a thioester by the cellular ACSL, the acyl chain will be targeted for incorporation into lipids during the several-hour labeling period, preventing it from accumulating to levels sufficient to outcompete both the donor Myr-CoA and the probe YnMyr-CoA. The CoA thioesters of YnMyr and 12-ADA bind NMT with high affinity and compete with Myr-CoA [37][38][39]. Similarly, treatment of the cells with a high concentration of probe, 20 µM [33], will compete with 2-OH Myr-CoA for access to the Myr-CoA binding site and will protect the enzyme from inhibition. The IMP drugs target the peptide binding pocket of the enzyme [46,47], and their efficacy in inhibiting NMT is not affected by the occupancy of the donor site by the YnMyr probe. Even under such unfavorable conditions, the myristoylation of a well-characterized NMT target, ARL1, was decreased in cells exposed to 2-OH Myr, but these findings were dismissed (Figure S1) [33]. As therapeutic drugs, the IMP inhibitors are likely more selective for the targeted inhibition of human intra-cellular pathogenic NMT enzymes than fatty acid analogs, but this does not change the kinetics of the reaction, which will be blocked when the 2-hydroxymyristate chain occupies the donor site [19,21,22]. In addition, a ligand targeting the acyl-CoA binding site, rather than the peptide binding pocket, is the appropriate NMT inhibitor for probing the steps controlling the binding and selection of the acyl donor.
The wide range of processes affected by members of the acyl-CoA binding protein family, present in all kingdoms of life, cannot be accounted for solely by their shared ability to bind lipid intermediate metabolites [41]. The presence of a conserved ACB motif defines this family, but acyl-CoAs are often not involved in the affected processes, and the functions of the ACBDs extend beyond trafficking and buffering of the cytosolic acyl-CoA pools. They also do not appear to have redundant functions, since disruption of one is not compensated by the other members [41]. For ACBD6, the ACB domain can supply the NMT substrate Myr-CoA and sequester the competitor Pal-CoA, but in vitro these functions are not necessary, since the ANK module alone is sufficient for the stimulation and protection of NMT activity [3]. Moreover, fusion of the ANK domain to another ACBD protein conferred the NMT-stimulatory property of ACBD6 to the chimera. These findings support the view that the conserved ACB domain provides no function essential to the non-conserved domain. However, several findings indicate that interactions between these two domains, which can perform their functions independently, might be essential to the functions of ACBD6 in vivo. In a mixture of the Myr-CoA donor and the Pal-CoA competitor, addition of a third, preferred acyl-CoA (C18:1-CoA), which saturates the ACB domain, or phosphorylation of serine residues in the ACB domain, enhanced the activity of the ACBD6/NMT complex [1][2][3]. Although deletion of ACB does not impair stimulation of NMT, substitution of ACB residues produced forms diminished in their ability to stimulate and protect NMT activity [1]. As a ligand of the ACB domain, acyl-CoA thus appears to act as a positive effector regulating the properties of the ANK domain.
The ACB domain provides the acyl chain for acyl-CoA-dependent processes, such as lipid acylation, but it may also act as an allosteric binding site regulating the functions of the non-conserved C-terminal motifs of the ACBD proteins. The dynamic binding properties for acyl-CoA, which are influenced by fatty acids, suggest that these characteristics are essential for the ACBDs' functions in an ever-changing environment of ligands differing in length, structure, and abundance [42]. The enhanced properties of the phosphorylated, ligand-bound ACBD6 form, which is likely the form present in the cell, might be essential to overcome interference by acyl-CoAs and fatty acids with the activity of the ACBD6/NMT complex [2]. The diversity of the processes affected by ACBD6 highlights its dual function as an acyl-CoA provider and as a regulator of acyl-CoA-dependent reactions controlling the lipid and protein composition of human cell membranes.
Author Contributions: Conceptualization, implementation, analysis, E.S. All authors wrote the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: Funding for this study was provided by a discretionary budget.
Institutional Review Board Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
Local Monomer Levels and Established Filaments Potentiate Non-Muscle Myosin 2 Assembly
The ability to dynamically assemble contractile networks is required throughout cell physiology, yet the biophysical mechanisms regulating non-muscle myosin 2 filament assembly in living cells remain poorly defined. Here we use a suite of dynamic, quantitative imaging approaches to identify deterministic factors that drive myosin filament appearance and amplification. We find that actin dynamics regulate myosin assembly, but that the actin architecture plays a minimal direct role. Instead, remodeling of actin networks modulates the local myosin monomer levels and facilitates assembly through myosin:myosin driven interactions. Using optogenetically controlled myosin, we demonstrate that locally concentrating myosin is sufficient to both form filaments and jump-start filament amplification and partitioning. By counting myosin monomers within filaments, we demonstrate a myosin-facilitated assembly process that establishes sub-resolution filament stacks prior to partitioning into clusters that feed higher-order networks. Together, these findings establish the biophysical mechanisms regulating the assembly of non-muscle contractile structures that are ubiquitous throughout cell biology.
Introduction
Non-muscle myosin 2 (NM2) is a cytoskeletal motor protein that builds bipolar filaments to engage actin filaments and generate contractile forces. The magnitude and orientation of these forces are highly tunable to regulate processes at the cell, tissue, and organism level (1). This adaptability across spatial and temporal scales requires active remodeling of actomyosin networks. Understanding the spatiotemporal mechanisms of how cells build these force-producing units is therefore critical.
NM2 filaments are dynamically assembled from NM2 monomers, which consist of two myosin heavy chains (MHC), two essential light chains (ELC), and two regulatory light chains (RLC). Each MHC consists of an N-terminal motor domain, a light chain-binding neck region, and a C-terminal alpha helix which dimerizes into a coiled-coil tail. The standard monomer-to-filament model of NM2 filament assembly begins with phosphorylation of RLC on Thr18/Ser19 (2), which drives the NM2 monomer from the folded, inactive 10S state to the unfolded, assembly-competent 6S state (3,4). Once unfolded, the coiled-coil tails readily associate in parallel and anti-parallel orientations to form a bipolar filament (5). Kinases from a variety of signaling networks phosphorylate the RLC to enhance NM2 filament assembly, with the dominant kinases being RhoA-activated Rho-associated coiled-coil kinase (ROCK1/2) and Ca2+/calmodulin-activated NM2 light chain kinase (MLCK) (6). In addition to phosphomodulation, in vitro studies demonstrated that NM2 filament assembly was enhanced in the presence of actin filaments (7), suggesting combinatorial contributions from both kinase signaling and actin networks.
To explore molecular details of NM2 filament assembly in living cells, it is important to capture data at the length and time scales of the interactions in question. Recent advances in light microscopy have provided the spatial resolution required to observe discrete NM2 filaments (∼300 nm in length) with the temporal resolution required to observe network assembly (8,9). These studies have added dynamic mechanistic insight to earlier static electron microscopy (EM) experiments (10,11), and demonstrated that the simple monomer-to-filament model is incomplete in cellular contexts. More specifically, we and others observed that once an initial NM2 filament is established by unknown mechanisms in the lamella of a migrating cell, it grows in intensity, and then "partitions" into a cluster of filaments or "expands" into a stack of filaments (8,9). These clusters/stacks then merge with the higher-order actomyosin networks within the cell (stress fibers, transverse arcs, etc.). Similar progressions have been observed in contractile ring assembly, suggesting a common and universal mechanism for initiating and amplifying NM2 networks (12).
Despite these technology-enabled advances, we currently lack an experimentally-supported working model for how a nascent NM2 filament is precisely established in space and time within a cell.
We also do not understand how nascent NM2 filaments contribute to the higher-order network assembly required for physiological levels of contraction. Here we show that leading edge retractions are better predictors of NM2 filament assembly than canonically proposed calcium and RhoA signaling events. Similarly, we find that actin dynamics regulate NM2 filament assembly, decreasing assembly when actin dynamics are stalled, and amplifying assembly following the breakdown of actomyosin structures elsewhere in the cell.

[Displaced Figure 1 legend: (b) kymograph in which orange arrows mark calcium sparks and magenta arrows mark NM2 filament appearance; scale bar = 10 µm. (c) Scarlet-NM2A (blue) and RhoA biosensor (grey) imaged with Zeiss Airyscan 880; kymograph ROI indicated by a red dotted line; in the kymograph, orange arrows mark active RhoA signal and magenta arrows mark NM2 filament appearance; scale bar = 10 µm. (d) Sum intensity projection of a z-stack with 3-frame time averaging (frames collected every second); orange-to-red gradient dotted lines indicate a wave-like retraction and magenta circles indicate subsequent NM2 filament appearance; scale bar = 2 µm. (e) Scarlet-NM2A cells transiently expressing mEGFP-VASP imaged with Zeiss 880 confocal every second; sum intensity projection with NM2 in blue and VASP in purple; grey dotted lines connect leading edge retractions with NM2 filament appearance; scale bar = 10 µm.]

Despite the clear
role for actin dynamics in NM2 assembly, we did not observe an actin ultrastructure that is prognostic of NM2 filament formation. Instead, we find that by locally increasing myosin concentration, we can assemble NM2 filaments and initiate filament amplification and further partitioning. Finally, using molecular standard candles, we count the number of myosin monomers in filaments and show that monomers are more likely to add to existing myosin clusters than to form nascent filaments. We also find that partitioning myosin structures typically already contain multiple filaments, suggesting that amplification precedes partitioning. Together these findings clarify the dynamics of NM2 filament assembly within cells.
Results
Leading edge retractions precede nascent NM2 filament appearance. To better understand the precise events that precede nascent NM2 filament assembly in cells, we initially tested spatiotemporal correlation of filament appearance with known upstream biochemical modulators.
Although we detected both calcium and RhoA activity in the lamella, neither signaling cascade preceded NM2 filament appearance with any apparent precision. When imaging the calcium biosensor, flashes often filled the entire lamella. Occasionally, an NM2 filament appearance followed a calcium flash, but many calcium flashes did not result in filament appearances (Fig.1B). In contrast, active-RhoA did co-localize with NM2 filaments, but after filament appearance instead of preceding it (Fig.1C). Interestingly, active-RhoA often flanked the growing NM2 clusters, reminiscent of NM2-dependent RhoA activation observed in other systems (16). Therefore, while RLC kinases are undoubtedly contributing to NM2 filament assembly, our imaging failed to observe spatiotemporal precision in their contribution to initiating assembly events.
To identify additional factors that might dictate nascent assembly, we assessed lamellar NM2 behavior in polarized fibroblasts. Similar to previous reports (17,18) we often find NM2 filament appearance is preceded by a leading edge retraction (Fig. 1D, Movie 2). To more carefully observe this correlation, we imaged mScarlet-NM2A fibroblasts expressing EGFP-VASP to visualize the leading edge. Kymographs drawn through the leading edge illustrated multiple retractions of the cell edge that led to the subsequent NM2 filament appearance in the lamella (Fig 1E).
Actin dynamics facilitate nascent NM2 filament appearance. To directly test the role of leading edge retractions and actin dynamics in NM2 filament assembly, we adopted a drug cocktail consisting of jasplakinolide and latrunculin (JL) that arrests actin dynamics by inhibiting both polymerization and depolymerization (19). First, we confirmed that JL administration stalls leading edge dynamics in the fibroblasts within seconds (Fig. 2A-B; Movie 3). We then quantified the rate of NM2 filament appearance in the lamella before and after pharmacological perturbations (Fig. 2B-C). We found the relative appearance rate did not change in cells treated with DMSO, but significantly decreased upon addition of the actin-stalling JL cocktail (Fig. 2C). This demonstrates that while NM2 filament assembly can occur in their absence, leading edge retractions and dynamic actin aid in the process.

NM2 filaments assemble in a wide array of actin structures. We hypothesized that the underlying actin architecture (alignment, density, etc.) might provide additional cues to facilitate NM2 filament assembly. To better understand the lamellar actin architecture where nascent NM2 filaments are forming, we performed correlative light and platinum replica electron microscopy (PREM; Fig. 3) (20,21). We manually unroofed migrating GFP-NM2A fibroblasts (Fig. 3A) and imaged with both super-resolution fluorescence and platinum replica electron microscopy (Fig. 3B-E) (22)(23)(24). Within an unroofed lamella, we observe a range of fluorescent NM2A structures, from low intensity doublets with two distinct puncta ∼300 nm apart (consistent with a bipolar filament or sub-resolution stack; Fig. 3F), to larger high intensity clusters with many puncta indicating they contain many NM2A filaments (Fig. 3B-C,E).
Due to the similar diameter of NM2 bipolar filaments relative to actin filaments, and the overall density of the actin cytoskeleton, we could not distinguish NM2 bipolar filaments in the PREM images, similar to previous reports (11). However, we could observe the local actin architecture where NM2 structures were present and not present. First, the NM2 structures exist in a diverse array of lamellar actin network architectures (Fig. 3D,G). This includes both seemingly disorganized actin and higher density bundled actin (Fig. 3G). Second, while the biggest NM2 clusters typically overlapped with regions of bundled actin, there were no obvious underlying actin features prognostic of low intensity NM2 doublets, with neighboring actin regions appearing indistinguishable from NM2-containing actin regions. Therefore, while filamentous actin supports enhanced NM2 assembly and there are likely structural details with the filamentous actin present here that are beyond the resolution of our PREM imaging, we do not observe specific actin architectures that might be facilitating nascent NM2 assembly events.
Globally elevating NM2 monomer availability initiates filament assembly. Given the lack of an identifying actin ultrastructure to predict NM2 filament formation, we sought to determine whether cytoskeletal dynamics could instead be regulating myosin monomer availability. We used primary MEFs from EGFP-NM2A knock-in mice (25) transduced with a lentiviral fluorescent probe for filamentous actin, FTractin-3x-mScarlet (26,27), to monitor changes in cell morphology and actin architectures (Movie 6). In long-term, time-lapse imaging of migratory cells we observed a qualitative correlation between NM2 filament appearance and tail retraction events (Fig. 4A, Movie 7). Consistent with previous results (9), we also saw that treatment of cells with ROCK inhibitor (Y27632) resulted not only in disassembly of actomyosin structures, but also in robust assembly of nascent NM2 filaments in the lamella (Fig. 4B, Movie 8). We therefore hypothesized that global monomer availability in the cytoplasm, whether altered through changes in morphology or pharmacological perturbation, regulates NM2 filament assembly. To test this hypothesis in the absence of actin dynamics, we treated cells simultaneously with JL and the ROCK inhibitor (JLY; Fig. 4C, Movie 9) (19). This JLY treatment not only rescued NM2 filament appearance, but increased it relative to the control (Fig. 4D). This demonstrates that a global increase in monomer levels upon stress fiber disassembly is sufficient to initiate NM2 filament assembly in the absence of actin dynamics.
Locally increasing NM2 monomer concentration initiates filament assembly. To directly test if artificially enhancing local NM2 monomer levels in a cell is sufficient to initiate NM2 filament assembly independent of upstream signaling, we engineered an improved light-inducible dimer (iLID) optogenetic system to optically recruit NM2 monomers to the cortex of migrating fibroblasts (28). We expressed a membrane-anchored LOV2-SsrA peptide in our Halo-tagged NM2A knock-in fibroblast cell line, along with a recruitable SspB-mApple-NM2A construct that can bind anchored SsrA upon blue light activation (Fig. 5A). We then imaged the lamella while locally stimulating with blue light in a region devoid of NM2A filaments (Fig. 5B, Movie 10). Within minutes, the photo-recruitable NM2A began accumulating in the stimulated region, followed shortly thereafter by the endogenous NM2A (Fig. 5C). Punctate filamentous structures containing a mixture of recruitable and endogenous NM2A continued to enrich and flow retrograde out of the stimulated region. These experiments reveal that locally increasing NM2 monomer concentration is sufficient to initiate filament formation, and that established NM2 filaments can enhance local filament assembly. [bioRxiv preprint, this version posted April 27, 2023; made available under a CC-BY-NC-ND 4.0 International license.] Considering we could faithfully observe both NM2 filament initiating events and the enhancement of established NM2 filament clusters, we next sought to quantify their relative contributions to total lamellar filament assembly. Specifically, we asked: if a new NM2 filament forms in the lamella, what is the likelihood that it initiates a new cluster versus builds into an existing cluster?
To do so, we adopted a molecular counting workflow using "standard candles" to build a standard curve of fluorescence and subsequently interpolate or extrapolate the number of NM2 monomers present in structures within the cell.
We used a membrane-anchored protein nanocage that self-assembles with 60 subunits when expressed in cells (Fig. 6A) (29,30). By using subunits with an EGFP on either one terminus (EGFP-60mer) or both termini (EGFP-120mer), we created two known standards. We created a third standard by expressing EGFP-Actin (Fig. 6B). Each standard candle was separately expressed in fibroblasts, where we segmented and quantified the fluorescent intensity of individual candles (Fig. 6C, Supp. Fig. 1A). We then created a standard curve by plotting the mean fluorescent intensities of each standard distribution as a function of the number of EGFP molecules present in the individual structures (Fig. 6D). Fitting a line to these data demonstrated a highly linear relationship between the fluorescence intensity of a structure and the known number of EGFP molecules present.
Using identical imaging settings, we then imaged endogenous EGFP-NM2A in fibroblasts from homozygotic knock-in mice. In these cells, every MHC 2A is tagged with an EGFP, and every NM2A monomer contains two EGFPs. In vitro studies and theoretical models demonstrate mature NM2A filaments consist of ∼30 monomers (5,31,32). Therefore, one mature NM2 filament would contain ∼60 EGFP molecules while two mature filaments would contain ∼120 EGFPs (Fig. 6E), conveniently aligning with our known standards. Within lamellar regions (Fig. 6F), we quantified two parameters: (1) the number of nascent NM2 filament assembly events that initiate new clusters within a given time and (2) the fluorescent intensity increase for all NM2 adding to existing clusters in the same region during the same time ( Fig. 6G-H, Movie 11). By converting the fluorescent intensity increase in all clusters to the number of NM2 filaments using our standard curve, we could directly compare the number of nascent filament assembly events (new clusters) to the number of filaments assembling into existing clusters. We find that assembling NM2 is ∼100 times more likely to incorporate into existing structures than to form nascent clusters in the lamella, demonstrating the dominant contribution of NM2-facilitated assembly to overall assembly.
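The calibration and conversion workflow described above can be sketched as a short calculation. This is a minimal illustration, not the study's analysis code: the intensity values, function names, and calibration numbers below are placeholders, while the constants of two EGFPs per monomer (homozygous knock-in) and ~30 monomers per mature filament come from the text.

```python
# Minimal sketch of the standard-candle calibration: fit a line to the mean
# intensity of each standard (x = known EGFP copies per structure), then use
# it to convert NM2 cluster intensity gains into filament-equivalents.
# All intensity values below are illustrative placeholders, not measured data.

standards = [(1, 210.0), (60, 12600.0), (120, 25200.0)]  # (EGFP copies, a.u.)

def fit_line(points):
    """Ordinary least-squares fit: intensity = slope * n_egfp + intercept."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return slope, my - slope * mx

slope, intercept = fit_line(standards)

EGFPS_PER_MONOMER = 2       # homozygous knock-in: both heavy chains tagged
MONOMERS_PER_FILAMENT = 30  # mature NM2A filament, per in vitro estimates

def filaments_from_intensity(delta_intensity):
    """Convert an intensity increase on a cluster into NM2 filament-equivalents."""
    n_egfp = (delta_intensity - intercept) / slope
    return n_egfp / (EGFPS_PER_MONOMER * MONOMERS_PER_FILAMENT)
```

With these placeholder numbers, a cluster gaining 12600 a.u. corresponds to ~60 EGFPs, i.e. one additional mature filament. Summing filament-equivalents added to existing clusters and comparing that sum against the count of nascent appearance events in the same region gives the nascent-versus-cluster ratio reported above.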
NM2 structures amplify to sub-resolution stacks before partitioning. Given the importance of cluster growth to total lamellar NM2 assembly, and the derivation of clusters from a nascent NM2 filament, we sought to better define molecular mechanisms that enable addition of NM2 filaments to existing structures. Previous high resolution imaging studies observed the process by which a nascent NM2 filament grows in intensity before partitioning into multiple filamentous structures, a process that repeats sequentially to enhance cluster size as clusters mature into higher-order networks (Fig. 7A) (8,9). Two non-mutually exclusive models were proposed, in which a single mature NM2 filament is partitioned into two immature filaments ("Single Filament Partitioning") or a mature filament recruits additional monomers/filaments to establish multiple filaments prior to partitioning ("Multi-Filament Partitioning"; Fig. 7B).
Using our molecular counting approach, we could now determine at which point during partitioning multiple NM2 filaments are present. We used super-resolution Airyscan imaging to resolve the number of GFP-tagged NM2 head groups within a structure. We classified two-puncta bipolar structures as pre-partitioning, three-puncta structures as mid-partitioning, and four-puncta structures as post-partitioning (Fig. 7C-D, Movie 12). In the fixed imaging results, the identified two-puncta structures included not only structures just before partitioning, but also more nascent bipolar structures. Surprisingly, the vast majority of these two-puncta structures already contained multiple NM2 filaments (Fig. 7E-F). This suggests a rapid amplification into sub-resolution filament stacks prior to spatial segregation of the NM2 filaments. The live results afforded the opportunity to isolate the exact frame prior to partitioning for each identified structure. Similar to the fixed results, most of the pre-partition (two-puncta) data contained multiple NM2 filaments. This number increased upon detectable partitioning and in post-partitioning states (Fig. 7G). The higher number of NM2 monomers counted in the pre-partition data for the live experiment compared to the fixed results is likely due to our ability to identify partitioning events live and therefore filter out the more nascent two-puncta structures that were not yet partitioning.
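Deciding whether an optically resolved structure holds one filament's worth of monomers or several, as in the partitioning analysis above, reduces to a simple division once the standard-curve calibration is known. The calibration constant and function names below are hypothetical placeholders; the two-EGFPs-per-monomer and ~30-monomers-per-filament constants come from the text.

```python
INTENSITY_PER_EGFP = 210.0   # hypothetical slope from a standard-candle curve (a.u.)
EGFPS_PER_MONOMER = 2        # homozygous EGFP-NM2A knock-in
MONOMERS_PER_FILAMENT = 30   # approximate, from in vitro studies

def monomers_in_structure(intensity):
    """Convert a background-subtracted structure intensity to NM2 monomer count."""
    return intensity / INTENSITY_PER_EGFP / EGFPS_PER_MONOMER

def is_subresolution_stack(intensity):
    """More than one mature filament's worth of monomers implies a filament stack."""
    return monomers_in_structure(intensity) > MONOMERS_PER_FILAMENT
```

Under this sketch, a two-puncta structure whose monomer count substantially exceeds ~30 would be scored as a sub-resolution stack rather than a single bipolar filament.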
These data demonstrate that optically resolved two-puncta structures, which have previously been identified as single NM2 filaments, are actually stacks of filaments in register. Partitioning is thus the separation of multiple NM2 filaments from one another, as opposed to the splitting of a single mature filament. Collectively, we therefore suggest that cluster growth and overall NM2 assembly are largely driven by NM2-facilitated partitioning.
Discussion
The demand for local force generation in numerous myosin 2-dependent processes dictates that NM2 filament assembly and amplification is dependent on its local environment. We propose here a mechanistic model where local myosin monomer availability enhances local NM2 filament assembly (Fig. 8). This myosin enrichment can be achieved strictly through dynamic reorganization of the actin cytoskeleton, including concentrating monomer in the lamella through collapse of leading edge protrusions (Fig. 8A-B) or globally elevating monomer availability via disassembly of actomyosin stress fibers in the rear of the cell during migration (Fig. 8A,C,J-K). At the heart of this model lies a reliance on cumulative actin-myosin and myosin-myosin interactions. While the kinetics of these many transient interactions will vary, the more local interactions there are, the more likely it is that sufficient monomers will dwell in a given area long enough to create a stable but immature NM2 filament. Once a nascent NM2 filament is established, it acts as a site of enhanced assembly, suggesting that myosin:myosin interactions are the main drivers of these amplification processes (Fig. 8D-E). The resulting clusters (Fig. 8F) of NM2 filaments continue to spatially segregate or partition (Fig. 8G), creating additional local assembly sites that further perpetuate the amplification and generation of force (Fig. 8H-I).
The first component of our model elicits a role for actin dynamics in dictating local myosin 2 assembly. Many previous studies have focused on the interaction between NM2 filaments and actin filaments, with myosin acting as either a motor or a crosslinker (33). In vitro and theoretical studies have shown that the presence of filamentous actin enhances NM2 filament assembly (7,34). More recent cellular studies have reported actin-dependent roles in NM2 filament alignment, expansion, and partitioning (8,9,35). Here we reframe the perspective to focus on how actin dynamics impact myosin assembly. Our observations that leading edge and tail retractions precipitate NM2 filament formation suggest a role for the actin cytoskeleton in modulating myosin monomer concentration. Specifically, we speculate that leading edge retractions serve to locally concentrate myosin monomers by reducing the local actin pore size (36), similar to other models for actin-dependent restricted diffusion (37,38). In addition to a reduced physical space, the increase in actin density would facilitate local NM2 retention by acting as a kinetic trap, providing a plethora of binding sites for free monomer to interact with. The ideal monomer to engage this actin is the 6S monomer transiently unfolded via phosphorylation of the RLC at Thr18/Ser19. However, even the inactive, folded monomer (10S) can bind actin, albeit with reduced affinity (39)(40)(41). Similarly, tail retraction and pharmacological disruption of the actomyosin cytoskeleton could spur additional NM2 filament assembly through release of NM2 monomer in filaments previously assembled in these structures. This sudden influx of NM2 monomers globally elevates the monomer concentration across the cell, facilitating nascent assembly by increasing the likelihood of NM2 monomer interactions. In each case the dynamics of the actin network serves to modulate NM2 concentrations that drive assembly of NM2 filaments. 
The second overarching component of our model elicits myosin:myosin interactions. In this regard, monomer-monomer, monomer-filament, and filament-filament interactions should be considered. It is known that multiple tail interactions between several monomers within a NM2 filament help to stabilize the structure (31). While the tail interactions surely dominate, numerous other interactions have been identified within the myosin holoenzyme (head-head, head-tail, light chain-tail, etc.) (42). It is quite possible that these low affinity interactions also occur between NM2 monomers and filaments to contribute to the local enrichment of myosin. Additionally, NM2 filament stacks and clusters, concatenation, and partitioning have all been reported through a combination of in vitro and cellular studies (8,9,43). This indicates myosin-myosin interactions beyond those occurring between monomers within a filament. Based on FRAP studies, the half-time of recovery is typically within tens of seconds, indicating rapid NM2 filament exchange kinetics (44,45). We hypothesize that once a nascent NM2 filament is established, transient interactions and rapid exchange kinetics enrich the local monomer concentration to increase the probability of additional filament assembly events (46). We speculate this effective diffusion trap is the basis for the myosin-facilitated assembly that we observe in our data. Importantly, this model is not mutually exclusive with the potential for a nascent NM2 filament to enhance filament assembly via mechanosensitive feedback systems that alter local actin to favor assembly or initiate signaling events that lead to canonical myosin activation via RLC phosphorylation (16,(47)(48)(49). Indeed, our observation of active Rho flanking established myosin clusters is supportive of parallel mechanisms to amplify myosin filaments.
We propose that these mechanisms contribute to the rapid assembly and amplification of NM2 filaments to efficiently produce physiological levels of contraction in polarized migration. While it is most straightforward to experimentally observe these myosin dynamics in the lamellar regions of migrating cells, we believe this to be a universal mechanism of rapidly building contractility. Two additional areas of biology that have clear evidence for, and use of, rapid myosin filament assembly and amplification are the contractile ring (50) and adherens junction maturation (51). In both contexts, higher-order myosin networks are observed and must develop rapidly to achieve the requisite contractility. The mechanisms that we outline here could help to drive the NM2 filament amplification in these dense areas. While additional work will be needed to confirm these mechanisms at work in dense regions, it is clear that in addition to biochemical regulation, myosin filament assembly and amplification are sensitive to biophysical constraints.
Cells were transfected for CRISPR knock-in, lentivirus preparation, and the calcium and Rho biosensor experiments using the LipoD293 (SignaGen #SL100668) system. [ADD DNA CONC, LIPOD AMOUNT, CELL AMOUNT] Cells were transfected for molecular counting experiments using the Neon Electroporation system (Thermo Fisher Scientific) with 2 x 20 ms pulses of 1350 V and 5 µg plasmid DNA per 400k cells in a 100 µL reaction. For calcium activity experiments, cells were transfected with GCaMP7s (Addgene Plasmid #104463) (14), plated 24-48 hours post-transfection and 12-24 hours before imaging, and then cells of average brightness were imaged.
For RhoA activity experiments, cells were transfected with GFP-RhoA-AHPH (AKA GFP-RhoBio) (Addgene Plasmid #68026) (15) using the same approach as described for the calcium experiments. For the molecular counting experiments, cells were transfected with mem-EGFP-60mer or mem-EGFP-120mer protein nanocages, plated 48 hours post-transfection and 18-24 hours before imaging. Cells transfected with EGFP-Actin (Addgene plasmid #31502) (52) for the molecular counting experiments were transfected 4-6 hours prior to imaging and plated immediately to achieve extremely low expression for single molecule identification. Primary MEFs used for the long term cell migration and JLY experiments were infected with viral media from HEK cells transfected with 3x-mScarlet-FTractin (Addgene Plasmid #112960) (53). Once stably expressing 3x-mScarlet-FTractin, the cell line was re-stored and used within 10 passages. For the optogenetics experiment, a stable cell line was created using pLV-Stargazin-mTurquoise2-iLID (Addgene Plasmid #161001) (54) lentivirus and then transiently transfected with SspB-mApple-NM2A using the LipoD293 transfection system. Cells were plated 24-48 hours post-transfection and 12-24 hours before imaging.
Inhibitors were used at the following concentrations: Y-27632 (EMD Millipore #68801), 10µM; Latrunculin B (EMD Millipore #428020), 1.25µM; Jasplakinolide (EMD Millipore #420127), 2µM. Drug treatments were prepared as a 2X solution in L-15 imaging media and then added to the wells at a 1:1 dilution while imaging. All cell lines were tested for mycoplasma once a month. Generation of CRISPR knock-in cell lines. Halo-NM2A and mScarlet-NM2A knock-in cells were derived from JR20 parental fibroblasts using CRISPR/Cas9. We generated pSpCas9(BB)-2A-Puro (PX459) V2.0 (Addgene Plasmid #62988) with target sequence AAACTTCATCAATAACCCGC using established protocols (55). To generate donor plasmids, pUC57 was digested with EcoRI and StuI and purified. A four-piece Gibson assembly was then performed with three gBlocks (IDT): 1) a 794 bp 5' HDR arm of genomic sequence immediately upstream of the endogenous start codon, 2) the mScarlet fluorophore or HaloTag with an 18 amino acid GS-rich linker, and 3) an 802 bp 3' HDR arm of genomic sequence immediately downstream of the endogenous start codon with a silent PAM site mutation. JR20 cells were transfected with donor and target-Cas9 plasmids and single-cell sorted at 5-10 days post-transfection. Individual clones were evaluated for knock-in via western blotting and microscopy. Clones used in this study include Halo-NM2A clone 2 (H2A2) and mScarlet-NM2A clone 3 (S2A3). Molecular cloning. To engineer photo-recruitable NM2A, we introduced an SspB upstream of mApple in pmApple-NM2A. After digesting pmApple-NM2A with AgeI, an SspB PCR product with flanking HDR arms was introduced via Gibson cloning in frame with mApple-NM2A. Lentiviral Stable Cell Lines. HEK293T cells were transfected using LipoD293 (Signagen #SL100668) and the accompanying lentivirus generation transfection protocol. Briefly, cells were plated in a 6cm dish and grown to 80-90% confluence. Approximately one hour before transfection, media was changed on the cells.
Transfection complexes were created with LipoD293, the packaging plasmid psPAX2 (Addgene Plasmid #12260) and envelope plasmid pMD2.G (Addgene Plasmid #12259), and the lentiviral construct, and added dropwise to the dish. Media was changed at 24 hours post-transfection and collected at 48 and 72 hours. Viral media was spun down at 1000g for 5 minutes and then filtered with a 0.45µm filter. A 50% confluent 6cm dish of cells was then infected with the 48 hour viral media. Viral media was removed after 24 hours and cells were stored for future use after 3-5 days.
pMD2.G was a gift from Didier Trono (Addgene plasmid #12259; http://n2t.net/addgene:12259; RRID:Addgene 12259). Imaging. Calcium and Rho: Calcium and Rho imaging was performed on a Zeiss 880 Airyscan with a 63x 1.4 NA objective in the "Airyscan Fast" acquisition mode. Time lapse images of NM2 and calcium or Rho biosensors were acquired at a 2.5 second or 10 second frame interval, respectively. Long term migration imaging was performed on a 3i confocal spinning disk with a 40x 1.3 NA objective. Time lapse images of NM2 (488) and actin (561) were acquired at 30% laser power and 50msec exposure per channel. Between 12 and 15 positions were acquired every minute for eight hours. Zeiss Definite Focus.2 was used to focus between timepoints and positions.
JL and JLY: JLY experiments were performed on a 3i confocal spinning disk with a 63x 1.4 NA objective. Time lapse images of NM2 (488) and actin (561) were acquired at 30% laser power and 50msec exposure per channel. Four positions were acquired every five seconds for ten minutes. Zeiss Definite Focus.2 was used to focus between timepoints and positions. After 5 minutes, drug cocktails prepared at a 2X concentration were added 1:1 with the media in the well while imaging to ensure rapid treatment. Cells were imaged until they ripped apart from the drug treatment, and then an equal number of frames before and after treatment were used for analysis.
Molecular counting: For all molecular counting experiments, standard candle controls and EGFP-NM2A imaging were performed each day within 6 hrs of each other using identical laser and acquisition settings. Live molecular counting experiments were performed on a Zeiss 880 Airyscan with a 63x 1.4 NA objective in the "Airyscan Fast" acquisition mode. An Argon laser at 0.9% power with a 0.83µsec pixel dwell and 4x line averaging was used for all images acquired for the standard curve and for live imaging of NM2 to be counted. Time lapse images of NM2 were acquired with identical laser power, pixel dwell, and averaging at a 5 second frame interval.
Fixed molecular counting experiments were performed on a Zeiss 880 Airyscan with a 100x 1.4 NA objective in the "Airyscan Fast" acquisition mode. An Argon laser at 9% power with a 0.98µsec pixel dwell and 4x line averaging was used for all images acquired for the standard curve and imaging of NM2 to be counted. Time lapse images of NM2 were acquired with identical laser power, pixel dwell, and averaging at a 5 second frame interval.
Correlative Light And Electron Microscopy. Round 25mm coverslips were squeaky-cleaned (56) and plasma cleaned before coating with 20µg/mL Human Plasma Fibronectin (EMD Millipore #FC010) at 37°C for 1 hour. After fibronectin coating and subsequent PBS washes, PDMS strips, approximately 5mm in width and 25mm in length, were placed to bisect the circular coverslip. Cells were plated at 75% confluence and incubated overnight. 20-24 hours after plating, PDMS strips were removed and the media changed twice to remove any lifted cells. Cell unroofing was performed 12-18 hours after PDMS strip removal as described previously (22,23). Briefly, coverslips were individually taped onto a 6cm petri dish and covered with PBS. They were then rinsed with intracellular buffer (70mM KCl, 30mM HEPES maintained at pH 7.4 with KOH, 5mM MgCl2, 3mM EGTA) and cell edges were 'glued' down with a 30 second treatment of 0.08% Poly-L-Lysine in intracellular buffer. Cells were then manually unroofed by spraying 1 mL of 3% paraformaldehyde (Electron Microscopy Sciences #15710) and 1% glutaraldehyde (Electron Microscopy Sciences #16216) in intracellular buffer through a 25 gauge needle along the line of the PDMS 'wound' from a distance of 1cm. Fresh fixative was added to fully submerge the coverslips, which were fixed for 30 minutes at room temperature. Coverslips were then rinsed with PBS, removed from the petri dishes, and incubated in 1:50 phalloidin-555 (Thermo Fisher #A34055) face down on parafilm for 30 minutes at RT. Immediately after phalloidin staining, coverslips were carefully flipped over, avoiding any sliding, and placed in a PBS wash. The coverslips were then mounted in a magnetic chamber in PBS and immediately imaged on the Airyscan 880. Cells that appeared unroofed based on phalloidin intensity were selected within a 1mm diameter and the coverslip was marked for that region with a diamond tip objective.
Within the marked region, NM2 and actin were imaged at 63x 1.4 NA oil in "SR" mode and then with a 40x 1.3 NA oil tilescan in "Fast" mode. Coverslips were then imaged using an EVOS phase contrast microscope at 20x, 10x, and 4x. Images were then organized to create a map back to the same imaged ROI after platinum replica preparation. After fluorescence imaging, coverslips were placed in fixative and flipped over onto a glass slide. The coverslips were immobilized with epoxy resin, sealed with VALAP, and shipped overnight at 4°C to the Taraska lab at NIH for the PREM workflow.
Platinum replica sample preparation was performed as described in (22,23). Briefly, coverslips were placed in 0.1% tannic acid for 20 minutes, rinsed 4x in water, placed in 0.1% uranyl acetate for 20 minutes, rinsed 2x in water, and then dehydrated gradually in increasing concentrations of ethanol (15%, 30%, 50%, 70%, 80%, 90%, then rinsed 3x in 100% ethanol) prior to critical point drying (Tousimis 895). After critical point drying, the coverslips were trimmed down with a diamond scriber. Samples were rotary coated with a 2-3 nm coat of platinum-carbon at a 17 degree angle, then 5-6 nm of carbon at a 90 degree angle (RMC 9010). The coverslips were imaged with 20x phase contrast light microscopy to find the cells that were previously mapped during fluorescence imaging. The coverslip was then placed face up on the air/water interface of 5% hydrofluoric acid until the coverslip dropped into solution, leaving the platinum replica sitting on the surface. The replica was rinsed with water and lifted with a 4 mm circular loop onto a formvar/carbon coated 75-mesh copper grid (Ted Pella 01802-F). The grid was again imaged with 20x phase contrast light microscopy to confirm that replica transfer went smoothly and to identify the area of interest on the grid. Transmission electron microscopy (TEM) was performed with montaging on a FEI Tecnai T12 equipped with a Gatan Rio-9 camera and SerialEM freeware (57).
TEM microscope was maintained by Haotian Lei and Yanxiang Cui as part of the NIDDK cryo-EM facility at the NIH Bethesda campus.
Correlation of fluorescence and platinum replica electron micrographs was performed using Matlab correlation software (23). Major structures in the actin channel (e.g. large bundles or unique branching) were used to correlate images, with a minimum of 20 points used to correlate. Image Analysis. Calcium, Rho, Y27, and Tail Release: Qualitative analysis of NM2 filament appearance was conducted in FIJI by making sum time projections for every 10 frames. Kymographs were generated in FIJI using the KymoResliceWide plugin with a line width of 11.
JL and JLY: Qualitative analysis of NM2 filament appearance was conducted in FIJI by making sum time projections for every 10 frames. Quantification of NM2 filament appearance for JLY analysis was performed manually in FIJI by identifying puncta appearance in the lamella over the course of all of the frames. Average appearance rates were calculated for all frames acquired after drug treatment and for the same number of frames pre-treatment, and normalized to the pre-treatment rate for each cell to compare paired before and after treatment conditions. Molecular counting: Image analysis was performed using custom-written python analysis software (https://github.com/m-a-q or https://github.com/OakesLab). Images were sum-projected in Z and then cell masks were generated for each image and, for time series, each frame. For candle images, single frames were analyzed and local peaks (or puncta) identified within the cell mask. The intensity of a 14 by 14 pixel box around each peak was used to filter out any candles that were side by side. The intensity of the top and bottom z slices at the X and Y positions where peaks were identified was used to filter out candles that were not fully within the Z range that was acquired. Once unsuitable peaks were filtered out, the remaining list of peak positions was used to quantify the sum intensity of a 14 by 14 pixel box around each peak's coordinates. Candle intensities were then plotted in histograms and then against the number of GFPs present in the molecules. The intensity of a single GFP was found from linear regression analysis.
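The candle-calibration steps above (isolated-peak selection, 14 by 14 pixel box sums, and linear regression to extract a single-GFP intensity) can be sketched as follows. This is a minimal illustration with synthetic data; the function names and the peak-finding details are assumptions, not the published analysis code.

```python
import numpy as np
from scipy import ndimage, stats

def candle_intensities(image, mask, box=14, thresh=0.0):
    """Locate isolated local maxima inside the cell mask and sum a
    box-by-box pixel neighborhood around each one (candle intensity)."""
    local_max = ndimage.maximum_filter(image, size=box)
    peaks = (image == local_max) & (image > thresh) & mask
    half = box // 2
    sums = []
    for y, x in zip(*np.nonzero(peaks)):
        # skip peaks whose box would fall off the image edge
        if half <= y < image.shape[0] - half and half <= x < image.shape[1] - half:
            sums.append(image[y - half:y + half, x - half:x + half].sum())
    return np.array(sums)

def single_gfp_intensity(mean_candle_sums, gfp_copy_numbers):
    """Slope of candle intensity vs. GFP copy number estimates the
    intensity of one GFP."""
    return stats.linregress(gfp_copy_numbers, mean_candle_sums).slope

# synthetic demo: one bright candle, and 60mer/120mer calibration points
demo_img = np.zeros((50, 50))
demo_img[25, 25] = 100.0
demo_sums = candle_intensities(demo_img, np.ones((50, 50), dtype=bool))
i_gfp = single_gfp_intensity(np.array([300.0, 600.0]), np.array([60, 120]))
```

With only two calibration points the regression is exact; in practice the per-candle sums would be histogrammed per nanocage species before fitting, as described above.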
Images of NM2 were analyzed using trackpy (https://github.com/soft-matter/trackpy) (58) to first identify NM2 filament clusters as they appear and flow back with retrograde flow. A 35 by 35 pixel box around the particle centroid was summed to quantify the fluorescence intensity, which was then divided by the intensity of a single GFP and then divided by 2 to get the number of NM2 monomers present in the structure at any given frame. This was compared with the rate of track appearance from the same trackpy analysis. For partitioning analysis, individual tracks were further tracked to identify local peaks within the growing cluster to count the number of NM2 head groups that could be resolved. Based on the number of resolvable head groups, we determined the partitioning state of the structure. Statistical analysis. To compare NM2 filament appearance rates (Fig. 2C, Fig. 4D), we used a Wilcoxon matched-pairs signed-rank test comparing 'pre' and 'post' frames for each cell. Statistical analysis was performed using Prism (GraphPad). p-values less than 0.05 were considered significant.
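The monomer conversion (cluster intensity ÷ single-GFP intensity ÷ 2) and the paired pre/post comparison described above can be written out directly. This is a hedged sketch with invented numbers; only the arithmetic and the Wilcoxon matched-pairs test mirror the text.

```python
import numpy as np
from scipy.stats import wilcoxon

def nm2_monomers(cluster_box_sum, single_gfp_intensity):
    # two fluorescently tagged heavy chains (two GFPs) per NM2 monomer
    return cluster_box_sum / single_gfp_intensity / 2.0

def paired_appearance_test(pre_rates, post_rates):
    """Normalize each cell's post-treatment filament appearance rate to its
    own pre-treatment rate and run a Wilcoxon matched-pairs signed-rank test."""
    pre = np.asarray(pre_rates, dtype=float)
    post = np.asarray(post_rates, dtype=float)
    normalized = post / pre              # pre-treatment normalizes to 1.0
    stat, p = wilcoxon(pre, post)        # matched pairs, per cell
    return normalized, p

# invented demo values: 1000 a.u. cluster, 5 a.u. per GFP; 6 paired cells
n_monomers = nm2_monomers(1000.0, 5.0)
norm, p_value = paired_appearance_test([2.0, 3.0, 4.0, 2.5, 3.5, 4.5],
                                       [4.0, 6.0, 8.0, 5.0, 7.0, 9.0])
```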
Intensity histograms (Fig. 6C) were fit to Gaussians, and linear regression analysis was used on the calibration curves (Fig. 6D, Fig. 7E) to determine the value of a single GFP. Statistical analysis was performed using the stats module from scipy. Data availability. The data that support the findings of this study are available upon reasonable request from the corresponding author [Jordan Beach].
Tunnelled haemodialysis catheters in central Free State: Epidemiology and complications
Background: End-stage renal disease (ESRD) is a disease with profound impact on the patient, health system and economy. Tunnelled haemodialysis catheters (TDC) are amongst the most common dialysis methods. It has been established internationally that certain demographic descriptors and aetiologies carry an increased risk of complications. There is a dearth of epidemiological profiling of ESRD patients with TDC in South Africa. Objective: To establish the epidemiological profile of patients who received TDC and to establish the complication rate, with the goal of demonstrating associations between the epidemiological profiles and complications. Method: This was a retrospective study of all patients who received TDC in an Academic Hospital Interventional Radiological Suite over a period of 60 months between 01 March 2011 and 29 February 2016. Results: A total of 179 patients received 231 catheters. The majority of patients were male (58.7%) and 35.8% of the patients resided in Mangaung. The leading aetiologies of ESRD included hypertensive nephropathy (43.6%), primary glomerular disease (17.3%) and HIV-associated nephropathy (6.1%). Procedural complications occurred in 7/224 (3.1%) insertions, whilst 37/185 (20.0%) developed catheter-related infection and 54/185 (29.2%) developed dysfunctional catheters. There were no deaths linked to catheter-related complications. Conclusion: Our patients' demographic profile, ESRD aetiology, complication rate for procedural complications and catheter-related infections are roughly on par with international studies; however, the catheter dysfunction rate is higher than in the aforementioned studies. This possibly reflects the difficulty of accessing specialist care for our patients, the majority of whom reside outside the Mangaung district. Further studies with larger sample sizes are required to demonstrate statistically relevant associations.
Introduction
End-stage renal disease (ESRD) is an increasing healthcare concern across the world with a high mortality rate and associated economic implications, particularly in Southern Africa, where it affects a younger demographic than in developed countries. 1,2 An effective screening programme would assist in early nephrologist or renal centre referral, which has been shown to decrease the morbidity and mortality of these patients. 3,4 In state healthcare, 44.1% of the dialysis population is managed with haemodialysis, and at our institution a large portion of the dialysis population undergoes tunnelled haemodialysis catheter insertion, either for temporary vascular access (whilst grafts or fistulae mature or the peritoneum recovers) or when other vascular access routes are exhausted. 5 Tunnelled haemodialysis catheters (TDC) do offer some advantages, including immediate dialysis and no repeated venepuncture. However, they are associated with an increased risk of complications and significant mortality when compared with other types of vascular access, with a 1-year survival of patients on TDC of 75%. 3,6 Studies in China and Croatia have demonstrated that multiple risk factors carry an increased risk of complications. 3,7 However, no local study has assessed our complication rate and investigated epidemiological risk factors. Filling this void would assist in the implementation of focused and effective screening programmes.
The goals of this study were threefold. We aimed to establish the epidemiological profile of patients at an academic hospital who received TDCs at the Interventional Radiological Unit over a 60-month period, to establish the complication rate within that population group, and to determine whether associations between the risk factors, epidemiological data and complications could be established.
Study design and setting
This was a retrospective, analytic study conducted at an Academic Hospital Interventional Radiology Unit, which serves the population of the Free State province, as well as occasional out of province and private patients.
Study population and sampling strategy
The study population consisted of all state patients who received TDCs at an Academic Hospital Interventional Radiology Unit during the period of 01 March 2011 to 29 February 2016. All patients aged 18 years and older, who received their catheter at the interventional suite, were included.
Catheter insertion
Catheters were inserted by an experienced interventional radiologist in the Interventional Unit via percutaneous access. The procedure was performed under sterile theatre conditions with ultrasound-guided venous access. All TDCs inserted in our centre are cuffed. The catheter is tunnelled subcutaneously for approximately 9 cm - 10 cm from the venous access site. The catheter is then placed under fluoroscopic control with tip positioning in the right atrium. Cutaneous fixation is maintained with sutures until cuff adhesion (approximately 8-12 weeks). Initial patency and positioning are confirmed during the procedure. The catheter is then locked with heparin (1000 U/mL). The preferred access site was the internal jugular vein; however, in patients with previous access and complications, other sites were used. Subclavian access was used when no other access site was available.
Secondary intervention
In patients where the catheter is unable to maintain adequate extracorporeal blood flow and thrombolytic therapy (alteplase) has been ineffective, brushing is performed in the Interventional Unit under fluoroscopic guidance and sterile conditions to displace and remove the fibrin sheath (a composite of cells and debris that forms a biofilm around catheters, which can obstruct the lumen by acting as a valve) or thrombus, by using a Terumo guidewire to sound the catheter lumen and rinsing the lumen with saline. The catheter is then locked with heparin (1000 U/mL). If brushing fails to restore patency, then snaring is employed: vascular access is gained from another site and mechanical stripping of the catheter tip is performed via a snare.
Data collection
Patients were identified using the procedural register and further information was gathered from existing electronic medical records. A comprehensive data sheet was completed. Details captured included date of birth, age at catheter insertion and residence. Aetiology was grouped into diabetes, primary glomerular disease (including nephrotic syndrome, acute glomerulonephritis and rapidly progressive glomerulonephritis), hypertensive nephropathy, acute renal failure, obstructive uropathy, renal tubular interstitial diseases (including acute tubular necrosis, tubulointerstitial nephritis, contrast nephropathy, reflux nephropathy and myeloma), Human Immunodeficiency Virus Associated Nephropathy (HIVAN), drug induced nephropathy, polycystic kidney disease and unknown.
For ease of analysis, complications were grouped into procedural complications (air embolism, bleeding and pneumothorax), catheter-related infection and catheter dysfunction (malposition, thrombosis, fibrin sheath, central vein stenosis and loosening or catheter breakage).
Further details recorded included whether the catheters underwent repair or brushing and if they were removed because of complications, fistula maturation or peritoneal dialysis catheter use. In the cases of patient demise, it was noted whether this was a result of catheter-related complications or other causes.
Primary and secondary patency were calculated. Primary patency is regarded as the duration of catheter patency until the first intervention required to maintain patency, whilst secondary patency is regarded as the length of time from insertion until catheter removal because of complication or catheter failure. 8
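The two patency definitions above translate directly into a date calculation. The record fields below (insertion date, first intervention, removal for failure, last follow up) are assumed for illustration, with catheters still functioning censored at last follow up.

```python
from datetime import date

def patency_days(inserted, first_intervention, removed_for_failure, last_follow_up):
    """Return (primary, secondary) patency in days.
    Primary patency ends at the first intervention required to maintain
    patency (e.g. brushing or snaring); secondary patency ends at removal
    for complication/failure; otherwise censor at last follow up."""
    primary_end = first_intervention or removed_for_failure or last_follow_up
    secondary_end = removed_for_failure or last_follow_up
    return (primary_end - inserted).days, (secondary_end - inserted).days

# hypothetical catheter: inserted 01 Jan, brushed 09 Apr, removed 01 Jul
primary, secondary = patency_days(date(2014, 1, 1), date(2014, 4, 9),
                                  date(2014, 7, 1), date(2014, 12, 31))
```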
Data analysis
The primary researcher entered all the data onto an Excel data sheet, which was then submitted for statistical analysis by the Department of Biostatistics at the University. Results were summarised as frequencies and percentages (categorical variables) and means, standard deviations and percentiles (numerical variables). Associations were investigated using appropriate hypothesis testing with p <0.05 considered statistically significant.
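For the categorical associations investigated here (e.g. insertion site vs. catheter-related infection), the hypothesis testing described above typically amounts to a 2x2 contingency test, with Fisher's exact test preferred when cell counts are small. The counts below are invented for demonstration, not the study's data.

```python
from scipy.stats import fisher_exact, chi2_contingency

# rows: femoral vs. non-femoral insertion; columns: infection vs. no infection
table = [[6, 4],
         [31, 144]]

odds_ratio, p_fisher = fisher_exact(table)        # exact test for small cells
chi2, p_chi2, dof, expected = chi2_contingency(table)
significant = bool(p_fisher < 0.05)               # study's significance level
```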
Ethical considerations
Ethical clearance was obtained from the Health Sciences Research Ethics Committee of University of Free State (HSREC 62/2017) and Free State Department of Health (UFS-HSD2017/0478).
Results
A total of 179 patients received TDCs during the study period and qualified for the study. In the study sample, 105 were male (58.7%) and 64 (35.8%) resided in Mangaung district. The mean age at insertion was 40.4 years with a standard deviation of 12.05. The four leading aetiologies were hypertensive nephropathy, primary glomerular disease, HIVAN and unknown aetiology (see Table 1 for more information).
The patients received 231 catheters. One hundred and fifty-eight patients had catheters inserted for the first time. The majority of patients (141, 77.3%) received one catheter, 25 patients (14.0%) received two, 10 patients (5.6%) received three, 1 patient (0.6%) received four and 1 patient (0.6%) received five catheters during the study period. Of the 231 catheters inserted, 224 (97.0%) had information regarding insertion and 185 (80.1%) had information regarding follow up. The majority of lines were inserted in the right internal jugular vein, with the left internal jugular vein being the second most common site, as per Table 2.
Procedural complications occurred in 3.1% of insertions whilst 20.0% developed catheter-related infections and 29.2% developed complications related to dysfunction (see Table 3 for further breakdown).
The mean age at insertion varied between the complication groups: in the catheter-related infection group, the mean age was 37.5 years; in the procedural complication group, mean age was 40.2 years; and in the catheter dysfunction group, mean age was 39.8 years. Table 4 summarises the patient characteristics, complications recorded and the associations between them.
Out of the 231 catheters, 45 catheters (19.5%) had incomplete follow up. Of the catheters with adequate follow up, 4.3% went on to receive catheter repair, 17.7% required a single brushing, 5.4% received two brushings and 3.2% received three brushings, with a primary patency of 98 days.
Complications resulted in 27.9% of the catheters being removed whilst 32.3% were removed because of fistulas and 18.8% because of peritoneal dialysis being initiated or resumed. No patients demised because of catheter-related complications, whilst 10.2% of the patients demised because of other causes. Secondary patency rate was 87.0% at 6 months and 76.1% at 12 months.
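With 19.5% of catheters lost to follow up, patency proportions at fixed time points such as those above are usually obtained from a product-limit (Kaplan-Meier) estimator that censors catheters removed for fistula use, peritoneal dialysis or loss to follow up. A minimal hand-rolled sketch follows; the toy cohort is illustrative, not the study's actual calculation.

```python
import numpy as np

def km_patency_at(times, events, t_query):
    """Product-limit estimate of catheter patency at t_query (days).
    times: follow-up duration per catheter; events: 1 = removed for
    complication/failure, 0 = censored (fistula, PD, lost to follow up)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    survival = 1.0
    for t in np.unique(times[events == 1]):      # distinct failure times, ascending
        if t > t_query:
            break
        at_risk = np.sum(times >= t)
        failures = np.sum((times == t) & (events == 1))
        survival *= 1.0 - failures / at_risk
    return survival

# toy cohort: failures at 100 and 300 days, censoring at 200 and 400 days
patency_250 = km_patency_at([100, 200, 300, 400], [1, 0, 1, 0], 250)
```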
Discussion
The high financial burden of ESRD has a considerable impact on the limited resources of the South African health system. Therefore, it would be of benefit if there were earlier diagnosis and efficient management of renal disease, preventing or delaying the progression to ESRD. The Academic Hospital Interventional Radiology Unit assists with TDC insertion for a large percentage of the Free State dialysis population, as can be seen by considering that in 2016 the Free State had 235 patients on dialysis; our study population over the five year period numbered 179 patients. 5 Despite the increased risk of infection and mortality compared with fistulae or grafts, TDCs remain an important part of dialysis patient care. 9,10,11 The epidemiological analysis of the study population revealed that the patients' age (mean of 40.4 years) was in keeping with a local South African study on ESRD, but younger than studies from other African countries and developed countries, where renal failure is predominantly a diagnosis of the middle aged and the elderly. 1,2 Male patients formed 58.7% of the sample; this corresponds to previously reported rates in Africa of 61% - 63% male gender in renal failure patients. 1 The female proportion of the study population experienced the majority of the complications; however, the gender discrepancy was not found to be statistically significant, which is in keeping with an international study indicating that patient gender did not impact catheter survival. 12 A significant percentage (64.2%) of the study population resided outside the Mangaung district, with implications in terms of ease of access to specialised medical services and further management of the TDC and the patient. The patients outside the Mangaung district experienced the majority of the complications (57% - 68%) across all three complication groups, although the discrepancies were not statistically significant.

[Table 3 fragment - Catheter-related infection (N = 185): 37 (20.0%). Note: Procedural complications were recorded during the initial catheter insertion and admission, and thus have a larger denominator than catheter-related infection and dysfunctional complications, which were recorded in patients who returned for follow up. More than one complication could occur per insertion. †, Denominators are procedures.]
End-stage renal disease aetiology was similar to other studies in Africa, with hypertension being the most commonly recorded cause in 43.6% of patients versus 34.6% (Sudan) and 30.9% (Cameroon). Further common causes in our study included primary glomerular disease and HIVAN. In Cameroon, other aetiologies included glomerulonephritis (15.8%), diabetes (15.9%), HIVAN (6.6%) and unknown (14.7%). 13 In a Sudanese study the causes included chronic glomerulonephritis (17.6%), diabetes (12.8%), obstructive uropathy (9.6%), and in 10.7% no cause was identified. 1 Hypertension as an aetiology constituted a larger percentage of this study population than in international studies, although it is difficult to determine whether this was primary hypertension or secondary to chronic kidney disease. Additionally, this study had a high percentage of patients with an unknown cause. These findings could be a reflection of the lack of efficient primary healthcare, with many patients presenting late in the course of the disease and not receiving renal biopsies.
The majority of catheters were inserted in the right jugular vein, with no statistically significant discrepancy between site of insertion and procedural or dysfunctional complication rates; however, there was a statistically significant correlation between catheter-related infection and insertion of the catheter in either femoral site. In a study by Dewelter et al., it was demonstrated that right jugular insertion confers a significantly improved outcome as compared with other sites of insertion. 14 This study, as compared with a study in Pakistan, had a decreased incidence of procedure-related complications (3.2% vs. 5.6%) but an increased rate of catheter-related infection (20% vs. 17.3%) as well as dysfunction-related complications (29.2% vs. 16%). 15 The increased incidence of catheter-related infection and complications causing dysfunction perhaps reflects the difficulty for our patients in accessing specialist care after the procedure, particularly if they reside in another district. In light of the above, it might be of value to consider chronic low-dose aspirin to maintain tunnelled central venous catheter (CVC) patency. 16 Catheter-related infections remain a significant problem within the dialysis population, with implications for cost of care and patient quality of life, as patients with catheter-related infections have an average hospital stay of 6.5 days, undergo several tests and receive treatment during the hospital stay. 17 Considering the incidence of catheter-related infections, future studies could analyse the benefit of antimicrobial barrier caps in reducing this rate, as per the Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines from the National Kidney Foundation. 16 The secondary patency rate is better than that of a study in India at 6 months (87.0% compared with 55%), and the 12-month catheter survival rate falls within the wide range found in a previous review article of 2007 (between 25% - 75%).
8,18 A high percentage of the catheters were removed because of initiating or resuming peritoneal dialysis or use of fistulae. This is perhaps because of the increased number of patients. 5 There were no deaths in our study because of catheter-related complications.
The aetiology in the study population, on average, did not have a statistically significant impact on the complication rate, although other studies have shown that diabetes conveys an increased risk and that age can also have an influence. 7 Polycystic kidney disease was shown to have an increased risk of catheter-dysfunction-related complications. The reason for this is unknown and merits further investigation.
Although our study was unable to establish a statistically significant association between demographics, aetiology and complications in the majority of cases, we were able to demonstrate an association between femoral site catheter insertion and the risk of catheter-related infection, and between polycystic kidney disease and an increased risk of catheter dysfunction. Studies have shown that associations exist between several patient characteristics (male gender, increased age, diabetic nephropathy, hypertensive nephropathy and glomerulonephritis) and the risk of complications. 3,7
Study limitations
Many patients who had their catheter inserted and were then managed further in other centres were lost to follow up, resulting in incomplete information, particularly with regards to catheter-related infection and dysfunctional catheter complications. A further challenge was the relative paucity of renal biopsies to confirm the ESRD aetiology.
Conclusion
Our demographics, aetiology of ESRD and complication profile largely correspond to other studies, except for an increased complication incidence in females, an increased percentage of hypertension as the cause of ESRD and an increased percentage of catheter dysfunction complications. These findings are perhaps a reflection of the challenges our primary healthcare system faces and the difficulty these patients have in accessing specialist care in the periphery. Because of the limited number of patients and complications, this study was unable to establish statistically significant correlations between complications and epidemiological factors for many of the measured characteristics.
In our setting, given pre-existing research that has demonstrated a decreased risk of complications with early referral to specialist care and dialysis initiation with other vascular access options (besides TDC), 4 it would be optimal to create a screening programme for high risk patients (HT, DM). 2 If a South African multicentre study with a larger study population were able to confirm local risk factors for complications, then appropriate care centres could implement protocols for increased vigilance and screening for complications in the vulnerable population groups. This could also lead to and assist with the formation of local guidelines for the management of dialysis, such as the KDOQI 2018 guidelines. 16 Together, these could assist in early identification of patients at risk of developing ESRD and lead to earlier referral to specialist care, which has been shown to have a positive effect on patient outcome. 4,18,19,20
"year": 2019,
"sha1": "f5f435aa0a49df8b322745fd4aa2714489bc4d9d",
"oa_license": "CCBY",
"oa_url": "https://sajr.org.za/index.php/sajr/article/download/1791/2380",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "22cc34ed5ef97ca38d1c988b90d993b9d0485f16",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Development and validation of a lipase nasogastric tube position test
Background Nasogastric tube position should be checked every day by either aspirate pH or chest radiography to prevent fatal misplaced feeding into the lungs. Many patients do not have acidic gastric aspirates and require daily chest radiographs. We developed and validated a lipase test that was compatible with non-acidic gastric aspirates. Methods We conducted evaluations of diagnostic test accuracy at a teaching hospital in development and validation stages. Development: We collected gastric and lung aspirates from 34 consecutive patients. We measured pH and human gastric lipase activity in the laboratory. These data helped us develop the lipase test. Ingenza Ltd (Roslin, Scotland) created tributyrin-coated pH test paper; human gastric lipase converts the tributyrin into butyric acid, thus correcting false negatives. Validation: We tested nasogastric feeding tube aspirates from 36 consecutive patients with pH and lipase tests, using chest radiography or trial by use as the reference standard. Results Development: We demonstrated human gastric lipase activity in the non-acidic stomach aspirates. Validation: The accuracy of the lipase test (sensitivity 97.2%, specificity 100%) was significantly better than that of pH (sensitivity 65.7%, specificity 100%, p<0.05). Conclusions When nasogastric tube stomach aspirates were not acidic and pH was falsely negative, the lipase test showed a true positive and was significantly more accurate.
INTRODUCTION
Nasogastric tubes are commonly used to assist enteral nutrition. 1 The National Patient Safety Agency issued guidelines recommending that aspirate pH is tested before every feed and at least once every day to check nasogastric tube position and prevent harm from feeding into the lungs through a misplaced nasogastric tube. An acidic result (pH≤5.5) indicates that the nasogastric tube is correctly positioned in the stomach and feeding is safe. If the result is not acidic ( pH>5.5), a chest radiograph is indicated to check that the nasogastric tube is positioned in the stomach and not in the lungs. 2 3 No tests other than pH and chest radiography are reliable and currently recommended. 4 However, up to 42% of hospital inpatients receive antacid medications that render the results of pH test paper falsely negative. 5 The ideal solution would be a test that was accurate despite non-acidic gastric aspirates, safe, point-of-care and non-ionising. Other authors have reported some success with pH and magnetic-tipped nasogastric tube stylets. 6 7 The use of gastric enzymes in nasogastric tube position tests has been mooted, but no evaluations of clinically viable prototypes have been published. 8 Human gastric lipase is an endogenous gastric enzyme, which starts the digestion of dietary triglyceride in the human stomach. 9 Chief cells secrete human gastric lipase
Summary box
What is already known about this subject
▸ Nasogastric tube position should be checked once a day to prevent the never event of misplaced feeding into the lungs.
▸ Only two tests are recommended: aspirate pH and chest radiography.
▸ Many patients on acid-suppressing medication never have acidic gastric aspirates, which makes the pH test falsely negative.
▸ It is not feasible to obtain daily chest radiographs for these patients, especially in the community; therefore, there is no viable option.
What are the new findings
▸ We describe the development of a gastric lipase-augmented nasogastric tube aspirate pH test.
▸ We showed that lung aspirates have no lipase activity.
▸ We then showed that false-negative pH results were corrected by the lipase test, which significantly improved accuracy.
How might it impact on clinical practice in the foreseeable future?
▸ We intend to develop this prototype and create a more viable daily nasogastric tube position test that works for patients on acid suppression both in hospital and in the community.
entirely from the gastric fundus. 10 11 It is not known for certain if human lipase activity is present in the lungs. Human gastric lipase is relatively stable and its production, unlike the secretion of hydrochloric acid from gastric parietal cells, is not affected by antacid medications. 12 13 Production of human gastric lipase is well developed at birth 14 and is stimulated by pentagastrin and a high-fat diet 15 and reduces with age, but does not diminish completely. 11 One barrier to a single reagent test is that human gastric lipase is inactivated by acidic stomach contents and therefore is unsuitable as a means of determining nasogastric tube position on its own. It has been suggested that a combined test for pH incorporating a gastric enzyme may be significantly more accurate than each in isolation. 8 The objective of this study was to develop and validate a nasogastric tube position test that was compatible with non-acidic gastric aspirates by utilising human gastric lipase to lower the pH of gastric aspirates on pH test paper.
Study design
We present two prospective studies. The development phase explored human gastric lipase activity from stomach and lung aspirates in the laboratory. The validation phase was a diagnostic test study that trialled the lipase test versus pH to determine nasogastric tube position, reported in accordance with STARD guidelines. 16

Setting
This research project was conducted in the UK at a single tertiary-referral acute London teaching hospital between 2011 and 2012. Favourable opinions were obtained from the UK National Health Service research ethics committees (Refs: 10/H0706/45 and 10/H0724/76).
Development phase
Recruitment consisted of consecutive adult patients undergoing major upper-gastrointestinal surgery involving one-lung ventilation, a procedure that ensured accurate collection of stomach and lung aspirates. Patients known to have no gastric fundus (eg, previous gastrectomy) were excluded because human gastric lipase is exclusively produced by the fundus. 11 All other patients who gave valid consent to participate were included. After the patient was anaesthetised, the consultant anaesthetist inserted a nasogastric tube. The consultant surgeon checked whether the tip of the nasogastric tube was correctly positioned in the stomach by palpation after gaining access to the abdominal cavity through a laparotomy incision and before mobilising any organs. This direct confirmation of nasogastric tube position represented the reference standard. After this confirmation, stomach aspirates were taken. In addition, the consultant anaesthetist took lung aspirates under direct vision by aspirating from the newly reinflated lung near the end of the operation. We immediately labelled the samples with anonymous codes and froze them to −80°C ready for transport to the off-site laboratory. A biochemist thawed the samples and tested pH and human gastric lipase activity at the off-site laboratory. The analysis was blinded. pH was measured by wetting the pH test paper with an aspirate and waiting for 1 min (Merck, New Jersey, USA, Ref: 1095840001). Human gastric lipase activity was measured using the 718 STAT Titrino (Metrohm, Herisau, Switzerland) using methods that have already been described. 13 A pH of ≤5.5 indicated correct and >5.5 indicated incorrect nasogastric tube position. Any human gastric lipase activity indicated correct nasogastric tube position and no activity indicated incorrect nasogastric tube position.
Design of lipase test
The results of the development phase informed the creation of the lipase test. A biochemist coated pH test paper (Merck, New Jersey, USA, Ref: 1095840001) with tributyrin (Ingenza Ltd, Roslin, Scotland). This substrate produces an alcohol and butyric acid when metabolised by human gastric lipase. We hypothesised that active human gastric lipase in non-acidic nasogastric tube stomach aspirates would create enough butyric acid to change the pH on the lipase test paper to ≤5.5, thus correcting false-negative results.
Validation phase
The accuracy of the lipase test was determined in the validation phase. Recruitment consisted of consecutive adult patients treated clinically with a nasogastric feeding tube. Patients known to have no gastric fundus (eg, previous gastrectomy) were excluded. All other patients who gave valid consent to participate were included. The reference standard test consisted of chest radiography, or trial by use if chest radiography was not indicated. Consultant radiologists interpreted all chest radiographs while blinded to the index test results. Criteria for correct nasogastric tube position on the chest radiograph included a straight vertical course near the midline passing through the carina and not following a bronchus, with the tip below the diaphragm on the same side as the gastric bubble. An aspirate from the nasogastric tube was taken within 30 min of the chest radiograph, and there was no sign of nasogastric tube displacement, such as a change in tube length, in the intervening time. If there were signs that the nasogastric tube might have been displaced (for example, the patient pulling the tube, the sticky plaster not securing the tube, or the length of tube at the nares having changed), then repeat aspirate tests and radiography were performed after the tube was resited. Trial by use was used because the research ethics committee deemed it inappropriate to obtain additional chest radiographs for the purpose of this study. Patients who had trial by use already had the position of their nasogastric tube satisfactorily confirmed earlier by pH, chest radiography or direct confirmation during an operation. All patients were followed up after their first feed after entering the study had been administered, and again at discharge from the hospital, to ensure that no misplaced nasogastric tube feeding into the lung (ie, aspiration pneumonia) had occurred during their admission. We tested the aspirate with the standard 0-6 pH test paper with 0.5 increments and also with 2-9 pH paper with 0.5 increments.
Statistics
In the development and validation phases, we compared the accuracy of pH and lipase tests in the same participants with paired analyses. 17 We required at least 10 patients for the development phase. 18 For the validation phase, we estimated that n=52 was required to rule out a clinically significant difference using pilot data (first 20 patients, pH test paper accuracy was 65%, lipase test accuracy was 100%, but 95% was used in the calculation, α=0.05, power=80%). 19 Planned interim analysis at the end of the originally allotted study time period showed a significant difference, and therefore recruitment was not extended beyond 36 patients.
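The paired comparison of the pH and lipase tests on the same participants can be illustrated with an exact McNemar test on the discordant pairs. This is a minimal sketch with hypothetical counts; the study's actual paired table and the exact method of its reference 17 may differ.

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from the discordant pairs:
    b = pH negative / lipase positive, c = pH positive / lipase negative."""
    n = b + c
    k = min(b, c)
    # Two-sided binomial tail probability under H0: P(discordance) = 0.5
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical discordant counts: 11 correctly placed tubes where the
# lipase test was positive but pH was falsely negative, and none the
# other way round (not the study's actual table).
p_value = mcnemar_exact(b=11, c=0)
print(round(p_value, 4))
```

With discordance in only one direction, the exact test reaches significance well below 0.05, which mirrors the kind of paired difference the authors describe.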
Development phase Participants
We recruited 36 consecutive patients who underwent upper-gastrointestinal surgery between 2011 and 2012. Two patients were not included in the analysis, because one patient withdrew consent and another patient had an inoperable tumour and the position of their nasogastric tube could not be confirmed during the operation. Therefore, data from 34 patients were included in the analysis, 23 men and 11 women. The median age of participants was 68 years (range 44-82). We obtained gastric aspirates from 32 patients (2 patients had dry gastric aspirates) and lung aspirates from 23 patients (11 patients had dry lung aspirates). Twenty-two patients (65%) were taking antacid medication (12 were taking omeprazole, 7 were taking lansoprazole and 3 were taking esomeprazole). We excluded no data from the analysis. There were no indeterminate or outlier results.
Stomach samples
The pH of the 32 stomach samples ranged from 1 to 8.5 with a mean of 4.4. 19 (59%) of the stomach samples had a pH of 5.5 or less, which would indicate correct placement of a nasogastric tube. Human gastric lipase activity was present in 21 (66%) of the stomach samples and all of the samples between pH 3 and 8. Human gastric lipase activity was not present in samples that were more acidic than pH 3 and was also not present in a single alkaline sample at pH 8.5. This was the most alkaline sample and human gastric lipase activity was present in samples at pH 8. Crucially, 31 (97%) of the stomach samples had either a pH of 5.5 or less and/or human gastric lipase activity. Therefore, this indicated that a combined pH and human gastric lipase test could be more accurate than pH alone.
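The decision logic suggested by these data (accept gastric placement when the aspirate is acidic or shows lipase activity) can be sketched as follows. The sample values are hypothetical and merely mirror the patterns described above, not the study's sample-level data.

```python
def tube_in_stomach(ph: float, lipase_active: bool) -> bool:
    """Combined rule sketched in the text: accept gastric placement if the
    aspirate is acidic (pH <= 5.5) OR shows human gastric lipase activity."""
    return ph <= 5.5 or lipase_active

# Hypothetical aspirates (pH, lipase activity):
samples = [(1.0, False),   # very acidic: lipase inactivated, pH positive
           (4.0, True),    # acidic with lipase activity
           (7.0, True),    # non-acidic: pH alone is falsely negative
           (8.5, False)]   # an alkaline sample missed by both tests
ph_only = sum(ph <= 5.5 for ph, _ in samples)
combined = sum(tube_in_stomach(ph, act) for ph, act in samples)
print(ph_only, combined)
```

The combined rule recovers the non-acidic sample with lipase activity that the pH criterion alone would miss, which is the rationale for the tributyrin-coated test paper.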
Lung samples
The pH of the 23 lung samples ranged from 6 to 8.5 with a mean of 6.9. None of the lung samples had a pH of 5.5 or less, which could have resulted in misplaced nasogastric tube feeding into the lung on pH criteria. Human gastric lipase activity was present in none of the lung samples, which was essential to the viability of a human gastric lipase-based test.
Validation phase Participants
We approached 46 consecutive ward patients who were treated clinically with nasogastric tubes between 2011 and 2012. One patient could not be recruited because they had a previous gastrectomy. Nine of the recruited patients were not included in the analysis because their nasogastric tube aspirates were dry. Therefore, data from 36 patients were included in the analysis, which included 27 men and 18 women. The median age of the participants was 67 years (range 22-88). In total, 38 patients (84%) were taking antacid medication (17 were taking omeprazole, 20 were taking lansoprazole and 1 was taking esomeprazole). Table 1 shows a summary of the results. All measurements were made twice, independently, by two assessors (OA and MB) and were always in agreement with one another.
DISCUSSION
In the development study, we determined pH and human gastric lipase activity in stomach and lung aspirates. We found human gastric lipase activity in the stomach when the pH was not acidic (pH>5.5), which confirmed that a combined pH and lipase test might be viable and more accurate than each in isolation. We then incorporated tributyrin, a substrate of human gastric lipase, onto pH test paper. Tributyrin is metabolised by human gastric lipase to form butyric acid. This acid lowers pH and corrects false-negative results from gastric aspirates that are not acidic. We also showed that there is no human gastric lipase activity in lung aspirates, another new and crucial finding, because if there were any human gastric lipase activity in the lung, the lipase test could produce catastrophic false positives. In the second study, we examined the lipase test on aspirates from ward patients with nasogastric feeding tubes and demonstrated that it had significantly improved accuracy at determining nasogastric tube position when compared with pH.

We addressed an important unmet need in this translational research project with a novel innovative solution that can be clinically implemented and result in tangible benefits to patients, healthcare workers and organisations. We used robust methods and reported these in accordance with the STARD, QUADAS-2 and QAREL quality checklists for studies of diagnostic tests. 16 20 21 In the validation phase, we included a spectrum of participants that is representative of the patients who will receive the test in practice. Very few patients were excluded from the study and we detailed the reasons in each case. The reference standard tests used in both studies were likely to classify the position of the nasogastric tube correctly, and the index tests were performed very close to the time of the reference standard tests. The index test did not form part of the reference standard tests. No patients were lost to follow-up.
In the validation phase, assessment of the index tests was independent, blinded and random with good inter-rater reliability and with the same clinical data as would be available were the test performed in practice. There were no uninterpretable, indeterminate and intermediate results or withdrawals after entering the study.
We could not include patients who were unable to give valid consent. This included some patients in the acute phases of a stroke. We did include patients with stroke who could give valid consent. Therefore, we believe that the potential spectrum bias introduced by not including patients who cannot give valid consent does not affect the generalisability of the results. It was deemed unethical to perform additional research chest radiographs on patients who did not require them clinically by an independent research ethics committee, as trial by use as described in the methods provided an equivalent reference standard to chest radiography and more accurately represented clinical practice. The lipase test and all aspirate nasogastric tube position checks could produce a false positive result if fresh gastric contents were aspirated from the lungs. This diagnosis might be missed in patients with clinically silent aspiration pneumonia such as those in a coma. Therefore, we support the use of chest radiographs in patients at risk of silent aspiration pneumonia to check both nasogastric tube position and for radiological signs of the diagnosis.
Up to 42% of hospital inpatients receive antacid medications that render the results of pH test paper falsely negative. 5 According to the guidelines, these patients require a chest radiograph every day. Daily chest radiography is undesirable due to radiation, cost, time and inconvenience, and is inaccessible once the patient leaves the hospital. The ideal solution would be a test that was accurate despite non-acidic gastric aspirates, safe, point-of-care, intuitive and non-ionising. Other authors have reported success with pH and magnetic-tipped nasogastric tube stylets. 6 7 These have not been widely adopted because they are operator dependent, requiring training of specialist teams that do not represent the majority of end-users. The lipase test is a viable daily nasogastric tube position check for patients in hospital and in the community. Patients with no functioning gastric fundus to secrete human gastric lipase will not benefit. The lipase test will reduce reliance on chest radiographs. A reduction in reliance on chest radiographs is desirable to minimise the delay to start feeding, exposure to radiation, cost, burden on services and misinterpretation. The National Patient Safety Agency received reports of 32 deaths and 80 severe harms associated with feeding into the lungs through misplaced nasogastric tubes between 2002 and 2010. 2 3 Misinterpretation of chest radiographs was the most common reason for harmful misplaced nasogastric tube feeding into the lungs. 1 22

Acknowledgements
The authors wish to thank Dr Alison Knaggs, Dr Ian Fotheringham, Nurse Lucy Farley and the staff of the radiology department, particularly Kevin O'Neill at St. Mary's Hospital, for invaluable assistance in data collection.
Contributors OA contributed to the study concept and design; acquisition, analysis and interpretation of data; drafting and critical revision of the manuscript for important intellectual content; and statistical analysis. RC obtained funding; and took part in the acquisition, analysis and interpretation of data; critical revision of the manuscript for important intellectual content and provided technical support. MH contributed to the acquisition of data and analysis. GBH was responsible for the critical revision of the manuscript for important intellectual content, and study supervision; and is the lead author/ guarantor. All the authors agreed on the final version of the manuscript. The lead author/guarantor had full access to all of the data and takes full responsibility for the veracity of the data and statistical analysis.
Funding This study was funded by a Smart Scotland grant from Scottish Enterprise.
Competing interests All the authors had financial support from Scottish Enterprise: Smart Scotland for the submitted work. Ingenza Ltd and inventors hold the intellectual property (patent pending) on Assay for positioning of a feeding tube and method thereof. Imperial College employed OA, MH and GBH; Ingenza Ltd employed RC.
Patient consent Obtained.
Ethics approval NHS research ethics committee.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No additional data are available.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
"year": 2016,
"sha1": "046b647ae43e824704f482ad78bea74b09a985a9",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopengastro.bmj.com/content/bmjgast/3/1/e000064.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "046b647ae43e824704f482ad78bea74b09a985a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Determination of iNOS-2087A>G Polymorphism in Acute Pancreatitis Patients
Purpose: To determine whether single nucleotide polymorphism (SNP) of inducible nitric oxide synthase (iNOS) is involved in susceptibility for acute pancreatitis. Material and Methods: Genomic DNA was extracted from blood samples collected from cases of acute pancreatitis (n=110) and normal population controls frequency matched for age and sex (n=232). iNOS – 2087A>G polymorphism was genotyped using TaqMan allelic discrimination assays. The association of the genetic polymorphism with clinical and pathological data of the patients was evaluated. Results: We have found no significant statistical association between this polymorphism and an increased risk of developing acute pancreatitis. Conclusion: In Romanian population, the risk of developing acute pancreatitis is not increased by the presence of iNOS-2087A>G polymorphism.
Introduction
Acute pancreatitis is a localized pathological condition of the pancreatic gland that involves a systemic inflammatory response [1,2]. This is the consequence of an imbalance between proinflammatory mediators and anti-inflammatory mechanisms, produced through an excess of proinflammatory mediators [3,4]. Several studies have demonstrated that the main players in the group of pro-inflammatory mediators are the cytokines [2,3,5,6]. High levels of cytokines are responsible for the activation of the reactive oxygen species pathway involved in oxidative stress mechanisms [1,7,8]. In this way a large amount of nitric oxide, a highly reactive free radical, is produced by nitric oxide synthase (NOS) starting from the amino acid L-arginine [7]. It has been shown that, under certain conditions, nitric oxide has both protective and deleterious actions in the cardiovascular, neuronal, digestive and immunological systems [1,9-11].
Several studies revealed that nitric oxide levels are also increased in the early stages of acute pancreatitis, being associated with a high risk of sepsis and shock [12-14]. High levels of nitric oxide are the result of increased activity of one of the nitric oxide synthase (NOS) isozymes, inducible NOS (iNOS) [8,14,15]. Inducible nitric oxide synthase is encoded by the iNOS gene and has increased activity during inflammation, suggesting an important involvement of this enzyme in disorders like acute pancreatitis [1,8,14,15]. Enzyme expression and nitric oxide production might be influenced and modified by several iNOS gene polymorphisms [16]. Higher nitric oxide production has thus been associated with increased enzyme expression determined by single-nucleotide polymorphisms (SNPs) located in the promoter region of the iNOS gene (iNOS -954G>C or -2087A>G) [17,18].
Subjects
The study population comprised 110 patients with acute pancreatitis (AP) and 232 controls with no evidence of pancreatic pathology, either inflammatory or tumoral. Cases and controls were aged >18 years, were of Romanian origin and consented to provide biological samples for genetic analysis. AP was diagnosed based on both clinical symptoms and imaging signs. Biological samples (peripheral whole blood) from both groups were obtained from patients who were admitted to the Emergency County Hospital of Craiova, Romania between January 2013 and July 2014. Controls were selected from individuals who attended the same hospital and had no history of acute or chronic inflammatory, infectious, cancer or autoimmune disorders. The study design was approved by the Ethics Committee of the University of Medicine and Pharmacy of Craiova, Romania. All participants were properly informed and signed a written consent and approval form for genetic analysis in accordance with the Helsinki Declaration. Demographic data (age, gender, body mass index, diabetes) and clinical information (family/personal history of cancer and long-term drug use, i.e. use for at least six consecutive months) were also collected for each patient.
SNP genotyping
All participants were genotyped for iNOS -2087A>G (rs2297518). The genotyping was performed in a 5-μL reaction volume using TaqMan probes fluorescently labeled with FAM or VIC and following the protocol recommended by the supplier (Applied Biosystems, Foster City, CA, USA).
Real-Time PCR cycling conditions (ViiA 7 Real-Time PCR System, Applied Biosystems) were 95°C for 10 minutes, followed by 45 cycles of 92°C for 15 seconds and 60°C for 90 seconds (annealing temperature).
Interpretation of samples was done using ViiA™ 7 Software v1.0 with the Allelic Discrimination option.
Statistical analysis
The Hardy-Weinberg equilibrium was tested to compare the observed and expected genotype frequencies among cases and controls. To estimate the association between iNOS polymorphism and AP, we calculated odds ratios (ORs) and 95% confidence intervals (95% CI) using logistic regression analysis. Genotypes were assessed using indicator variables with the common homozygote as reference. A two-sided P value < 0.05 was considered to be statistically significant.
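For illustration, a crude odds ratio with a Wald 95% confidence interval can be computed directly from a 2x2 table of carrier counts. This minimal sketch uses hypothetical counts and will not reproduce the paper's logistic-regression estimates, which may be adjusted for covariates.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table:
    a/b = exposed cases/controls, c/d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical genotype-carrier counts summing to the study's group
# sizes (110 cases, 232 controls); not the actual genotype table.
or_, lo, hi = odds_ratio_ci(a=40, b=90, c=70, d=142)
print(round(or_, 3), round(lo, 3), round(hi, 3))
```

When the confidence interval spans 1.0, as it does for these counts, the association is not statistically significant at the 0.05 level.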
Results
All 342 samples harvested from AP patients and healthy controls were genotyped. Genotyping was performed in 110 AP patients and 232 controls.
The average age of our subjects with acute pancreatitis was 59.54 years, while for the control group it was 60.65 years. Based on disease severity, out of the total 110 cases of acute pancreatitis, 23% (25 cases) had a mild form of the disorder, while 77% (85 cases) were severe cases of pancreatitis.
The polymorphism we studied was in Hardy-Weinberg equilibrium for both acute pancreatitis and healthy control groups.
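A Hardy-Weinberg check of this kind can be sketched as a one-degree-of-freedom chi-square test on genotype counts; the counts below are hypothetical, not the study's data.

```python
def hwe_chi_square(n_AA: int, n_AG: int, n_GG: int) -> float:
    """Chi-square statistic (1 df) comparing observed genotype counts
    with Hardy-Weinberg expectations derived from allele frequencies."""
    n = n_AA + n_AG + n_GG
    p = (2 * n_AA + n_AG) / (2 * n)  # frequency of allele A
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_AG, n_GG)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical control genotype counts (n = 232); chi2 below the 1-df
# critical value 3.84 is consistent with Hardy-Weinberg equilibrium
# at the 0.05 level.
chi2 = hwe_chi_square(n_AA=130, n_AG=88, n_GG=14)
print(chi2 < 3.84)
```

The same function applied to both case and control counts reproduces the kind of equilibrium check reported in the text.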
The genotype frequencies for the iNOS -2087A>G polymorphism are shown in Table 1. With a p value of 0.631 and an OR of 0.902 (95% CI: 0.590-1.378), we found no statistically significant association between the presence of this polymorphism and an increased risk of developing acute pancreatitis.
Discussion
iNOS is induced in response to inflammation. It is an enzyme that can generate large quantities of nitric oxide in response to cytokines and endotoxins, and is involved in the reactive oxygen species pathway [1,8]. The iNOS gene comprises 27 exons and is located on chromosome 17q11.2 [24,25]. Several studies have associated polymorphisms of this gene with the risk of developing various diseases [16,20,21,23,24,26]. In a study conducted by Ozhan et al, an association was shown between the iNOS -2087A>G polymorphism and susceptibility to acute pancreatitis [1].
In the present study, no statistical association was found between the presence/absence of the iNOS -2087A>G polymorphism and the risk of developing acute pancreatitis. Furthermore, a stratified analysis based on disease severity (mild and severe acute pancreatitis) was performed. Likewise, this stratified analysis showed no statistically significant results that could be associated with the risk of developing acute pancreatitis (data not shown).
Conclusion
According to these results, the iNOS -2087A>G polymorphism was not associated with an increased risk of acute pancreatitis in the Romanian population. Further extensive studies on larger groups are needed in order to clarify the role of inducible nitric oxide synthase (iNOS) polymorphisms in acute pancreatic inflammation.
"year": 2014,
"sha1": "a6836fab889ac977f4fe756699b9945f51812419",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a6836fab889ac977f4fe756699b9945f51812419",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Portioning-Out and Individuation in Mandarin Non-interrogative wh-Pronominal Phrases: Experimental Evidence From Child Mandarin
Portioning-out and individuation are two important semantic properties for the characterization of countability. In Mandarin, nouns are not marked with count-mass syntax, and it is controversial whether individuation is encoded in classifiers or in nouns. In the present study, we investigate the interpretation of a minimal pair of non-interrogative wh-pronominal phrases, including duo-shao-N and duo-shao-ge-N. Due to the presence/absence of the individual classifier ge, these two wh-pronominal phrases differ in how they encode portioning-out and individuation. In two experiments, we used a Truth Value Judgment Task to examine the interpretation of these two wh-pronominal phrases by Mandarin-speaking adults and 4-to-6-year-old children. We found that both adults and children are sensitive to their interpretative differences with respect to the portioning-out and individuation properties. They assign either count or mass readings to the bare wh-pronominal phrase duo-shao-N depending on specific contexts, but only count readings to the classifier-bearing wh-pronominal phrase duo-shao-ge-N. Moreover, the portioning-out and individuation properties associated with the individual classifier ge emerge independently in the course of language development, with the portioning-out property taking precedence over the individuation property. Taken together, the present study provides new evidence for the view that the portioning-out and individuation properties in Mandarin are encoded in classifiers rather than in nouns, and that these two semantic properties are two distinct components in our grammar.
INTRODUCTION
Since Jespersen (1924, p. 198), countability of nominal expressions has usually been defined as the property of "portioning out" (Borer, 2005) and individuating referents. Thus, portioning-out and individuation are two core concepts in the characterization of countability (Quine, 1960; McCawley, 1975; Pelletier, 1979, 2012; Ware, 1979; Allan, 1980; Gordon, 1982, 1985; Macnamara, 1982; Bloom, 1990; Chierchia, 1998, 2010; Snedeker, 2005, 2006; Borer, 2005; Bale and Barner, 2009; Rothstein, 2010, 2017; among many others). For the sake of clarity, we briefly introduce our use of these two concepts here, giving a more detailed account in Sections "Portioning-Out and Individuation in Mandarin" and "Portioning-Out and Individuation in Mandarin wh-Pronominal Phrases." We define the portioning-out function of a linguistic element as the process of carving out a discrete unit for counting (cf. Au Yeung, 2005; Borer, 2005; Rullmann and You, 2006; Huang, 2009; Huang and Lee, 2009; Li, 2013; Zhang, 2013). Two related semantic dimensions are involved in the concept of portioning-out: cardinality (singularity/plurality) and the discrete unit of counting. To illustrate, the classifier kuai in sentence (1) specifies one chunk of an entity: the cardinality is one, and the discrete unit of counting is 'chunk.' So the classifier phrase kuai-pingguo 'CL(kuai)-apple' refers to an apple chunk. By contrast, in the absence of a classifier, bare nouns in Mandarin are underspecified in quantity and no unit of counting is specified. So the bare noun pingguo in (2) can denote one or more individual apple(s) or apple chunk(s), and even apple substance in the form of purée.
(1) Panzi li you kuai pingguo
    plate on exist CL(kuai) apple
    'There is an apple chunk on the plate.'

(2) Panzi li you pingguo
    plate on exist apple
    a. 'There is/are an apple/apples on the plate.'
    b. 'There is/are an apple chunk/apple chunks on the plate.'
    c. 'There is some mashed apple on the plate.'
    (Huang, 2009, p. 40)

On the other hand, the individuation function of a linguistic element is a more restrictive notion, since it is defined on the basis of the portioning-out function. To illustrate: like the classifier kuai 'chunk' introduced above, the individual classifier ge in (3) specifies a discrete unit of counting. Unlike kuai, however, the discrete unit of counting associated with ge has to correspond to the natural unit of individual objects. Thus, the individuation function of individual classifiers in Mandarin requires that their associated nouns denote individual objects, not non-individual entities such as apple substance or an apple chunk. Taken together, due to the portioning-out and individuation functions of ge, the classifier phrase ge-pingguo in (3) denotes an individual, 'whole' apple. The individuation function distinguishes individual classifiers from non-individual classifiers, a fact that is well acknowledged in the literature (e.g., Chao, 1968; Cheng and Sybesma, 1998, 1999).
(3) Panzi li you ge pingguo
    plate on exist CL(ge) apple
    'There is an apple on the plate.'

Typologically distinct languages, as defined here by the presence/absence of count-mass syntax (e.g., English versus Mandarin), generally differ in their ways of portioning out and individuating referents. A fundamental issue that linguists and psycholinguists have been pursuing, however, is whether or not the apparent cross-linguistic distinctions reveal language universals in expressing and representing countability. The present study addresses this issue by investigating how portioning-out and individuation in Mandarin are encoded and acquired in the child grammar.
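The contrast between (1)-(3) can be summarized in a toy computational sketch. This is our own illustration, not the paper's formalism: the entity labels and unit names are invented for the example, and the model simply records that a bare noun is underspecified, that a classifier such as kuai portions out a discrete unit ('chunk'), and that ge additionally requires the natural unit of individual objects.

```python
# Toy model (our illustration, not the paper's analysis) of how a classifier
# constrains the denotation of the noun pingguo 'apple'.

# Each candidate referent is tagged with the discrete unit of counting it
# instantiates; None marks unindividuated substance.
ENTITIES = {
    "whole apple": "natural individual",
    "apple chunk": "chunk",
    "apple puree": None,
}

# A classifier's portioning-out function picks a unit; for ge, the unit is
# the natural unit of individual objects (its individuation function).
CLASSIFIER_UNIT = {
    "kuai": "chunk",
    "ge": "natural individual",
}

def denotation(classifier=None):
    """Possible referents of the nominal, given an optional classifier."""
    if classifier is None:
        # Bare noun: no unit of counting is specified, all readings remain.
        return sorted(ENTITIES)
    unit = CLASSIFIER_UNIT[classifier]
    return sorted(e for e, u in ENTITIES.items() if u == unit)

print(denotation())        # bare pingguo as in (2): all three readings
print(denotation("kuai"))  # kuai pingguo as in (1): an apple chunk
print(denotation("ge"))    # ge pingguo as in (3): a whole apple only
```

The sketch makes the asymmetry explicit: removing the classifier widens the denotation, while adding ge narrows it to natural individuals.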
Consider English first. In this language, plural morphology and determiners portion out and individuate the referents of associated nominal expressions (Borer, 2005). For example, in (4) and (5), due to the presence/absence of the plural marker -s, the count and mass uses of the same noun chicken in the phrases 'the chickens' and 'the chicken' differ in portioning-out and individuation (Borer, 2005). While the chickens denotes multiple individual chickens without specifying how big these chickens are, the chicken denotes a certain amount of chicken mass without specifying whether there exist individual chickens (cf. Bale and Barner, 2009).
(4) He did not eat the chickens this evening.
(5) He did not eat the chicken this evening.
Previous empirical research shows that English-speaking children acquire the portioning-out function of count determiners and plural morphology earlier than their individuation function. The early acquisition of the portioning-out function is documented in Kouider et al. (2006). In this study, based on a Preferential Looking Task, English-speaking children as young as 36 months responded appropriately to the distinct portioning-out information expressed by singular-plural marking. When these young children were presented with sentences like 'Look at the blickets' (in which the plural marker -s was attached to the novel word blicket), they looked at a set of multiple individual objects. By contrast, when they were presented with sentences like 'Look at the blicket' (in which the singular form of blicket was used), they looked at a single individual object. This suggests that children are aware of the plural unit of portioning-out encoded by -s, as distinguished from the singular unit of portioning-out encoded by the same word without the plural marker (see also Snedeker, 2005, 2006).
In contrast with the early emergence of the portioning-out function of English count determiners and plural morphology, the individuation function is delayed in English-speaking children. The delay is manifested in children's comprehension and production of English number words, count determiners (i.e., a, more, every, and both) and plural morphology. Unlike adults, 3-to-4-year-old children treat discrete physical objects (i.e., parts of broken individual objects) as units of portioning-out. This kind of non-adult-like behavior is reported in Brooks et al. (2011) (see also Shipley and Shepperson, 1990; Wagner and Carey, 2003). For example, on a Counting Task in which children were asked questions like 'Can you count the shoes?', children included partial objects (e.g., three divided parts of a shoe) as well as whole objects in their counting. This non-adult-like response can be attributed to the delayed individuation function of the plural morphology in 3-to-4-year-old English-speaking children, in the sense that children have not yet acquired the linguistic knowledge that the multiple entities associated with the plural morphology must be individuals.
Unlike English, classifier languages like Mandarin have no grammatical categories such as plural morphology and determiners to encode portioning-out and individuation. Rather, classifiers are used to express the meanings associated with portioning-out and individuation (Borer, 2005; Li, 2013; Zhang, 2013). In addition, Mandarin has another typological feature not attested in English: the use of bare nouns. Unlike English, in which nouns are used either in a count or a mass form, nouns in Mandarin can appear in a bare form, with no grammatical category marking their countability. These typological features of Mandarin nouns and classifiers have generated heated discussion and debate regarding the expression and representation of countability in this language. In particular, it is widely accepted that Mandarin classifiers encode portioning-out (Chen, 2003; Au Yeung, 2005; Borer, 2005; Huang, 2009; Huang and Lee, 2009; Li, 2013; Zhang, 2013). The encoding of individuation, however, remains controversial. While some scholars contend that individuation is encoded in nouns (Chao, 1968; Fung, 1993; Doetjes, 1997; Cheng and Sybesma, 1998, 1999; Cheng et al., 2008; Liu, 2014), others argue that individuation is encoded and specified by Mandarin individual classifiers (Hansen, 1983; Bach, 1989; Graham, 1989; Krifka, 1995; Chierchia, 1998; Borer, 2005; Huang, 2009; Huang and Lee, 2009; Rothstein, 2010; Pelletier, 2012).
To address this theoretical controversy, the present study investigates portioning-out and individuation associated with bare nouns and classifiers co-occurring with non-interrogative Mandarin wh-pronominal phrases, an area of research that has barely drawn linguists' attention so far. We focus on a minimal pair of wh-pronominal phrases as used in conditionals (e.g., Cheng and Huang, 1996; Lin, 1996), namely, the bare wh-pronominal phrase duo-shao-N ('bare' in the sense that there is no co-occurring classifier) and the classifier-bearing wh-pronominal phrase duo-shao-ge-N (in which the individual classifier ge appears between the wh-pronoun duo-shao and the head noun). In two experiments, we used a Truth Value Judgment Task (Crain and Thornton, 1998) to test the interpretation of non-interrogative sentences containing these two wh-pronominal phrases by Mandarin-speaking adults and children. Our experimental data provide strong evidence for the view that (i) portioning-out and individuation in Mandarin are encoded in classifiers rather than in nouns; and (ii) portioning-out and individuation are two distinct linguistic components in the characterization of countability in Mandarin, with portioning-out taking precedence over individuation (Borer, 2005; Huang, 2009; Huang and Lee, 2009; Duan, 2011). In short, the present study contributes new data for adjudicating between theories of the count-mass issue in Mandarin. From a cross-linguistic perspective, the similar developmental pattern in the acquisition of portioning-out and individuation in Mandarin and English points to language universals in encoding these two semantic properties, despite the two languages' distinct means of encoding them.
The remainder of the present study is organized as follows. Sections "Portioning-Out and Individuation in Mandarin" and "Portioning-Out and Individuation in Mandarin wh-Pronominal Phrases" introduce portioning-out and individuation in Mandarin and in Mandarin wh-pronominal phrases, and Section "Portioning-Out and Individuation in Child Mandarin" introduces how these two semantic functions of Mandarin classifiers are acquired by Mandarin-speaking children. Section "Experiments" reports our two experiments. Section "General Discussion and Conclusion" discusses the experimental data and concludes the paper.
Portioning-Out and Individuation in Mandarin
Nouns in Mandarin are not systematically marked with count-mass syntax the way nouns in some Indo-European languages such as English are, and Mandarin nouns can be used in bare forms. On the other hand, the expression of countability is closely related to the Mandarin classifier system (e.g., Krifka, 1995; Cheng and Sybesma, 1998, 1999; Borer, 2005). These typological features of Mandarin nouns and classifiers have generated heated discussion and debate regarding the expression and representation of portioning-out and individuation in this language.
As introduced in Section "Introduction," the portioning-out function of Mandarin classifiers carves out a unit for counting. This function is evident in the interpretive differences between minimal pairs of bare nouns and classifier phrases. We have seen that while bare nouns are not portioned out and are thus underspecified in quantity, a classifier-noun phrase specifies a discrete unit of counting for the interpretation of the associated noun. The portioning-out function of classifiers, as the term is used here, is identified by various terms in traditional Chinese grammar, e.g., danwei ci 'unit word' (Lü, 1942), danwei mingci 'unit-nominal' (Wang, 1944), and shuwei ci 'counting-unit word' (Gao, 1948).
The portioning-out function is a basic function attested in all Mandarin classifiers, in the sense that each and every type of Mandarin classifier specifies a discrete unit of counting (Greenberg, 1972, p. 26; Krifka, 1995; Au Yeung, 2005; Huang, 2009; Huang and Lee, 2009; Zhang, 2013, pp. 36-38). Importantly, the unit of counting does not specify the weight or size of the quantified entities. Hence, the CL-N phrase ge pingguo in (3) above does not indicate whether the individual apple is big or small (cf. Snedeker, 2005, 2006; Bale and Barner, 2009). This point is important for our experimental design, as will become clear later.
The encoding of individuation in Mandarin is a controversial topic in the literature. Some scholars argue that individuation is encoded in nouns, and that Mandarin nouns are divided into count nouns and mass nouns based on their denotation (Doetjes, 1997; Cheng and Sybesma, 1998, 1999; Cheng et al., 2008; Liu, 2014). Count nouns are nouns that denote entities that "present themselves naturally in discrete, countable units" (Cheng and Sybesma, 1998, p. 385), such as pingguo 'apple.' Mass nouns, on the other hand, are nouns like shui 'water' whose denotation does not present itself naturally in discrete entities. As for the function of Mandarin classifiers, Cheng and Sybesma (1998, 1999) propose that individual classifiers (or 'count classifiers' in their terminology) "name" inherent units of counting that are encoded in the associated count nouns, or "make the semantic partitioning of count nouns syntactically visible" (p. 520) (cf. Doetjes, 1997). Other types of classifiers (or 'massifiers' in their terminology), by contrast, "create" units of counting. The distinction between count classifiers and massifiers is regarded as the realization of the grammatical count-mass distinction at the classifier level in Mandarin [see Tang, 2005; Li, 2013; Zhang, 2013 for arguments against Cheng and Sybesma's (1998, 1999) account]. The account proposed by Cheng and Sybesma is termed the 'lexico-syntactic approach' by Lin and Schaeffer (2018).
Differing from the lexico-syntactic approach, other scholars contend that bare nouns in Mandarin do not specify their count or mass status, and that it is classifiers that determine and specify the individuation of a noun (Hansen, 1983; Bach, 1989; Graham, 1989; Krifka, 1995; Chierchia, 1998; Borer, 2005; Huang, 2009; Huang and Lee, 2009; Rothstein, 2010; Pelletier, 2012). We focus here on the accounts proposed by Borer (2005) and Pelletier (2012). Both accounts argue that classifiers determine individuation in Mandarin, but they differ in their characterization of bare nouns, as detailed below.
In Borer's (2005) account, both count nouns and mass nouns are grammatically constructed. Thus, "all nouns, in all languages, are mass, and are in need of being portioned out, in some sense, before they can interact with the 'count' system" (p. 93). From a cross-linguistic perspective, Borer proposes that the portioning-out function is accomplished in Mandarin through the projection of count classifiers, on a par with the portioning-out function of plural morphology and of count determiners and quantifiers in English. In this account, bare nouns in Mandarin, in the absence of a portioning-out category, are taken to have only their default mass interpretation.
In Pelletier's (2012) account, the count-mass interpretation involves the interaction of four features at two levels: +COUNT_syn and +MASS_syn at the syntactic level, and +COUNT_sem and +MASS_sem at the semantic level. In particular, at the semantic level, "the semantic value of every lexical noun contains all the values of which the noun is true" (p. 19). Thus, both count and mass values are available, and nouns are unspecified in the lexicon for their count and mass interpretation before they enter the syntax. When the syntactic feature +COUNT_syn is introduced, the opposite semantic feature +MASS_sem on the noun is deleted, resulting in a count interpretation. The mass interpretation is obtained in a similar way, by introducing the syntactic feature +MASS_syn and deleting the semantic feature +COUNT_sem. Under Pelletier's (2012) account, number-marking languages introduce the feature +COUNT_syn via plural morphology and in combination with count determiners, and introduce +MASS_syn in combination with other determiners. Classifier languages, on the other hand, introduce +COUNT_syn or +MASS_syn in construction with count and mass classifiers, respectively. As for the interpretation of nouns in Mandarin, since neither +COUNT_sem nor +MASS_sem is deleted, bare nouns in Mandarin are flexible between count and mass readings. When co-occurring with a count classifier, nouns allow only the individual-denoting reading.
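Pelletier's feature-deletion mechanism can be restated as a small rule system. The encoding below is our own illustrative sketch (the feature names follow the text; the implementation details are assumptions, not Pelletier's formalism):

```python
# Sketch (our encoding) of Pelletier's (2012) feature-deletion account.
# Every lexical noun enters the derivation with both semantic features; a
# syntactic feature, if introduced, deletes the opposite semantic feature.

OPPOSITE = {"+COUNT_syn": "+MASS_sem", "+MASS_syn": "+COUNT_sem"}

def interpret(syn_feature=None):
    """Semantic features surviving after the syntax applies.

    +COUNT_syn is introduced e.g. by a count classifier, +MASS_syn by a
    mass classifier; None models a Mandarin bare noun, where nothing is
    deleted and both readings remain available.
    """
    features = {"+COUNT_sem", "+MASS_sem"}
    if syn_feature is not None:
        features.discard(OPPOSITE[syn_feature])
    return sorted(features)

print(interpret())               # bare noun: count and mass readings remain
print(interpret("+COUNT_syn"))   # count classifier: count reading only
print(interpret("+MASS_syn"))    # mass classifier: mass reading only
```

The sketch captures the account's central asymmetry with bare nouns: flexibility falls out from nothing being deleted, rather than from an ambiguity stipulated in the lexicon.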
Overall, portioning-out and individuation are two fundamental notions in the count-mass interpretation of nominal expressions in Mandarin. Scholars agree that Mandarin classifiers specify a discrete unit for counting and thus encode the portioning-out function. It remains controversial, however, whether individuation in Mandarin is encoded and specified in nouns or in classifiers, and how bare nouns are to be interpreted. In the present study, we investigate portioning-out and individuation in another under-investigated area, namely the Mandarin wh-pronominal system, and then discuss, after presenting our experimental data, which account better characterizes countability in Mandarin.
Portioning-Out and Individuation in Mandarin wh-Pronominal Phrases
To investigate portioning-out and individuation in the Mandarin wh-pronominal system, we focus on two wh-pronominal phrases, duo-shao-N and duo-shao-ge-N. The difference between them is that while duo-shao-N is 'bare,' in the sense that it includes no classifier, duo-shao-ge-N includes the individual classifier ge in its lexical morphology. Next we show that the interpretive differences between duo-shao-N and duo-shao-ge-N parallel those between bare nouns and CL(ge)-N phrases, as shown in examples (1) and (3).
Consider sentences (6)-(7), in which duo-shao-N and duo-shao-ge-N occur in conditional structures, a typical structure licensing the non-interrogative use of wh-pronouns (e.g., Cheng and Huang, 1996; Lin, 1996; Chierchia, 2000; Liu, 2016). These are the two types of sentences we used in our experiments, as will be shown later. Duo-shao-N in (6) and duo-shao-ge-N in (7) receive distinct semantic interpretations with respect to portioning-out and individuation. In (6), the bare wh-pronominal phrase duo-shao-li does not contain a linguistic element encoding the portioning-out and individuation functions. This phrase is therefore underspecified as to the discrete unit of counting, and the referents of the associated noun li 'pear' can be measured on multiple scales, such as a cardinal scale, a scale of weight, or a scale of volume. This explains why sentence (6) is ambiguous. On a substance-denoting reading, the sentence states that Dog and Cat ate the SAME AMOUNT of pear(s): the referent of li 'pear' is measured on a scale of weight, and other information such as the number and shape of the pear(s) is not specified. Alternatively, on an individual-denoting reading, the sentence means that Dog and Cat ate the SAME NUMBER of pears: the referent of li 'pear' is measured on a cardinal scale, and information such as the size or weight of the pears is not specified. These are two possible readings conveyed by sentence (6), among many others. They are also the readings we aim to trigger for the interpretation of the bare duo-shao-N in Experiment 1.
By contrast, in the interpretation of the duo-shao-ge-li phrases in sentence (7), a discrete unit of counting is specified, due to the portioning-out function of ge. Furthermore, the individuation function of ge requires that this discrete unit correspond to the natural unit of the individual objects denoted by the associated noun. Thus, duo-shao-ge-li in sentence (7) must denote individual pears, and this sentence can only have the individual-denoting reading: Dog and Cat ate the SAME NUMBER of pears. The portioning-out function of ge is examined in Experiment 1, and its individuation function in Experiment 2.
Overall, the interpretation of the sentences in (6)-(7), together with that of (1)-(3) in Section "Introduction," boils down to one parameter of variation: the presence/absence of the individual classifier ge determines their portioning-out and individuation. In the absence of such an individual classifier, bare nouns and duo-shao-N are underspecified for portioning-out and individuation, allowing both count readings (i.e., the individual-denoting reading) and mass readings (i.e., the substance-denoting reading). By contrast, the presence of the individual classifier ge in classifier-bearing nominal phrases and in duo-shao-ge-N determines that they can only convey count readings (i.e., the individual-denoting reading). Next, we will see how these two semantic functions are acquired by Mandarin-speaking children. We will first review some previous studies, then report our own experiments.
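This one-parameter summary can be sketched computationally. The sketch is ours, not the paper's: a reading is modeled as a scale of measurement, ge fixes a cardinal scale, and the bare phrase leaves the scale to context (we model only the two scales used in the experiments).

```python
# Illustrative sketch (ours) of the readings available to duo-shao-N and
# duo-shao-ge-N, modeled as measurement scales.

SCALE_TO_READING = {
    "cardinal": "individual-denoting (count)",
    "weight": "substance-denoting (mass)",
}

def available_readings(has_ge):
    """duo-shao-ge-N: ge's portioning-out and individuation functions fix
    a cardinal scale. Bare duo-shao-N: context may supply a cardinal scale,
    a scale of weight, or others (only these two are modeled here)."""
    scales = ["cardinal"] if has_ge else ["cardinal", "weight"]
    return [SCALE_TO_READING[s] for s in scales]

print(available_readings(has_ge=True))   # duo-shao-ge-N: count reading only
print(available_readings(has_ge=False))  # duo-shao-N: count or mass reading
```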
Portioning-Out and Individuation in Child Mandarin
It has been reported that the portioning-out and individuation functions of Mandarin classifiers develop independently in the course of language development. In particular, Huang (2009), Huang and Lee (2009), and Duan (2011) report that the portioning-out function emerges earlier than the individuation function in Mandarin-speaking children's interpretation of Mandarin classifiers. Unlike previous research, which investigates either the interpretation of classifiers (Chien et al., 2003; Li et al., 2008, 2010; Cheung et al., 2010) or that of bare nouns (Lin and Schaeffer, 2018), Huang and Lee and Duan examine the interpretation of both bare nouns and classifier-bearing structures. The details follow.

Huang (2009) and Huang and Lee (2009) investigated the interpretation of sentences containing three individual classifiers, ge, tiao and zhang, as compared with the interpretation of sentences containing bare nouns. According to them, the portioning-out function of these three individual classifiers is acquired by children as young as 3 years old, while their individuation function is not acquired until children reach 6 years of age. These two functions were tested using a picture verification task with 3-to-6-year-old children. Children were presented with minimal-pair sentences differing only in the presence or absence of an individual classifier, as exemplified in (8) (with the CL-N phrase ge lizi 'CL(ge)-pear') and (9) (with the bare noun lizi 'pear'). Each of these two sentences was tested against the five pictures shown in Figure 1. Pictures 1-3 were used to test the portioning-out function of ge, and Pictures 4-5 were used to test its individuation function.
(8) Dishang you ge lizi, zhuoshang ye you ge lizi
    ground-on exist CL(ge) pear table-on also exist CL(ge) pear
    'There is a pear on the ground, and there is also a pear on the table.'

(9) Dishang you lizi, zhuoshang ye you lizi
    ground-on exist pear table-on also exist pear
    'There is/are a pear/pears on the ground, and there is/are a pear/pears on the table.'

In response to the individual classifier-bearing sentence in (8), children as young as 3 years correctly accepted the sentence as a description of Picture 1 in Figure 1 (which shows one object on the table/on the floor, matching the structure ge lizi 'CL(ge)-pear'), but rejected the sentence for Picture 2 and Picture 3 (which show more than one object on the table). By contrast, in response to the bare noun-bearing sentence in (9), these children accepted the sentence for all three pictures (Pictures 1-3). Based on children's responses, the two studies concluded that young children are aware that a singular unit of counting is involved in individual classifier-bearing sentences like (8), but not necessarily in bare noun-bearing sentences like (9).
Moreover, when tested on the individuation function of individual classifiers, 3-to-5-year-old children judged Picture 4 and Picture 5 in Figure 1 (which show partial objects) to be good descriptions of both sentence (8) (with an individual classifier) and sentence (9) (with a bare noun), while 6-year-old children started being adult-like and accepted these two pictures only with the bare noun sentence (9), not with the individual classifier-bearing sentence (8). The younger children's non-adult-like behavior is attributed by the authors to the lack of the individuation function of individual classifiers in the early stage of language development: children of younger ages do not yet know that individual classifier structures must refer to individual whole objects.

Duan (2011) looked into the acquisition of collective classifiers. Collective classifiers in Mandarin encode the portioning-out and individuation functions, specifying that the associated nouns denote multiple individual objects (cf. Huang, 2009; Zhang, 2013). Duan reported that 6-to-10-year-old children exhibit adult-like responses when tested on the portioning-out function of collective classifiers, but that their individuation function is not acquired until children reach 10 years of age. She tested five collective classifiers: shuang 'pair,' dui 'pair,' qun 'group,' chuan 'string' and pai 'row.' To illustrate, we use dui 'pair' here. With regard to the portioning-out function of this collective classifier, children correctly accepted sentences like (10) in situations presenting a pair of objects, but rejected the same sentences in situations presenting one single object, three objects, or two pairs of objects (Experiment 1).
(10) Tupian shang you yi dui shouzhuo
     picture on exist one CL(dui) bracelet
     'There is a pair of bracelets in the picture.'

As for the individuation function associated with dui, dui-containing sentences were judged against three different pictures: one with two whole objects, one with two partial objects of the same shape, and one with two partial objects of different shapes (Experiment 5). Her findings indicate a developmental pattern. In the 6-year-old group, in addition to accepting the sentences for the whole-object pictures, a large percentage of children accepted the sentences when presented with the two kinds of partial-object pictures: 75% (for the pictures of partial objects of the same shape) and 44% (for the pictures of partial objects of different shapes). By comparison, in the 8-year-olds and 10-year-olds, the percentage of children allowing the test sentences to match the two kinds of partial-object situations dropped to around 30%, close to the adult level.
Summing up, previous research shows that the portioning-out function of Mandarin classifiers is acquired by children as young as 3 years old, while the individuation function is not acquired until they reach 6 years of age. These two functions thus develop independently in the course of language development, with the portioning-out function acquired earlier than the individuation function. The empirical data also show that bare nouns are underspecified for portioning-out and individuation, allowing both count and mass readings. We now turn to our experiments.
EXPERIMENTS
In what follows we present the two experiments we conducted to investigate the acquisition of portioning-out and individuation in the comprehension of Mandarin wh-pronominal phrases with and without the individual classifier ge: duo-shao-N and duo-shao-ge-N. Experiment 1 focuses on the portioning-out function of ge and the interpretation of bare nouns, and Experiment 2 on the individuation function of ge.
Experiment 1
Experiment 1 investigated the acquisition of portioning-out as involved in the interpretation of duo-shao-N and duo-shao-ge-N. The experimental design is as follows.
Test Sentences, Research Questions and Predictions
There are two types of test sentences, exhibiting the non-interrogative uses of duo-shao-N and duo-shao-ge-N in the conditional structure. Recall that this structure requires that the pair of wh-pronouns in the antecedent and in the consequent denote the same quantificational information, as exemplified in (6) and (7), repeated here as (11) and (12). As discussed earlier, due to the absence of a linguistic element encoding the portioning-out function and the individuation function, the bare duo-shao sentence in (11) is ambiguous, and the referent of the duo-shao-li phrase in this sentence can be measured on multiple scales. Among many other possible readings, this sentence can convey an individual-denoting reading ('Dog and Cat ate the same number of pears') or a substance-denoting reading ('Dog and Cat ate the same amount of pear(s)') when the context highlights an appropriate scale of measurement. More specifically, the individual-denoting reading can be triggered when a cardinal scale is under consideration, and the substance-denoting reading when a scale of weight is in focus. Our experiment provides appropriate contexts to trigger these two readings, as will be shown shortly.
By contrast, due to the portioning-out function of the individual classifier ge, the phrase duo-shao-ge-li in (12) specifies a discrete unit of counting. Furthermore, the individuation function of ge requires that this discrete unit of counting correspond to the inherent natural unit of individual pears denoted by the noun li 'pear.' In this case, the referents of the duo-shao-ge-li phrase can only be measured on a cardinal scale. Taken together, sentence (12) conveys only an individual-denoting reading ('Dog and Cat ate the same number of pears').
It is worth pointing out that the individual-denoting reading assigned to duo-shao-ge-li in (12) comes from a different source than the same individual-denoting reading assigned to the bare duo-shao-li in (11). As stated above, the individual-denoting reading in (11) is triggered by context (via a cardinal scale). The individual-denoting reading in sentence (12), however, is imposed by the morpho-syntax, i.e., the presence of the individual classifier ge. This morpho-syntactically driven reading cannot be overridden by the context. We thus expect that whatever context is provided, sentence (12) has only the individual-denoting reading. We will confirm this in our experiment.
Another point to clarify is that even though the individual classifier ge in duo-shao-ge-N phrases has both the portioning-out and the individuation function, Experiment 1 tested only the portioning-out function; we left the examination of the individuation function to Experiment 2. To examine the portioning-out function, we compare the interpretive differences between duo-shao-ge-N and duo-shao-N in portioning-out: while duo-shao-N allows multiple scales of measurement, duo-shao-ge-N specifies a discrete unit of counting. As we will show later, the experiment provides two different scales of measurement: a cardinal scale and a scale of weight. If participants allow both of these scales of measurement in their interpretation of the bare duo-shao-N sentences but only a cardinal scale for the duo-shao-ge-N sentences, we can conclude that they are aware of the portioning-out function of ge.
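The adult-like predictions for Experiment 1 can be sketched as a small decision rule. This is our simplification, not the paper's coding scheme: a story outcome is reduced to whether the two characters ate the same number of pears and/or the same weight of pear, and the example scenario in the comments is hypothetical.

```python
# Hedged sketch (ours) of the adult-like predictions for Experiment 1.

def predicted_acceptance(with_ge, same_number, same_weight):
    """Predicted judgment of a 'however many/much X ate, Y ate that much'
    conditional. With ge, only the cardinal scale is available, so only
    sameness of number matters; without ge, either a cardinal scale or a
    scale of weight can verify the sentence, depending on which scale the
    context makes salient."""
    if with_ge:
        return same_number
    return same_number or same_weight

# Hypothetical outcome: one big pear vs. two small pears of equal total weight.
print(predicted_acceptance(True, same_number=False, same_weight=True))
print(predicted_acceptance(False, same_number=False, same_weight=True))
```

On this sketch, the diagnostic cell is the same-weight/different-number outcome: the duo-shao-ge-N sentence is predicted to be rejected there, while the bare duo-shao-N sentence can still be accepted.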
Since morpho-syntax (i.e., the presence/absence of a classifier) and contextual information affect how these two wh-pronominal phrases specify a unit of portioning-out, we formulated two research questions for Experiment 1. First, we ask whether Mandarin-speaking children behave like adults in allowing both individual-denoting and substance-denoting readings when interpreting bare duo-shao sentences, but only individual-denoting readings when interpreting classifier-bearing duo-shao-ge sentences. If so, we ask further whether Mandarin-speaking children know that contextual manipulation affects the interpretation of bare duo-shao sentences but not that of classifier-bearing duo-shao-ge sentences. We predict that the answers to both questions are positive, given the early acquisition of the portioning-out function of Mandarin classifiers reviewed earlier (Huang, 2009; Huang and Lee, 2009; Duan, 2011).
Participants
We recruited 20 4-to-5-year-old Mandarin-speaking children from a kindergarten affiliated with Soochow University, Jiangsu Province, China. The child group ranged in age from 4;3.28 to 5;7.13, with a mean age of 4;11.26. Based on previous research on the acquisition of Mandarin classifiers and wh-pronouns (Li and Tang, 1991; Huang, 2009; Huang and Lee, 2009; Fan, 2012; Zhou et al., 2012), we estimated that this was the youngest age at which we could test for the portioning-out function of Mandarin classifiers associated with the two wh-pronouns. We also included a control group of twenty adults. The adult participants were postgraduate students from Soochow University.
Procedures
The experiment used a Truth Value Judgment Task (Crain and McKee, 1985; Crain and Thornton, 1998). The task involves two experimenters. One experimenter narrates the stories using toys and props. The other plays the role of a puppet who watches the story alongside the child. At the end of each story, the puppet is invited to explain to the child what has happened in the story. The child's task is to judge whether the puppet says the right thing or not. If the child informs the puppet that s/he is wrong, the child is asked to explain "what really happened?" The child participants were introduced to the task and tested individually. Each child was given one practice trial to become familiar with the task. Only those children who responded correctly in the practice trial proceeded to the test session. The adult participants were tested on the same stories, but in a group. After listening to the experimenter's narration of the stories, the adults were asked to indicate on an answer sheet whether the puppet was right or wrong. As with the child participants, the adult participants were asked to provide a justification if they judged that the puppet had offered an inaccurate description of the story. They were told to work on their sheets independently and were not allowed to discuss among themselves. The practice trial was also given to the adult participants at the beginning of the testing.
Test Conditions
There were two test conditions, representing two distinct contexts associated with portioning-out: the number of individual objects and the amount/size of entities (cf. Snedeker, 2005, 2006; Bale and Barner, 2009). Test Condition 1 was designed to create an amount-oriented context by comparing the AMOUNT of two entities which differ in size (e.g., eating two big pumpkins versus eating two small pumpkins); thus, a scale of weight is embedded in the design of this condition. Test Condition 2 was designed to create an individual-oriented context, highlighting the EQUAL NUMBER of entities acted upon by two animal characters (e.g., each of the animals makes two flowers, and all the flowers are considered good art works, regardless of their size); thus, a cardinal scale is embedded in the design of this condition.
Each of the two types of test sentences shown in (11) and (12) was tested in the two distinct contexts. We expect that the sentences containing the bare wh-pronoun duo-shao [e.g., sentence in (11)] are ambiguous, and aim to trigger two distinct readings upon our contextual manipulation: a substance-denoting reading in the amount-oriented context and an individual-denoting reading in the individual-oriented context. On the other hand, we expect that the sentences containing the classifier-bearing wh-pronoun duo-shao-ge [e.g., sentence in (12)] will select only the individual-denoting reading, no matter how the context is manipulated. The experimental design of Experiment 1 is summarized in Table 1 below.
Test Materials
From Table 1, we can see that two independent variables are created in the experimental design, namely the morpho-syntactic factor (i.e., presence/absence of the individual classifier ge) and contextual information (i.e., amount-oriented context versus individual-oriented context). Thus, Experiment 1 is designed to investigate how these two factors determine and influence the portioning-out associated with the wh-pronouns duo-shao and duo-shao-ge. We will now illustrate the experimental design with some typical trials.
First, let us consider the amount-oriented context, as shown in (13). In this story, there were six animal characters eating three kinds of vegetables. Among these six animals, three animals each ate two big vegetables and became very full, while the other three animals each ate two small vegetables of the same kind and were still hungry. With this design, these six animals constituted three pairs, with each pair eating two vegetables of the same kind, but of a different size (i.e., Elephant eating two big pumpkins versus Monkey eating two small pumpkins; Rabbit eating two big carrots versus Horse eating two small carrots; Giraffe eating two big cabbages versus Dog eating two small cabbages). Importantly, the uneven amount of food eaten by each pair of the animal characters is significant, as the big amount made one animal full, while the small amount did not relieve the other animal's hunger at all. The last scenario of the story is shown in the picture in Figure 2.
Rabbit, Elephant, Giraffe, Horse, Monkey, and Dog went to buy vegetables. Rabbit, Elephant, and Giraffe each bought two big vegetables: Rabbit bought two big carrots, Elephant two big pumpkins, and Giraffe two big cabbages; they ate all their big vegetables, and became very full. Horse, Monkey, and Dog each bought two small vegetables: Horse bought two small carrots, Monkey two small pumpkins, and Dog two small cabbages. They ate the small vegetables, but were still hungry.
Against this kind of scenario, both duo-shao sentences and duo-shao-ge sentences were tested with the same participants. In the case of duo-shao sentences, the puppet was asked to use three duo-shao sentences to compare the quantity of vegetables eaten by the three pairs of animals. This allowed us to introduce three tokens of the duo-shao sentences in a single story. We use the sentence in (14) as an example to illustrate the structure of the test sentences; the other two sentences are of the same sentence structure. As discussed earlier, the duo-shao sentences are ambiguous between the individual-denoting reading ('X and Y ate the same number of vegetables') and the substance-denoting reading ['X and Y ate the same amount of vegetable(s)']. However, since the amount-oriented context underscores the amount/volume of the vegetables eaten by each pair of animal characters, the substance-denoting reading (i.e., 'X and Y ate the same amount of vegetable(s)') should be the favored reading in this amount-oriented context, if participants are sensitive to the context. Since this reading did not match the situation in the story [as X and Y in the story ate different amounts of vegetable(s)], the test sentences were false descriptions of the story and participants were expected to reject the duo-shao sentences in this condition. In the same vegetable-eating scenarios as shown in (13), the classifier-bearing duo-shao-ge sentences were also presented, as exemplified in (15) below. (Note that in the actual testing, the duo-shao-ge sentences were tested in separate sessions and the animal characters were also changed to different ones; we use the same animal names as in (14) here for ease of exposition.) Due to the presence of the individual classifier ge in sentence (15), this sentence allows only the individual-denoting reading: Rabbit and Horse ate the same number of carrots.
Obviously, this sentence is a true description of the story, as these two animals did eat the same number of carrots (i.e., two carrots). Thus, we expected that participants would accept the duo-shao-ge sentences in this amount-oriented context.
To sum up, in the amount-oriented context, we expected that participants would reject the bare duo-shao sentences, assigning the substance-denoting reading, but accept the classifier-bearing duo-shao-ge sentences, exclusively assigning the individual-denoting reading in the same amount-oriented context. Now consider the story designed for the individual-oriented context, as shown in (16). In this story, there were six animal characters doing three kinds of paper crafts: three animals each made two big paper crafts, and the other three animals each made two small paper crafts of the same kinds. Therefore, the six animals constituted three pairs, with each pair making two paper crafts of the same kind, but of different sizes (Rainbow Bird made two big flowers and Duck made two small flowers; White Bird made two big books and Penguin made two small books; Black Bird made two big butterflies and Blue Bird made two small butterflies). The size difference did not affect the assessment of the animals' work, as all of the paper crafts were greatly cherished by Fairy. The last scenario of the story is shown in Figure 3.
(16) Story for the individual-oriented context (Condition 2).
Fairy is going to have her birthday. To celebrate it, her friends Rainbow Bird, White Bird, Black Bird, Duck, Penguin, and Blue Bird discuss making some gifts for her. They decide to make three kinds of paper crafts: Rainbow Bird makes two big red flowers and Duck two small orange flowers; White Bird makes two big letter books and Penguin two small number books; Black Bird makes two big red butterflies and Blue Bird two small blue butterflies. Fairy likes all of the paper crafts made by her friends, and kisses each of them.
As we did in the amount-oriented context, both duo-shao sentences and duo-shao-ge sentences were used in this individual-oriented context to compare the performance of the three pairs of animals. In the case of duo-shao sentences, the puppet produced three duo-shao sentences for each participant at the end of the story. An example sentence is given in (17) for illustration. In this individual-oriented context, the individual-denoting reading (i.e., 'X and Y made the same number of paper crafts') should be preferred, even though the duo-shao sentence is ambiguous. This reading matched the story situation (as X and Y did make the same number of paper crafts), so the test sentences were true descriptions of the story and participants were expected to accept them in this condition. In the same craft-making scenarios as shown in (16), the classifier-bearing duo-shao-ge sentences were also presented, as exemplified in (18) below. (Again, in the actual testing, the duo-shao-ge sentences were tested in separate sessions and the animal characters were changed to different ones; we use the same animal names as in (17) here for ease of exposition.) In addition to the test sentences, the puppet also produced a filler sentence before or after each test sentence. The filler sentences were true or false. They served to obscure the research purpose of the study, and to ensure that children remained attentive during the task.
To wrap up, we designed a vegetable-eating story and a paper craft-making story for the bare duo-shao sentences, and two similar stories (i.e., with only a change of animal characters) for the classifier-bearing duo-shao-ge sentences. Overall, we had four stories in this experiment. We adopted a within-subject design, testing each participant with the two types of test sentences in the two test conditions. That is, each participant was tested with all four stories. For both the child group and the adult control group, we had 60 test items (3 test sentences × 20 subjects) for each type of test sentence in each condition, and the same number of filler sentences. The numbers of 'Yes' and 'No' responses were counterbalanced. The two types of test sentences were counterbalanced among the participants and tested in two different sessions, at least half a day apart. Each session consisted of two stories, one presenting the amount-oriented context (Condition 1) and the other the individual-oriented context (Condition 2); the ordering of the two stories was counterbalanced among the participants. Each testing session lasted about 15 min.
Results
Let us first consider the responses to the classifier-containing duo-shao-ge sentences. Both children and adults accepted the test sentences over 98% of the time, in both the amount-oriented context [children and adults: 100% (60/60 trials)] and the individual-oriented context [children: 98% (59/60 trials); adults: 100% (60/60 trials)] (Figure 4). The acceptance of these test sentences indicates that the participants quantified over a cardinal scale and made the quantity judgment based on the number of individual objects in the two test conditions, as the two animals in question acted upon the same number of individual objects in our story situations (e.g., one animal ate two big strawberries, and the other ate two small strawberries). The data hence suggest that both children and adults assigned the individual-denoting reading to the duo-shao-ge sentences in the two distinct contexts; the interpretation of this type of sentence is thus independent of context. This confirms our theoretical analysis of the wh-pronominal phrase duo-shao-ge-N.
The experimental data on the bare duo-shao sentences present a more complicated picture (see Figure 5). Consider the adults' data first. In the individual-oriented context, they accepted the duo-shao sentences 98% of the time (59/60 trials). This suggests that adults quantified over the number of individual objects in the story situations and assigned the individual-denoting reading to the duo-shao sentences in this context. Conversely, in the amount-oriented context, adults rejected the duo-shao sentences 80% of the time (48/60 trials). In justifying their rejections of the puppet's statements, they pointed out that the two animals in question acted upon uneven amounts of objects. For instance, in justifying their rejection of sentence (14), participants pointed out that Rabbit ate the big carrots, while Horse ate the small carrots. The high rejection rate (80%) in the amount-oriented context hence indicates that the majority of the adults quantified over the amount of objects and assigned the substance-denoting reading to the duo-shao sentences in this context. A Wilcoxon test shows that adults chose the individual-denoting reading for the duo-shao sentences significantly less often in the amount-oriented context than in the individual-oriented context (20% vs. 98%, Z = 3.9, p < 0.001). Thus, we conclude that the adults made a clear distinction in their responses to the duo-shao sentences in these two conditions. By examining each adult participant's responses to the duo-shao sentences across the two test conditions, we found that 80% of the adults (16 out of 20) exhibited both the individual-denoting and substance-denoting readings in their interpretation of the duo-shao sentences: they assigned the substance-denoting reading in the amount-oriented context and the individual-denoting reading in the individual-oriented context.
We call this pattern of responses Pattern I: a combination of the individual-denoting and substance-denoting readings. Moreover, 20% of the adults (4 out of 20) accepted the duo-shao sentences across the two distinct contexts and assigned exclusively the individual-denoting reading to the duo-shao sentences. These four adults showed a preference for the individual-denoting reading, and did not change their interpretation of this type of sentence even in the amount-oriented context. We call this Pattern II: an individual-denoting reading.
Now consider children's responses to the duo-shao sentences. In the individual-oriented context, they accepted the test sentences 85% of the time (51/60 trials), assigning the individual-denoting reading to the duo-shao sentences in this context. In the amount-oriented context, children rejected the test sentences 35% of the time (21/60 trials), assigning the substance-denoting reading in this context (Figure 5). Their rejections were justified by mentioning the uneven amounts of objects in the stories, just as the adults did in the same situations. This means that 65% of the time children still accessed the individual-denoting reading in the amount-oriented context. Moreover, a Mann-Whitney test shows that children assigned the individual-denoting reading to the duo-shao sentences in the amount-oriented context significantly more often than adults did in the same context (children: 65%; adults: 20%; Z = 2.842, p < 0.05). Nevertheless, the children made a clear distinction in their responses to the duo-shao sentences in the two conditions, as they assigned the individual-denoting reading to the duo-shao sentences significantly less often in the amount-oriented context than in the individual-oriented context (65% vs. 85%, Wilcoxon test, Z = 2.0, p < 0.05).
Three patterns of responses were found in the children's interpretation of the duo-shao sentences, including the two patterns identified in the adult group and an additional pattern. First, 20% of the children (4/20) rejected the duo-shao sentences in the amount-oriented context but accepted them in the individual-oriented context, exhibiting both the substance-denoting reading and the individual-denoting reading. These children behaved like the majority of the adult group, and belong to Pattern I as defined above. Second, 65% of the children (13/20) accepted the duo-shao sentences in the two distinct contexts, assigning only the individual-denoting reading to the sentences (Pattern II). Third, 15% of the children (3/20) rejected the duo-shao sentences in the two distinct contexts, and assigned exclusively the substance-denoting reading to the sentences. These children justified their rejections by pointing out the uneven "amount" of objects in the amount-oriented context (e.g., Rabbit ate the big carrots, but Horse ate the small carrots), and the different sizes of objects in the individual-oriented context (e.g., Rainbow Bird made the big flowers, but Duck made the small flowers). We call this pattern, not attested in the adult group, Pattern III: a substance-denoting reading.
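For readers who wish to verify the child-versus-adult comparison, the Mann-Whitney statistic can be reconstructed from the per-participant response patterns reported above (13/20 children vs. 4/20 adults accepted all three duo-shao sentences in the amount-oriented context). The sketch below is illustrative only: the per-participant scores and the helper `mann_whitney_z` are our own reconstruction, not the raw data or analysis script, and the normal approximation with tie correction is one of several ways such a Z value can be computed.

```python
import math

# Hypothetical per-participant scores: number of individual-denoting
# responses (out of 3 trials) in the amount-oriented context, reconstructed
# from the response patterns reported in the text, not from raw data.
children = [3] * 13 + [0] * 7   # 39/60 trials = 65%
adults   = [3] * 4  + [0] * 16  # 12/60 trials = 20%

def mann_whitney_z(x, y):
    """Normal approximation of the Mann-Whitney U statistic, with tie correction."""
    n1, n2 = len(x), len(y)
    n = n1 + n2
    # U = number of (x_i, y_j) pairs with x_i > y_j, counting ties as 1/2.
    u = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a in x for b in y)
    mu = n1 * n2 / 2.0
    # Tie correction: sum of (t^3 - t) over groups of tied values.
    counts = {}
    for v in x + y:
        counts[v] = counts.get(v, 0) + 1
    ties = sum(t ** 3 - t for t in counts.values())
    sigma = math.sqrt(n1 * n2 / 12.0 * ((n + 1) - ties / (n * (n - 1))))
    return (u - mu) / sigma

z = mann_whitney_z(children, adults)
print(round(z, 3))  # prints 2.842
```

Under this reconstruction the statistic comes out at Z ≈ 2.842, in line with the value reported above; any residual uncertainty lies in the reconstructed per-participant scores, not in the formula.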
The distribution of the three patterns of responses is summarized in Figure 6.
Comparing the individual data of the adult group and the child group, we can conclude that the majority of the adult group are sensitive to the distinct contexts provided, and assign the substance-denoting reading and the individual-denoting reading to the bare duo-shao sentences in the respective contexts. Moreover, 4-to-5-year-old Mandarin-speaking children have started assigning the individual-denoting and substance-denoting readings to the bare duo-shao sentences, but they are still not as sensitive to the contextual information as adults are. Children showed a preference for the individual-denoting reading in their interpretation of the duo-shao sentences.
Discussion
Now we are ready to answer the two related research questions we raised for Experiment 1. Our first question was whether Mandarin-speaking children would behave like adults and allow both individual-denoting and substance-denoting readings in interpreting bare duo-shao sentences, but would allow only individual-denoting readings in interpreting classifier-bearing duo-shao-ge sentences. Our second question was whether Mandarin-speaking children know that contextual manipulation affects the interpretation of bare duo-shao sentences but not that of classifier-bearing duo-shao-ge sentences.
The experimental results allow us to give positive answers to these two questions. First, children treated the classifier-bearing duo-shao-ge sentences differently from the bare duo-shao sentences in specifying a unit of counting. They assigned exclusively the individual-denoting reading to the classifier-bearing sentences across the two distinct contexts. By contrast, they have started assigning multiple readings to the bare duo-shao sentences, exhibiting three distinct patterns of responses in their interpretation of this type of sentence. Furthermore, even though children showed a preference for the individual-denoting reading in their interpretation of the duo-shao sentences in the amount-oriented context, a Wilcoxon test shows that this percentage (65%) is still significantly lower than the percentage (100%) of individual-denoting readings that they assigned to the duo-shao-ge sentences in the same amount-oriented context (Z = 2.646, p < 0.01). The multiple readings assigned to the bare duo-shao sentences indicate that multiple scales of measurement are adopted in the interpretation of this type of sentence, due to the lack of a linguistic element encoding the portioning-out function. On the other hand, the assignment of the sole individual-denoting reading to the duo-shao-ge sentences indicates that only a cardinal scale is adopted, due to the portioning-out function of ge. Therefore, we conclude that Mandarin-speaking children are well aware of the portioning-out function of Mandarin classifiers, and are sensitive to the interpretive differences caused by the presence/absence of a classifier in their interpretation of these Mandarin wh-pronominal phrases.
Second, children also showed that contextual manipulation (amount-oriented context vs. individual-oriented context) affected their interpretation of the bare duo-shao sentences, but not of the classifier-bearing duo-shao-ge sentences. In interpreting the duo-shao-ge sentences, they behaved like adults and rigidly assigned the individual-denoting reading in both the amount-oriented and individual-oriented contexts. On the other hand, children have started assigning both the individual-denoting reading and the substance-denoting reading to the duo-shao sentences in the appropriate contexts, even though they were not yet as sensitive to the contextual information as adults were.
This brings us to an issue raised by one reviewer, namely, why a few adults (4 out of 20) and more than half of the children (65%) assigned the individual-denoting reading to the bare duo-shao sentences in the amount-oriented context. Although we do not have an explicit answer to this question, we think these participants' behavior still fits with our proposal. According to our account, duo-shao is in fact ambiguous between the individual-denoting reading and the substance-denoting reading, so the assignment of the alternative readings largely depends on how sensitive participants are to the specific contexts we designed. Even though we aimed to trigger the substance-denoting reading in the amount-oriented context and the individual-denoting reading in the individual-oriented context, the percentage of either interpretation is never at ceiling. The sentence remains ambiguous, and a preference for one particular reading can be hard to override. Absolute accuracy applies only to sentences that are not ambiguous at all, like the duo-shao-ge sentences shown above.
This concludes our report of Experiment 1.
Experiment 2
Let us now turn to our second experiment, designed to investigate whether and at what age children are able to apply the individuation function of the classifier ge in interpreting the wh-pronominal phrase duo-shao-ge-N. This function determines that phrases containing duo-shao-ge can only refer to whole objects (and not their parts).
Test Sentences
A typical test sentence is shown in (19). In this sentence, the wh-pronominal phrase duo-shao-ge-N is contained in the same conditional structure we used in Experiment 1. Due to the individuation function of the classifier ge, the phrase duo-shao-ge-xi'gua in (19) has to denote individual watermelons, and cannot denote non-individuals such as slices of watermelons. Therefore, sentence (19) can only receive an individual-denoting reading: 'Mummy Giraffe and Baby Giraffe ate the same number of individual watermelons.'
Participants and Experimental Method
Two groups of 20 children participated in this experiment. The first group ranged in age from 4;3.11 to 5;5.6 (mean age 5;1.3); the second group ranged in age from 6;4.15 to 6;9.25 (mean age 6;7.23). We call these two groups the '5-year-old group' and the '6-year-old group,' respectively. We also included a control group of twenty adults, with a mean age of 20 years. The child and adult participants were not the same as those in Experiment 1. We adopted the same experimental method used in Experiment 1, namely, the Truth Value Judgment Task. As in Experiment 1, we tested the child participants individually and the adult participants in a group. There was a practice trial to familiarize the participants with the task, and only those participants who responded correctly in the practice trial proceeded to the test session.
Test Conditions and Materials
There were two test conditions, the Whole Object Condition and the Partial Object Condition, corresponding to two events of a story. In the Whole Object Condition, three pairs of characters, i.e., Mummy Giraffe and Baby Giraffe, Mummy Dog and Baby Dog, and a boy and a girl, went to buy food for a picnic. While the first member of each pair bought two food items, the second bought one food item of the same kind and size. The English translation of the story script is shown in (20). The last scenario of the story is shown in Figure 7.
(20) A boy and a girl planned to have a picnic with their animal friends: Mummy Giraffe and Baby Giraffe, Mummy Dog and Baby Dog. They went to a supermarket to buy food. Mummy Giraffe bought two watermelons while Baby Giraffe bought one watermelon; Mummy Dog bought two sweet potatoes while Baby Dog bought one sweet potato; the boy bought two lemons while the girl bought one lemon.
Right after the narration of this part of the story, the puppet was invited to say what had happened in the story. The puppet replied by uttering three test sentences containing duo-shao-ge. An example is given in (21), which compares the number of watermelons bought by Mummy Giraffe and Baby Giraffe. The other two sentences, which we omit here, are of the same structure. On the individual-denoting reading ('Mummy Giraffe and Baby Giraffe bought the same number of watermelons'), the example sentence (21) is a false description of the story and should be rejected, because Mummy Giraffe bought two watermelons while Baby Giraffe bought only one watermelon in the story.
In the Partial Object Condition, the same three pairs of characters each ate one food item, but they ate the food in two different ways: while one member of the pair ate his/her food in one gulp, the other cut the food into two pieces and ate the two pieces separately. The English translation of this part of the story is given in (22). The last scenario of the story is shown in Figure 8.
(22) The boy, the girl and their animal friends got tired, so they took a nap. When they were fast asleep, a mouse came to steal their food. The mouse stole a watermelon from Mummy Giraffe, a sweet potato from Mummy Dog, and a lemon from the boy. After a while, the boy, the girl and their animal friends woke up, and found one of their food items had been stolen. So they started their picnic immediately. The boy, Mummy Giraffe and Mummy Dog were very hungry, and ate their food in one gulp. The girl, Baby Giraffe and Baby Dog cut their food in half, and then each of them ate the two pieces one by one.
After this part of the story, the puppet was invited again to state what had happened. The puppet produced another set of three duo-shao-ge sentences. An example is given in (23), which compares the number of watermelons eaten by Mummy Giraffe and Baby Giraffe. The other two sentences, which we omit here due to the limit of space, are of the same structure comparing the number of vegetables eaten by two other pairs of characters. Sentence (23) conveys the individual-denoting reading 'Mummy Giraffe and Baby Giraffe ate the same number of watermelons' and it is a true description of the story: the two halves eaten by Baby Giraffe came from a whole watermelon, and hence Baby Giraffe ate the same number of watermelons as Mummy Giraffe did, who did not cut her watermelon and ate it in one gulp. Therefore, adults were expected to accept the test sentences in this test condition.
As for children, however, considering the possible delay of the individuation function of Mandarin classifiers (cf. Huang, 2009;Huang and Lee, 2009;Duan, 2011), we predict that young children might reject the three test sentences in the Partial Object Condition. To exemplify with sentence (23), if young children are not yet aware of the individuation function of the classifier ge, they would then quantify over discrete entities and count two halves of the watermelon eaten by Baby Giraffe as 'two watermelons.' Therefore, for young children Baby Giraffe did not eat the same number of watermelons as Mummy Giraffe did, who ate one whole watermelon. This would lead to their rejection of the target sentence, which states that the two characters ate the same number of watermelons.
In addition, the puppet produced three simple filler sentences, as shown in (24)–(26). The three sentences comment upon the number of food items eaten by the characters in the story who cut their food in half. For the reason explained above, if children have acquired the individuation function of the classifier ge, they should reject sentences (24) and (26), which state that the animal characters ate two vegetables, and accept sentence (25), which states that the animal character ate one vegetable. Otherwise, they should accept (24) and (26), but reject (25). The filler sentences thus give us an additional window into the individuation associated with duo-shao-ge phrases.
Results
In the Whole Object Condition, both adults and children correctly rejected the test sentences 100% of the time (60/60 trials). They justified their rejections by mentioning the uneven number of food items bought by the two characters in each test sentence. For example, a typical justification for the rejection of sentence (21) is that while Mummy Giraffe bought two watermelons, Baby Giraffe bought only one watermelon.
In the Partial Object Condition, adults accepted the test sentences 95% of the time (57/60 trials). The high acceptance of the test sentences in this condition indicates that adults considered two halves as one individual object, thus assigning the individuation function to the individual classifier ge. Children exhibited a developmental pattern in their responses to the test sentences in this condition. In particular, the group of 5-year-old children accepted the test sentences only 35% of the time (21/60 trials), but the percentage increased to 90% (54/60 trials) in the 6-year-old group. A Mann-Whitney test shows that the 6-year-old children accepted the test sentences significantly more often than the 5-year-old children (Z = 3.547, p < 0.01), but there was no significant difference between the 6-year-old children and adults (Z = 0.593, p > 0.05). This result shows that children do not acquire the individuation function of the individual classifier ge until they reach the age of 6. This generalization is confirmed by children's justifications. For instance, when rejecting sentence (23), one child stated that Baby Giraffe had eaten 'two watermelons,' as shown in (27). Clearly, the child used the individual classifier phrase liang ge xi'gua 'two watermelons' to refer to the two halves of a whole watermelon eaten by Baby Giraffe. Thus, younger children who had not acquired the individuation function of ge quantified over discrete entities, and rejected the test sentences in the Partial Object Condition just as they did in the Whole Object Condition. The experimental data are summarized in Figure 9 below. Further confirmation comes from children's responses to sentences (24)–(26). Two kinds of responses were observed. First, the children who rejected the test sentences in the Partial Object Condition accepted sentences (24) and (26), and rejected sentence (25).
These children had not acquired the individuation function of ge, allowing duo-shao-ge to quantify over discrete entities and counting two halves of a food item as 'two food items.' 65% of the children (13 out of 20) from the 5-year-old group exhibited this pattern of response. Second, those who correctly accepted the test sentences in the Partial Object Condition rejected the sentences (24) and (26) but accepted the filler sentence (25) as adults did. These children exhibited answers underlining an adult-like grammar in both kinds of sentences, and assigned the individuation function to ge. Hence, they considered two halves as one single individual object in their comprehension of the duo-shao-ge phrases. 90% of the children (18 out of 20) in the 6-year-old group displayed this pattern of response. To wrap up, Experiment 2 shows that the individuation function of Mandarin classifiers in duo-shao-ge is delayed in Mandarin-speaking children. Children do not acquire this function until they reach the age of 6. These results are consistent with the findings from previous studies on the acquisition of the individuation function of Mandarin classifiers (Huang, 2009; Huang and Lee, 2009; Duan, 2011).
GENERAL DISCUSSION AND CONCLUSION
In the present study, we conducted two experiments to investigate the portioning-out and individuation functions in the minimal pairs of wh-pronominal phrases with and without the classifier ge, i.e., duo-shao-N and duo-shao-ge-N. In Experiment 1, we found that 5-year-old Mandarin-speaking children were sensitive to the interpretive differences in portioning-out between these two wh-pronominal phrases. They assigned the individual-denoting and substance-denoting readings to duo-shao-N, but only the individual-denoting reading to duo-shao-ge-N. This indicates children's awareness of the portioning-out function associated with the classifier ge. In Experiment 2, we found that Mandarin-speaking children quantified over partial entities rather than individual objects in their comprehension of duo-shao-ge-N before they reached the age of 6. We attribute these non-adult responses to the delay of the individuation function of classifiers. Taken together, our experimental data show that Mandarin-speaking children, like adults, allow both count and mass readings in their interpretation of the bare wh-pronominal phrase duo-shao-N, and that the portioning-out and individuation functions of the individual classifier ge associated with duo-shao-ge-N develop independently in the course of language development, with the portioning-out function taking precedence over the individuation function.
Based on our experimental findings, the present study can help adjudicate between the main alternative accounts of the Mandarin count-mass issue, as reviewed in Section "Portioning-Out and Individuation in Mandarin." First of all, our experimental data give support to the view that individuation is encoded in classifiers rather than in nouns (Hansen, 1983; Bach, 1989; Graham, 1989; Krifka, 1995; Chierchia, 1998; Borer, 2005; Huang, 2009; Huang and Lee, 2009; Rothstein, 2010; Pelletier, 2012). As clearly shown in our Experiment 2, individuation is unambiguously specified with the presence of the individual classifier ge in the sentences containing duo-shao-ge, but not in the sentences containing the bare wh-pronoun duo-shao. Such a contrast between the minimal pair duo-shao and duo-shao-ge allows us to see that bare elements like duo-shao-N phrases do not specify a fixed count or mass interpretation, and it is classifiers that play the decisive role of encoding individuation in Mandarin. This brings us to our comments on the lexico-syntactic account proposed by Cheng and Sybesma (1998), which claims that individuation is specified in nouns rather than in classifiers (see section "Portioning-Out and Individuation in Mandarin"). This account would predict that only individual-denoting readings are available for the nouns used in our experiment (i.e., nangua 'pumpkin,' huluobo 'carrot' and baicai 'cabbage'). According to this account, these nouns would be classified as count nouns, as they "present themselves naturally in discrete, countable units," and the function of individual classifiers is merely to "name" the natural unit of counting and make the semantic partitioning of the count nouns syntactically visible.
In other words, contra our experimental findings, the lexico-syntactic account would not expect an interpretive difference between the nouns that co-occur with duo-shao and the nouns that co-occur with duo-shao-ge; nor would this account expect the multiple readings of the nouns co-occurring with duo-shao. Therefore, the lexico-syntactic account proposed by Cheng and Sybesma cannot explain our experimental data, and the present study poses a challenge to this account.
Furthermore, both Borer (2005) and Pelletier (2012) hold that individuation is specified by classifiers, but the present study offers empirical evidence showing that Pelletier's account fares better than Borer's account in the characterization of bare nouns in Mandarin. As reviewed in Section "Portioning-Out and Individuation in Mandarin," while Borer argues that bare nouns are mass by default, Pelletier holds that both count and mass interpretations are available for bare nouns. In our Experiment 1, both count and mass readings are attested in Mandarin-speaking children's and adults' interpretation of the sentences containing duo-shao.
In short, among the three accounts of the Mandarin count-mass issue, Pelletier (2012) is the one that is consistent with our experimental data. All the main ideas of this account (i.e., individuation is specified by classifiers, and Mandarin bare nouns allow both count and mass interpretations) are empirically supported in our experiments. In the literature, a similar discussion of the interpretation of bare nouns can be found in Lin and Schaeffer (2018). In this study, 2- to 5-year-old Mandarin-speaking children and adults are reported to assign both count and mass readings to three types of bare nouns, including count nouns (e.g., qiu 'ball'), mass nouns (e.g., mianfen 'flour') and flexible nouns (e.g., shengzi 'string'), even though various preferences are identified due to the factor of linguistic experience. However, this study only tested Mandarin-speaking children's and adults' interpretation of bare nouns. In our experiments, we tested the interpretation of both bare nouns and classifier-bearing phrases. In this regard, we provide new and more convincing data for the study of the Mandarin count-mass issue.
Our experimental data are consistent with the findings from Huang (2009), Huang and Lee (2009), and Duan (2011), which report that the portioning-out function of Mandarin classifiers is acquired earlier than their individuation function (see section "Portioning-Out and Individuation in Mandarin wh-Pronominal Phrases"). From a cross-linguistic perspective, our experimental data are also consistent with the findings on the asymmetric acquisition of portioning-out and individuation in the interpretation of English plural morphology (see section "Introduction"). Thus, in both Mandarin and English, the portioning-out function emerges earlier than the individuation function in the course of language development. The asymmetric development of these two functions suggests, first of all, that the portioning-out function is more fundamental than the individuation function (Au Yeung, 2005), considering the assumption that core linguistic properties are part of the initial state of our grammar and occur early in the course of language development (Crain, 2012). This generalization is also compatible with the observation that the portioning-out function is the basic function of all Mandarin classifiers, while the individuation function is a special function encoded only in certain classifiers such as individual classifiers and collective classifiers (see section "Portioning-Out and Individuation in Mandarin wh-Pronominal Phrases"). Moreover, the cross-linguistic parallel suggests that languages may differ in their ways of encoding portioning-out and individuation by using typologically distinct formal categories (e.g., plural morphology and count determiners in English, individual classifiers in Mandarin), but what these formal categories convey is similar in semantic function.
Before we conclude the paper, we consider a remaining issue raised by the reviewer about children's non-adult responses in Experiment 2, i.e., young children's quantifying over partial objects and counting two halves of a watermelon as 'two watermelons.' We attribute the lack of the 'wholeness requirement' to the delay of the individuation function of the individual classifier ge in children's early grammar. However, the reviewer asked how we can exclude the possibility that it is actually a delay of "knowledge of the world": young children may not know how complete an object needs to be for it to be considered an individual object.
We do not have independent data of our own to rule out this possibility, but we can address it by drawing on the experimental findings reported by Brooks et al. (2011). As we introduced in Section "Introduction," this study found that 4-year-old English-speaking children treated pieces of broken things as units of counting when interpreting count quantifiers like more, every, both, and when labeling sets using plural morphology (Experiment 1). Furthermore, this study found that when two familiar objects (e.g., two cups) were glued together, 4-year-old children counted the glued things as two rather than one (Experiment 2). Moreover, 4-year-old children did not include parts of objects with specific names (e.g., wheels of a bicycle) in their counting (Experiment 3) (see Srinivasan et al., 2013 for similar findings). Clearly, these experimental data indicate that 4-year-old children knew well what constitutes an individual object. Therefore, we believe that 4-year-old English-speaking children accepted broken objects as units of counting, not because they did not know how complete an object should be in order to be called an individual object, but because they had not acquired the individuation function of those count quantifiers. Adopting the arguments of Brooks et al. (2011) to explain our Mandarin data, we hold that Mandarin children's non-adult behavior is not due to a lack of real-world knowledge about what constitutes an individual object. Rather, we attribute the non-adult behavior to the delay of the individuation function of ge, as we argue throughout the paper.
Furthermore, Brooks et al. (2011) propose that the learning of names for parts of objects (e.g., wheel) and unitizers like chunk, bit, slice, portion, and piece (e.g., piece of a shoe) could help English children attain the adult grammar. The acquisition of these expressions could indicate to children that pieces of things are labeled differently from whole things: parts of shoes should be counted as pieces of shoes rather than as shoes (see also Srinivasan et al., 2013). We think the same acquisition strategy may be applied by Mandarin children. Through the acquisition of classifiers such as kuai 'chunk, piece' and names for parts of objects, Mandarin children gradually understand that partial objects should be referred to by non-individual classifier phrases or specific nouns, restricting individual classifier phrases to refer to individual objects. Of course, more research needs to be done to explore these issues, and we leave them for future endeavors.

To conclude, in line with the previous research, the present study contributes new data to support the view that portioning-out and individuation are encoded in classifiers rather than in nouns, and that bare linguistic expressions are underspecified for portioning-out and individuation.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Human Research Ethics Committee, School of Foreign Languages, Soochow University, China. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
AH designed the experiments, collected the data, and drafted the whole article. F-AU and LM discussed and edited the article. All authors contributed to the article and approved the submitted version.
"year": 2020,
"sha1": "102c1cf5426495dc6cb0657658753916e2be96e7",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2020.592281/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "102c1cf5426495dc6cb0657658753916e2be96e7",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Equilibrium balking strategies for a clearing queueing system in alternating environment
We consider a Markovian clearing queueing system, where the customers are accumulated according to a Poisson arrival process and the server removes all present customers at the completion epochs of exponential service cycles. This system may represent the visits of a transportation facility with unlimited capacity at a certain station. The system evolves in an alternating environment that influences the arrival and the service rates. We assume that the arriving customers decide whether to join the system or balk, based on a natural linear reward-cost structure. We study the balking behavior of the customers and derive the corresponding Nash equilibrium strategies under various levels of information.
Introduction
Queueing systems with batch services are often used to represent the visits of a transportation facility at a certain station. This allows the quantification of the congestion of the station and can be used to take control measures (e.g. changing the frequency of the visits), so that the quality of service is kept within acceptable limits. The capacity of the facility is usually assumed unlimited. This is justified, because in most applications the capacity of the facility is chosen large enough, so that the probability that some waiting customers cannot be accommodated is negligibly small. Moreover, the waiting customers that cannot be served at a visit of the facility are not in general willing to wait for its next visit and abandon the system. Therefore, it is realistic to assume that all present customers are removed at the visit points of the facility. Such systems are referred to as stochastic clearing systems.
Stochastic clearing systems have been studied extensively in the literature (see e.g. Stidham (1974), Serfozo and Stidham (1978), Artalejo and Gomez-Corral (1998) and Yang, Kim and Chae (2002)). They have also been studied in the framework of stochastic systems subject to (total) catastrophes or disasters, where catastrophic events are assumed to remove all the customers/units of the system/population (see e.g. Kyriakidis (1994), Economou and Fakinos (2003, 2008), Stirzaker (2006, 2007) and Gani and Swift (2007)). In the majority of such studies the interest of the investigators lies in the transient and/or the stationary distribution of the process of interest. However, optimization issues for this class of systems have also attracted interest in the literature (see e.g. Kyriakidis (1999a,b), Economou (2003), Kyriakidis and Dimitrakos (2005)).
During the last decades, there has been an emerging tendency to study queueing systems from an economic viewpoint. In the context of stochastic clearing systems, the optimization questions that have been considered so far concern the central planning of the systems. In these studies, the objective is the determination of optimal policies for the server, about when he should remove the customers from the system (see e.g. Stidham (1977), Kim and Seila (1993), Economou (2003), Kyriakidis and Dimitrakos (2005)). However, to the best of our knowledge, there are no economic studies that concern the behavior of the customers when they are free to make decisions to maximize their own benefit. Such considerations lead to a game-theoretic economic analysis of their behavior in the system.
In general, the economic analysis of customer behavior in a queueing system is based on some reward-cost structure which is imposed on the system and reflects the customers' desire for service and their unwillingness to wait. Customers are allowed to make decisions about their actions in the system; for example, they may decide whether to join or balk, to wait or abandon, to retry or not, etc. The customers want to maximize their benefit, taking into account that the other customers have the same objective, and so the situation can be considered as a game among them. In this type of study, the main goal is to find individually and socially optimal strategies. The study of queueing systems under a game-theoretic perspective was initiated by Naor (1969), who studied the M/M/1 model with a linear reward-cost structure. Naor (1969) assumed that an arriving customer observes the number of customers and then makes his decision whether to join or balk (observable case). His study was complemented by Edelson and Hildebrand (1975), who considered the same queueing system but assumed that the customers make their decisions without being informed about the state of the system. Since then, there is a growing number of papers that deal with the economic analysis of the balking behavior of customers in variants of the M/M/1 queue; see e.g. Hassin and Haviv (1997). Hassin and Haviv (2003) and Stidham (2009) summarize the main approaches and several results in the broader area of the economic analysis of queueing systems.
The aim of the present paper is to study the equilibrium behavior of the customers regarding balking in the framework of a Markovian clearing queueing model. The balking behavior of customers in stochastic clearing systems that model transportation stations is important and should be taken into account if one wants to obtain a reliable representation of what is going on in these systems. However, such systems usually evolve in a random environment, i.e. there is some external process that influences the arrival and the service rates. In the present study we concentrate on a clearing system evolving in an alternating random environment (modeled by a 2-state continuous-time Markov chain). We determine equilibrium balking strategies for the customers under various levels of information. In particular, we consider several information cases, as an arriving customer may or may not observe the number of customers in the system and/or the state of the environment before making his decision about whether to join or balk.
The paper is organized as follows. In Section 2, we describe the stochastic dynamics of the model, the reward-cost structure and the decision framework (information cases). In Section 3 we consider those cases where the strategies of the other customers do not influence the expected net benefit of a tagged customer. These are the unobservable cases, where the tagged customer does not observe the number of customers in the system before making his decision, and the fully observable case, where he observes both the number of customers in the system and the state of the environment. In all these cases, we show that the expected net benefit of a tagged customer depends only on his strategy and not on the strategies followed by the other customers, a fact that implies the existence of dominant strategies. This is a special feature of the system that is related to the nature of the stochastic clearing mechanism. In Sections 4 and 5 we consider the almost observable case, where the customers get informed upon arrival about the number of waiting customers in the station but not about the state of the environment. In this case, the waiting customers do not impose any additional cost on the individual, but their presence provides a signal about the clearing rate. Depending on the parameters of the model, a large number of waiting customers may increase or decrease the conditional probability that the clearing rate is the slow one. In Section 4, we present some preliminary results. More concretely, we first compute the stationary distributions of the system, when the customers follow either a threshold or a reverse-threshold strategy. Then we compute the net benefit of an arriving customer who decides to join, given that he observes n customers and that the others follow a threshold or a reverse-threshold strategy. In Section 5, we conclude our study and we characterize all equilibrium strategies within the class of threshold and reverse-threshold strategies.
The main contribution of the paper is an algorithm that computes efficiently all these equilibrium strategies. In Section 6, we summarize our findings and discuss the Follow-The-Crowd and Avoid-The-Crowd notions for this model as well as the problem of social optimization.
The model
We consider a transportation station with infinite waiting space that operates in an alternating environment. The environment is specified by a 2-state continuous-time Markov chain {E(t)}, with state space S E = {1, 2} and transition rates q ee ′ , for e = e ′ . Whenever the environment is at state e, customers arrive according to a Poisson process at rate λ e , whereas a transportation facility visits the station according to a Poisson process at rate µ e . The two Poisson processes are assumed independent. At the visit epochs of the transportation facility all customers are served instantaneously and removed from the station. Therefore, we have a stochastic clearing system in an alternating random environment.
We represent the state of the station at time t by a pair (N(t), E(t)), where N(t) records the number of customers at the station and E(t) denotes the environmental state. The stochastic process {(N(t), E(t)) : t ≥ 0} is a continuous-time Markov chain with state space S_{N,E} = {(n, e) : n ≥ 0, e = 1, 2} and its non-zero transition rates are given by

q_{(n,e)(n+1,e)} = λ_e, n ≥ 0, e = 1, 2, (2.1)
q_{(n,e)(0,e)} = µ_e, n ≥ 1, e = 1, 2, (2.2)
q_{(n,1)(n,2)} = q_12, n ≥ 0, (2.3)
q_{(n,2)(n,1)} = q_21, n ≥ 0. (2.4)

We define ρ_e = λ_e/µ_e, e = 1, 2. The value of ρ_e can be thought of as a measure of congestion of the system under the environmental state e, as it expresses the mean number of customers accumulated between two successive visits of the transportation facility (given that the environment remains continuously in state e).
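The transition rates (2.1)-(2.4) translate directly into the generator matrix of a truncated version of the chain. The following sketch is our own illustration (the parameter values are arbitrary assumptions, and the truncation level N is only an approximation device):

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the paper).
lam = {1: 2.0, 2: 1.0}   # arrival rates lambda_e
mu  = {1: 1.0, 2: 0.5}   # clearing (facility visit) rates mu_e
q12, q21 = 0.3, 0.7      # environment switching rates
N = 50                   # truncation level for the queue length

# States are pairs (n, e); index them as 2*n + (e - 1).
def idx(n, e):
    return 2 * n + (e - 1)

Q = np.zeros((2 * (N + 1), 2 * (N + 1)))
for n in range(N + 1):
    for e in (1, 2):
        i = idx(n, e)
        if n < N:
            Q[i, idx(n + 1, e)] = lam[e]              # arrival: (n,e) -> (n+1,e)
        if n >= 1:
            Q[i, idx(0, e)] = mu[e]                   # clearing: (n,e) -> (0,e)
        other = 2 if e == 1 else 1
        Q[i, idx(n, other)] = q12 if e == 1 else q21  # environment switch
        Q[i, i] = -Q[i].sum()                         # diagonal = minus row sum

# Rows of a generator matrix must sum to zero.
assert np.allclose(Q.sum(axis=1), 0.0)
```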
We are interested in the behavior of customers, when they have the option to decide whether to join or balk. We assume that a customer receives a reward of R utility units for completing service. Moreover, a customer accumulates costs at a rate of C utility units per time unit that he remains in the system. We also assume that customers are risk neutral and wish to maximize their net benefit. Finally, their decisions are assumed irrevocable, in the sense that neither reneging of entering customers nor retrials of balking customers are allowed.
Since all customers are assumed indistinguishable, we can consider the situation as a symmetric game among them. Denote the common set of strategies (set of available actions) and the utility (payoff) function by S and U respectively. More concretely, let U (s tagged , s others ) be the payoff of a tagged customer who follows strategy s tagged , when all other customers follow s others . A strategy s 1 is said to dominate strategy s 2 if U (s 1 , s) ≥ U (s 2 , s), for every s ∈ S. A strategy s * is said to be dominant if it dominates all other strategies in S. A strategys is said to be a best response against a strategy s others , if U (s, s others ) ≥ U (s tagged , s others ), for every s tagged ∈ S. Finally, a strategy s e is said to be a (symmetric) Nash equilibrium, if and only if it is a best response against itself, i.e. U (s e , s e ) ≥ U (s, s e ), for every s ∈ S. The intuitive interpretation of a Nash equilibrium is that it is a stable point of the game, in the sense that if all customers agree to follow it, then no one can benefit by deviating from it. We remark that the notion of a dominant strategy is stronger than the notion of an equilibrium. In fact, every dominant strategy is an equilibrium, but the converse is not true. Moreover, while equilibrium strategies exist in most situations, dominant strategies rarely do.
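These game-theoretic notions can be made concrete with a small, generic checker over a finite strategy set. This is purely illustrative code of ours (the toy payoff function U is an assumption, not the model of the paper); it also exhibits the stated fact that a dominant strategy is automatically an equilibrium:

```python
def is_dominant(s_star, strategies, U):
    # s_star dominates every s2: U(s_star, s) >= U(s2, s) for all s, s2.
    return all(U(s_star, s) >= U(s2, s) for s in strategies for s2 in strategies)

def is_equilibrium(s_e, strategies, U):
    # Symmetric Nash equilibrium: s_e is a best response against itself.
    return all(U(s_e, s_e) >= U(s, s_e) for s in strategies)

# Toy payoff on {join, balk}: joining pays 1 regardless of what the others do,
# so 'join' is dominant and hence also an equilibrium.
S = ["join", "balk"]
U = lambda tagged, others: 1.0 if tagged == "join" else 0.0
assert is_dominant("join", S, U)
assert is_equilibrium("join", S, U)
assert not is_equilibrium("balk", S, U)
```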
In the next sections we obtain customer equilibrium strategies for joining/balking. We distinguish four cases depending on the information available to the customers at their arrival instants, before the decision is made:

• Fully unobservable case: Customers observe neither N(t) nor E(t).
• Almost unobservable case: Customers do not observe N (t), but observe E(t).
• Fully observable case: Customers observe both N (t) and E(t).
• Almost observable case: Customers observe N (t), but do not observe E(t).
From a methodological point of view, the first three cases are similar and they lead to dominant strategies, so we study all of them in Section 3. The almost observable case which is the most interesting and methodologically demanding is treated in Sections 4 and 5.
The unobservable and the fully observable cases: Dominant strategies
Let S_e denote the time till the next arrival of the transportation facility, given that the environment is at state e. A moment of reflection shows that S_e is independent of the number of customers in the system, because of the mechanism of the total removals of customers at the visits of the facility and the memoryless property of the exponential distribution. By employing a first-step argument, conditioning on the next transition of the Markov chain {(N(t), E(t))} that is either a visit of the facility or a change in the environment, we obtain the equations

E[S_1] = 1/(µ_1 + q_12) + (q_12/(µ_1 + q_12)) E[S_2],
E[S_2] = 1/(µ_2 + q_21) + (q_21/(µ_2 + q_21)) E[S_1],

which yield

E[S_1] = (µ_2 + q_12 + q_21)/(µ_1µ_2 + µ_1q_21 + µ_2q_12), E[S_2] = (µ_1 + q_12 + q_21)/(µ_1µ_2 + µ_1q_21 + µ_2q_12).
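The first-step argument gives a pair of linear equations, (µ_1 + q_12) E[S_1] = 1 + q_12 E[S_2] and (µ_2 + q_21) E[S_2] = 1 + q_21 E[S_1], which form a 2×2 linear system. A small numerical sketch of ours (with illustrative parameter values) solves this system and checks it against the closed form obtained by elimination:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper).
mu1, mu2 = 1.0, 0.5
q12, q21 = 0.3, 0.7

# First-step equations: (mu1 + q12) E[S1] = 1 + q12 E[S2]
#                       (mu2 + q21) E[S2] = 1 + q21 E[S1]
A = np.array([[mu1 + q12, -q12],
              [-q21, mu2 + q21]])
E = np.linalg.solve(A, np.ones(2))

# Closed-form solution obtained by eliminating one unknown.
den = mu1 * mu2 + mu1 * q21 + mu2 * q12
assert np.isclose(E[0], (mu2 + q12 + q21) / den)
assert np.isclose(E[1], (mu1 + q12 + q21) / den)
```

Since µ_2 < µ_1 here, the mean wait starting from environment 2 is the longer one, as expected.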
The fully unobservable case
We can now proceed and determine the equilibrium strategies of the customers in the fully unobservable case. A general balking strategy in the fully unobservable case is specified by a single joining probability q. The case q = 0 corresponds to the pure strategy 'to balk' whereas the case q = 1 corresponds to the pure strategy 'to join'. Any value of q ∈ (0, 1) corresponds to a mixed (randomized) strategy 'to join with probability q or balk with probability 1 − q'. We have the following Theorem 3.1.
We have three cases that are summarized in Table 1.

Proof. Suppose that the customers follow a certain strategy and consider a tagged customer upon arrival. The probability that he finds the environment at state e is

λ_e p_E(e) / (λ_1 p_E(1) + λ_2 p_E(2)), (3.6)

where (p_E(e), e = 1, 2) is the stationary distribution of the environment, which is given by

p_E(1) = q_21/(q_12 + q_21), p_E(2) = q_12/(q_12 + q_21). (3.7)-(3.8)

Therefore, the expected net benefit of the tagged customer if he decides to join is

S_fu = R − C (λ_1 p_E(1) E[S_1] + λ_2 p_E(2) E[S_2]) / (λ_1 p_E(1) + λ_2 p_E(2)).

The tagged customer prefers to join if S_fu > 0, prefers to balk if S_fu < 0 and is indifferent between joining and balking if S_fu = 0. Solving with respect to R/C, we obtain the three cases of Table 1.
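A short function of ours sketches the fully unobservable decision rule: it computes the expected sojourn time in each environment, weights them by the probability that an arriving customer finds each environmental state (an arrival is more likely to occur in the environment with the higher arrival rate), and compares the resulting expected cost with the reward. All parameter values in the usage example are illustrative assumptions:

```python
def net_benefit_fu(R, C, lam1, lam2, mu1, mu2, q12, q21):
    """Expected net benefit S_fu of joining, fully unobservable case (sketch)."""
    den = mu1 * mu2 + mu1 * q21 + mu2 * q12
    ES1 = (mu2 + q12 + q21) / den       # E[S_1]: mean wait when environment is 1
    ES2 = (mu1 + q12 + q21) / den       # E[S_2]
    pE1 = q21 / (q12 + q21)             # time-stationary environment probabilities
    pE2 = q12 / (q12 + q21)
    # A Poisson arrival finds environment e with probability proportional
    # to lam_e * pE(e), since arrivals occur faster in some environments.
    w1 = lam1 * pE1 / (lam1 * pE1 + lam2 * pE2)
    return R - C * (w1 * ES1 + (1.0 - w1) * ES2)

def dominant_action_fu(R, C, *rates):
    s = net_benefit_fu(R, C, *rates)
    return "join" if s > 0 else ("balk" if s < 0 else "indifferent")

# Illustrative parameters: a generous reward makes joining dominant.
assert dominant_action_fu(100.0, 1.0, 2.0, 1.0, 1.0, 0.5, 0.3, 0.7) == "join"
```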
The almost unobservable case
We can now proceed and determine the equilibrium strategies of the customers in the almost unobservable case. A general balking strategy in the almost unobservable case is specified by an ordered pair of joining probabilities (q_1, q_2), where q_e is the joining probability of a customer if the environmental state upon arrival is e, e = 1, 2. We have the following Theorem 3.2. Let

V_au^min = (min(µ_1, µ_2) + q_21 + q_12)/(µ_1µ_2 + µ_1q_21 + µ_2q_12),
V_au^max = (max(µ_1, µ_2) + q_21 + q_12)/(µ_1µ_2 + µ_1q_21 + µ_2q_12). (3.11)

If µ_1 ≠ µ_2, then V_au^min < V_au^max and we have five cases that are summarized in Table 2.
If µ_1 = µ_2, then V_au^min = V_au^max and we have three cases that are summarized in Table 3.
The fully observable case
Regarding the fully observable case, where the arriving customers observe both the number of waiting customers and the state of the environment, the situation is identical to the almost unobservable case. This happens because the mean sojourn time of an arriving customer, given that he finds n customers in the system and the environment at state e, does not depend on n. Therefore, if the environmental state e is observed upon arrival, then the information about the number of customers n is superfluous and is discarded by the customers. We conclude that the dominant balking strategies are the ones described in Theorem 3.2.
The almost observable case: Preliminaries
In this section, we consider the almost observable case. In this case, the customers, upon arrival and before making their decisions about whether to join or balk, observe the number of customers in the system but not the state of the environment. Thus a general balking strategy in this case is specified by a vector of joining probabilities (θ 0 , θ 1 , θ 2 , . . .), where θ i is the joining probability of a customer that sees i customers in the system upon arrival (excluding himself).
Suppose that a tagged customer observes n customers in the system upon arrival. Although his mean sojourn time does not depend on n, the information about n influences the probabilities that the environment is found at state 1 or 2. We expect intuitively that there are two cases: either the 'slow service' environmental state e with µ_e = min(µ_1, µ_2) coincides with the 'more congested' environmental state e′ with ρ_e′ = max(ρ_1, ρ_2), or it coincides with the 'less congested' environmental state e′′ with ρ_e′′ = min(ρ_1, ρ_2). In the former case, the greater the number n of customers found by the tagged customer, the more probable it is that the environment is at the 'slow service' environmental state. Therefore, the tagged customer becomes less willing to join the system as n increases. Thus, we expect that the tagged customer will benefit from joining the system if the number of customers n is below a certain threshold, i.e. he will adopt a threshold strategy. On the contrary, in the latter case, the situation is reversed. Then, the greater the number n of customers found by a tagged customer, the more probable it is that the environment is at the 'fast service' environmental state. Therefore, we expect that the tagged customer will benefit from joining the system if the number of customers n exceeds a certain threshold, i.e. he will adopt a so-called reverse-threshold strategy. Following this reasoning, we will limit our search for equilibrium strategies to the class of threshold and reverse-threshold strategies. As we will see, this family is rich enough to ensure the existence of an equilibrium strategy for any values of the underlying parameters of the model.

Definition 4.1 A balking strategy (θ_0, θ_1, θ_2, . . .), where θ_i is the joining probability of a customer that sees i customers in the system upon arrival (excluding himself), is said to be a mixed threshold strategy if there exist n_0 ∈ {0, 1, . . .} and θ ∈ [0, 1] such that θ_i = 1 for i < n_0, θ_{n_0} = θ and θ_i = 0 for i > n_0. Such a strategy will be referred to as the (n_0, θ)-mixed threshold strategy (symbolically the ⌈n_0, θ⌉ strategy) and it prescribes to join if you see fewer than n_0 customers, to join with probability θ if you see exactly n_0 customers, and to balk if you see more than n_0 customers.
An (n_0, 0)-mixed threshold strategy which prescribes to join if you see fewer than n_0 customers and to balk otherwise will be referred to as the n_0-pure threshold strategy (symbolically the ⌈n_0⌉ strategy).
A balking strategy (θ_0, θ_1, θ_2, . . .) is said to be a mixed reverse-threshold strategy if there exist n_0 ∈ {0, 1, . . .} and θ ∈ [0, 1] such that θ_i = 0 for i < n_0, θ_{n_0} = θ and θ_i = 1 for i > n_0. Such a strategy will be referred to as the (n_0, θ)-mixed reverse-threshold strategy (symbolically the ⌊n_0, θ⌋ strategy) and it prescribes to balk if you see fewer than n_0 customers, to join with probability θ if you see exactly n_0 customers, and to join if you see more than n_0 customers. An (n_0, 1)-mixed reverse-threshold strategy which prescribes to join if you see at least n_0 customers and to balk otherwise will be referred to as the n_0-pure reverse-threshold strategy (symbolically the ⌊n_0⌋ strategy).
The strategy which prescribes to join in any case is considered to be both a threshold and a reverse-threshold strategy (symbolically the ⌈∞⌉ or ⌊0⌋ strategy). The same is true for the strategy which prescribes to balk in any case (symbolically the ⌈0⌉ or ⌊∞⌋ strategy).
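Definition 4.1 translates directly into code. Below is a minimal sketch (the function names are ours) that maps each strategy to its sequence of joining probabilities θi:

```python
def mixed_threshold(n0, theta):
    """Joining probabilities of the (n0, theta)-mixed threshold strategy:
    join if you see fewer than n0 customers, join with probability theta
    at exactly n0, balk above n0."""
    def prob_join(i):
        if i < n0:
            return 1.0
        if i == n0:
            return theta
        return 0.0
    return prob_join

def mixed_reverse_threshold(n0, theta):
    """Joining probabilities of the (n0, theta)-mixed reverse-threshold
    strategy: balk below n0, join with probability theta at exactly n0,
    join above n0."""
    def prob_join(i):
        if i < n0:
            return 0.0
        if i == n0:
            return theta
        return 1.0
    return prob_join

# The pure threshold strategy (the n0-pure strategy) is the (n0, 0) case,
# and the pure reverse-threshold strategy is the (n0, 1) case:
pure_threshold = lambda n0: mixed_threshold(n0, 0.0)
pure_reverse_threshold = lambda n0: mixed_reverse_threshold(n0, 1.0)
```

The 'always join' strategy is recovered as any threshold with n0 = ∞ (or reverse-threshold with n0 = 0), and 'always balk' as the 0-pure threshold.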
Stationary distributions
In this subsection, we determine the stationary distributions of the system, when the customers follow any given strategy from the ones that have been described in Definition 4.1. We will first determine the stationary distribution of the original system when all customers join. The result is reported in the following Proposition 4.1.
Proposition 4.1 Consider the stochastic clearing system in alternating environment, where all customers join. The stationary distribution (p(n, e)) is given by formulas (4.1) and (4.2), where pE(1), pE(2) are the stationary probabilities of {E(t)} given by (3.7)-(3.8).
For determining the stationary probabilities, we may follow the standard probability generating function approach. Thus, we define the partial stationary probability generating functions of the system as Ge(z) = Σn≥0 p(n, e)z^n, |z| ≤ 1, e = 1, 2 (4.13). Summing equation (4.9) and equations (4.10) multiplied by z^n, n ≥ 1, yields after some straightforward algebra a linear equation in G1(z) and G2(z). Similarly, equations (4.11) and (4.12), n ≥ 1, yield another linear equation in G1(z) and G2(z). Solving this system of equations, we obtain G1(z) and G2(z) as rational functions of z with known coefficients expressed in terms of the parameters of the model. Using partial fraction expansion and then expanding the simple fractions in powers of z yields (4.1) and (4.2). Indeed, by direct substitution, we can easily check that p(n, 1) and p(n, 2) given by (4.1) and (4.2) satisfy (4.10). By a simple summation, we can also check that p(n, 1) and p(n, 2) given by (4.1) and (4.2) satisfy (4.9). The validity of (4.11) and (4.12) is checked similarly.
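The partial-fraction step can be illustrated generically: once Ge(z) is a rational function with two simple poles z1, z2 outside the unit disk, expanding each simple fraction in powers of z yields a mixture of two geometric sequences. The coefficients below are hypothetical, not the actual (4.1)-(4.2):

```python
def geometric_mixture(c1, z1, c2, z2):
    """p(n) obtained by expanding c1/(1 - z/z1) + c2/(1 - z/z2) in powers
    of z: each simple fraction contributes a geometric term z_i ** (-n)."""
    return lambda n: c1 * z1 ** (-n) + c2 * z2 ** (-n)

# Check the coefficient sequence against the generating function at |z| < 1.
c1, z1, c2, z2 = 0.4, 2.0, 0.6, 3.0
p = geometric_mixture(c1, z1, c2, z2)
z = 0.5
G_direct = c1 / (1 - z / z1) + c2 / (1 - z / z2)
G_series = sum(p(n) * z ** n for n in range(200))
```

The two evaluations of G agree to machine precision, which is exactly the consistency check performed analytically in the proof by direct substitution.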
We will now deduce the stationary distribution of the system when the customers follow a mixed threshold strategy. We have the following Proposition 4.2.
Proof. We assume that the customers follow the (n0, θ)-mixed threshold strategy. Then the evolution of the system can be described by a Markov chain which is absorbed with probability 1 in the positive recurrent closed class of states S_ao^{N,E}(⌈n0, θ⌉) = {(n, e) : 0 ≤ n ≤ n0 + 1, e = 1, 2}. For the sake of brevity, we suppress the notation regarding ⌈n0, θ⌉ in the rest of the proof and refer to the corresponding stationary probabilities p_ao(n, e; ⌈n0, θ⌉) simply as p_ao(n, e).
We will now deduce the stationary distribution of the system when the customers follow an (n0, θ)-mixed reverse-threshold strategy. It is left to show what happens when the customers follow a (0, θ)-mixed reverse-threshold strategy. The proof of Proposition 4.3 for θ = 0 is immediate, as in this case the customers balk whenever they arrive at an empty system. Therefore, under such a strategy the corresponding continuous-time Markov chain is absorbed with probability 1 into the subset {(0, 1), (0, 2)} of the state space and the stationary distribution is the one given by (4.29) and (4.30), as in Corollary 4.2. In case θ = 1, the customers always join, so we apply Proposition 4.1. Thus, the only interesting case is θ ∈ (0, 1). Then, the proof of Proposition 4.3 follows a similar line of argument to the proofs of Propositions 4.1 and 4.2 and, for the sake of brevity, it is omitted.
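As a sanity check on the closed-form stationary distributions of this subsection, one can estimate (p(n, e)) by simulating the underlying continuous-time Markov chain. The sketch below makes an explicit assumption about the model mechanics (one plausible reading of the alternating environment): in environment e customers arrive at rate lam[e] and all join, while clearings occur at rate mu[e], remove every customer, and switch the environment to the other state.

```python
import random

def simulate_clearing(lam, mu, t_end=50000.0, seed=1):
    """Estimate the stationary distribution of (N(t), E(t)) by simulation.
    Assumed dynamics: arrivals at rate lam[e] (all customers join);
    clearings at rate mu[e] that empty the system and alternate the
    environment. Returns time-average occupancy of each state (n, e)."""
    rng = random.Random(seed)
    t, n, e = 0.0, 0, 0          # environment states indexed 0 and 1
    occ = {}
    while t < t_end:
        dt = rng.expovariate(lam[e] + mu[e])
        dt = min(dt, t_end - t)
        occ[(n, e)] = occ.get((n, e), 0.0) + dt
        t += dt
        if t >= t_end:
            break
        if rng.random() < lam[e] / (lam[e] + mu[e]):
            n += 1               # arrival
        else:
            n, e = 0, 1 - e      # clearing epoch: empty the system, alternate

    return {state: weight / t_end for state, weight in occ.items()}

# Under the alternating-at-clearing assumption the environment spends an
# Exp(mu[e]) stretch in state e, so p_E(e) should be proportional to 1/mu[e].
p = simulate_clearing(lam=(2.0, 1.0), mu=(1.0, 3.0))
pE0 = sum(w for (n, e), w in p.items() if e == 0)   # expected about 3/4 here
```

With mu = (1, 3) the simulated environment marginal comes out near µ2/(µ1 + µ2) = 3/4, and the full table p can be compared entry by entry with (4.1)-(4.2).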
Expected net benefit functions
Based on the results of subsection 4.1, we can now compute the expected net benefit of a tagged customer if he decides to join the system after observing n customers upon arrival. Of course, his expected net benefit depends on the strategy followed by the other customers. Thus, we have various cases, according to whether the customers follow a threshold or a reverse-threshold strategy. We have the following Propositions 4.4-4.6 and the Corollary 4.3.
Proposition 4.4
Consider the almost observable model of the stochastic clearing system in alternating environment, where all customers join the system. Then, the expected net benefit S_ao(n; ⌈∞⌉) ≡ S_ao(n; ⌊0⌋) of an arriving customer, if he decides to join, given that he finds n customers in the system, is given by (4.36). Proof. The mean sojourn time of an arriving customer, if he decides to join, given that he finds n customers in the system, is a weighted average over the environment states, with weights given by the embedded (Palm) probabilities p−_ao(e|n; ⌈∞⌉), e = 1, 2, that an arriving customer finds the environment at state e, given that he observes n customers in the system and that the ⌈∞⌉ strategy is followed by the other customers. Plugging the formulas (4.1)-(4.2) into (4.42) and subsequently into (4.43) yields (4.36).
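The structure of these computations can be sketched generically. Assuming the usual linear reward-cost structure of this literature (a service reward R minus a waiting cost C per unit time, our assumption here) and that, conditional on the environment e, the remaining time to clearing is exponential with rate µe, the expected net benefit reduces to a weighted average over the Palm probabilities p(e|n). All specific numbers below are illustrative:

```python
def expected_net_benefit(R, C, mu, p_env_given_n):
    """Expected net benefit of joining upon seeing n customers, assuming
    S(n) = R - C * E[W | n] with conditional mean sojourn time
    E[W | n] = p(1|n)/mu[0] + p(2|n)/mu[1] (remaining time to clearing in
    environment e taken as Exp(mu[e])). `p_env_given_n` maps n to the
    Palm probabilities (p(1|n), p(2|n))."""
    def S(n):
        p1, p2 = p_env_given_n(n)
        mean_sojourn = p1 / mu[0] + p2 / mu[1]
        return R - C * mean_sojourn
    return S

# Toy example with hypothetical Palm probabilities: the customer joins
# whenever S(n) >= 0 and balks otherwise.
S = expected_net_benefit(R=2.0, C=1.0, mu=(1.0, 0.25),
                         p_env_given_n=lambda n: (1 / (n + 2), 1 - 1 / (n + 2)))
```

In this toy setup p(1|n) decreases in n, so larger n makes the slow state more likely and S(n) decreases, the mechanism behind the threshold behaviour discussed at the start of Section 4.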
Proposition 4.5
Consider the almost observable model of the stochastic clearing system in alternating environment, where the customers join the system according to the (n0, θ)-mixed threshold strategy. Then, the expected net benefit S_ao(n; ⌈n0, θ⌉) of an arriving customer, if he decides to join, given that he finds n customers in the system, is expressed in terms of A, B, D, E, z1, z2, which are given by (4.37)-(4.40) and (4.8).
In the case of the n 0 -pure threshold strategy, we obtain the following Corollary 4.3.
Corollary 4.3 Consider the almost observable model of the stochastic clearing system in alternating environment, where the customers join the system according to the n0-pure threshold strategy. Then, the expected net benefit S_ao(n; ⌈n0⌉) of an arriving customer, if he decides to join, given that he finds n customers in the system, is obtained from Proposition 4.5 with θ = 0.

When the customers follow a (0, θ)-mixed reverse-threshold strategy, with θ ∈ (0, 1), we can use the same line of argument as in Propositions 4.4 and 4.5, using the stationary distribution given by (4.33)-(4.34). Then we have the following Proposition 4.6.
Proposition 4.6
Consider the almost observable model of the stochastic clearing system in alternating environment, where the customers join the system according to the (0, θ)-mixed reverse-threshold strategy for some θ ∈ (0, 1). Then, the expected net benefit S_ao(n; ⌊0, θ⌋) of an arriving customer, if he decides to join, given that he finds n customers in the system, is obtained along the same lines. To express the various formulas reported in Propositions 4.4-4.6 and in Corollary 4.3 for the expected net benefit function in a compact, unified way, we introduce the functions F(n, θ), G(n, θ) and HU(n), HL(n).

5 The almost observable case: Equilibrium strategies

As we have already discussed in the beginning of Section 4, it seems plausible that threshold strategies are adopted by the customers when the 'fast service' environmental state coincides with the 'less congested' environmental state, i.e. when (µ1 − µ2)(ρ1 − ρ2) < 0. On the contrary, reverse-threshold strategies are plausible when the 'fast service' environmental state coincides with the 'more congested' environmental state, i.e. when the opposite inequality holds. This intuitive finding is associated with the monotonicity of HU(n), which plays a key role in the subsequent analysis. More specifically, we have the following Proposition 5.1.
Proposition 5.1
We have the following equivalences: HU(n) is strictly decreasing in n if and only if (µ1 − µ2)(ρ1 − ρ2) < 0, strictly increasing if and only if (µ1 − µ2)(ρ1 − ρ2) > 0, and constant if and only if (µ1 − µ2)(ρ1 − ρ2) = 0. The proof of this proposition is omitted, since its first case follows easily by simple algebraic manipulations that start from the relation HU(n + 1) − HU(n) < 0 and lead to AE − BD > 0 and (µ1 − µ2)(ρ1 − ρ2) < 0 through successive equivalences. The other two cases are treated similarly. Moreover, the monotonicity of the function F(n, θ)/G(n, θ) with respect to θ depends on the sign of (µ1 − µ2)(ρ1 − ρ2). Specifically, we have the following Proposition 5.2.
Proposition 5.2
We have equivalences relating the monotonicity of F(n, θ)/G(n, θ) with respect to θ to the sign of (µ1 − µ2)(ρ1 − ρ2). The proof of this proposition is also omitted, since the result is deduced easily after some algebra. We now state some properties of F(n, θ), G(n, θ) and HU(n), HL(n) that we will use in the sequel. Their proofs are straightforward from the definitions and are thus omitted.
The functions F(n, θ), G(n, θ) satisfy several elementary properties that we will use below. The intuitive discussion at the beginning of Section 4, in combination with Propositions 5.1 and 5.2, suggests that we proceed by considering separately three cases, corresponding to the sign (negative, positive or zero) of (µ1 − µ2)(ρ1 − ρ2).
Case A:
In Case A, we will prove that an equilibrium threshold strategy always exists. Moreover, we will present a systematic procedure for determining all equilibrium threshold strategies. We first introduce several quantities that we will need in the sequel.
Using Lemma 5.2 we will now prove the existence of threshold equilibrium strategies, when (5.11) holds. We present the results in the following Theorem 5.1.
Theorem 5.1
In the almost observable model of the stochastic clearing system in alternating environment, where (5.11) holds, equilibrium threshold strategies always exist. In particular, in the three cases of Lemma 5.2 we have: Case I: HU(0) < 0. Then, there is a unique equilibrium threshold strategy, the ⌈0⌉-strategy (always to balk).
Case II: HU(0) ≥ 0 and limn→∞ HU(n) < 0. Then, an equilibrium pure threshold strategy always exists. Moreover, the equilibrium strategies within the class of all pure threshold strategies are the strategies ⌈n0⌉ with n0 = nL, nL + 1, . . . , nU. Also, the equilibrium strategies within the class of genuinely mixed threshold strategies are the strategies ⌈n0, θ(n0)⌉ with n0 ∈ {n_L^+, . . . , n_U^− − 1} and θ(n0) the unique solution in (0, 1) of F(n0, θ) = 0 with respect to θ.
Case III: limn→∞ HU(n) ≥ 0. Then, there is a unique equilibrium threshold strategy, the ⌈∞⌉-strategy (always to join).
Proof. Case I: Consider a tagged customer at his arrival instant and assume that all other customers follow an ⌈n 0 ⌉ strategy for some n 0 ≥ 0. Inequality (5.17) and relations (4.60) and (4.61) imply that the expected net benefit of the tagged customer, when he finds n customers and decides to join is S ao (n; ⌈n 0 ⌉) < 0, for 0 ≤ n ≤ n 0 . Thus, he always prefers to balk and his best response against ⌈n 0 ⌉ is ⌈0⌉.
If all customers follow the ⌈∞⌉ strategy, (5.17) and (4.56) yield S ao (n; ⌈∞⌉) < 0 for n ≥ 0. Again, due to the negative expected net benefit, it is preferable for the tagged customer to balk. So, his best response against ⌈∞⌉ is ⌈0⌉. Thus, we conclude that the only best response against itself within the class of (pure and mixed) threshold strategies is ⌈0⌉.
Case II: Consider a tagged arriving customer and suppose that all other customers follow an ⌈n 0 ⌉ strategy, for some n 0 ≤ n L − 1. If the tagged customer finds n 0 customers and decides to join, his expected net benefit will be S ao (n 0 ; ⌈n 0 ⌉) > 0, from (5.24) and (4.61). This implies that when he finds n 0 customers, he is willing to join. Thus, ⌈n 0 ⌉ cannot be a best response against itself. So such a strategy cannot be an equilibrium.
Consider, now, a tagged arriving customer and suppose that all other customers follow an ⌈n 0 ⌉ strategy, for some n 0 ≥ n U + 1. Using (4.60) and (5.22), we have that S ao (n; ⌈n 0 ⌉) < 0, for n U ≤ n ≤ n 0 − 1. This means that when the tagged customer finds n customers, with n U ≤ n ≤ n 0 − 1, then he is unwilling to enter. Thus, the ⌈n 0 ⌉ strategy cannot be an equilibrium. We conclude that the search for equilibrium strategies within the class of pure threshold strategies should be restricted to strategies ⌈n 0 ⌉ with n L ≤ n 0 ≤ n U .
We mark an arriving customer and we assume that all other customers follow an ⌈n 0 ⌉ strategy, for some n 0 with n L ≤ n 0 ≤ n U . From (4.60), (4.61), (5.20), (5.21), (5.25) and (5.26), we have that the expected net benefit of a customer who finds n customers upon arrival and decides to join is S ao (n; ⌈n 0 ⌉) ≥ 0, for 0 ≤ n ≤ n 0 − 1 and S ao (n 0 ; ⌈n 0 ⌉) ≤ 0. Thus ⌈n 0 ⌉ is a best response against itself and we conclude that all such strategies are equilibrium strategies.
To finish with our search for equilibrium strategies in the class of pure threshold strategies, we have to examine the ⌈∞⌉ strategy. This cannot be an equilibrium, since (4.56) and (5.22) imply that S ao (n; ⌈∞⌉) < 0, for n ≥ n U , which means that it is not optimal for the tagged customer to join when he sees n customers for some n ≥ n U . Therefore, we conclude that the equilibrium strategies within the class of pure threshold strategies are exactly the strategies ⌈n 0 ⌉ with n L ≤ n 0 ≤ n U .
Case III: Following the same line of argument as in Case I, we now find that when all customers follow a pure threshold strategy ⌈n0⌉ or a mixed threshold strategy ⌈n0, θ0⌉, the expected net benefit function is always positive, so the best response of a customer is always to join the system. Hence, the only best response against itself in the class of threshold strategies is the ⌈∞⌉ strategy.
Note that although equilibrium pure threshold strategies always exist, it is possible that genuinely mixed equilibrium threshold strategies do not. This happens if n_U^− − 1 < n_L^+.
Case B:
In Case B, we search for equilibrium strategies in the class of reverse-threshold strategies. We will exclude strategies ⌊n0⌋ and ⌊n0, θ0⌋ with n0 ≥ 1. Indeed, all these strategies prescribe to balk when a tagged arriving customer sees an empty system. Thus, under such a strategy, the system remains continuously empty after the first service completion, and, in steady state, these strategies are equivalent to the 'always balk' strategy ⌊∞⌋. We therefore search for equilibrium strategies only in the set S_r−t = {⌊0⌋, ⌊∞⌋} ∪ {⌊0, θ0⌋ : θ0 ∈ (0, 1)}. We first introduce several quantities that we will use in the sequel.

Case I: There is a unique equilibrium reverse-threshold strategy, the ⌊0⌋ strategy ('always to join').

Case II: If m_U^− = 0, the ⌊0⌋ strategy ('always to join') is the unique equilibrium reverse-threshold strategy. If m_L^+ ≥ 1, then the ⌊∞⌋ strategy ('always to balk') is the unique equilibrium reverse-threshold strategy. Otherwise, the ⌊0, θ(0)⌋ strategy is the unique equilibrium reverse-threshold strategy.
Then, there is a unique equilibrium reverse-threshold strategy, the ⌊∞⌋ strategy ('always to balk').
Proof. Case I: Consider a tagged customer at his arrival instant and assume that all other customers follow the ⌊0⌋ strategy. Inequality (5.39) and relation (4.56) imply that his expected net benefit, when he finds n customers and decides to join is S ao (n; ⌊0⌋) > 0, for n ≥ 0. Thus, he always prefers to join so his best response against ⌊0⌋ is ⌊0⌋ itself.
Similarly, mark an arriving customer and suppose that all other customers follow a ⌊0, θ0⌋ strategy, for some θ0 ∈ (0, 1). Then, the expected net benefit of the tagged customer, who finds n customers at his arrival instant and decides to join, will be S_ao(n; ⌊0, θ0⌋) > 0 for n ≥ 0, due to (5.39) and (4.63). Therefore, the tagged customer is always willing to join, and ⌊0⌋ is the best response against ⌊0, θ0⌋.
If all customers follow the ⌊∞⌋ strategy, equations (5.39) and (4.62) imply that S ao (0; ⌊∞⌋) > 0, so the tagged customer prefers to join. Thus, we have again that ⌊0⌋ is the best response against ⌊∞⌋. So the only reverse-threshold strategy which is best response against itself is the ⌊0⌋ strategy.
Case II: Assume that m − U = 0. Then F (0, 1) = 0 and m U = 1. Consider now a tagged customer at his arrival instant and suppose that all other customers follow the ⌊0⌋ strategy. Inequality (5.43) and relation (4.56) imply that his expected net benefit, when he finds n customers and decides to join is S ao (n; ⌊0⌋) ≥ 0, for n ≥ 0. Thus, ⌊0⌋ is a best response to itself.
Case III: Following the same line of argument as in Case I, we now conclude that the expected net benefit function is negative, so the best response to every reverse-threshold strategy is ⌊∞⌋. Hence, the only equilibrium reverse-threshold strategy is ⌊∞⌋.
Case C:
Case C occurs when µ1 = µ2 or λ1/µ1 = λ2/µ2. In this case, either the distinction between 'fast service' and 'slow service' environmental states or the distinction between 'more congested' and 'less congested' environmental states is meaningless. Therefore, we conclude that the information on the number of customers in the system does not affect the decision of a tagged arriving customer. An analysis similar to that of the other two cases is possible, and we have the following Theorem 5.3.
Theorem 5.3
In the almost observable model of the stochastic clearing system in alternating environment, where µ1 = µ2 or ρ1 = ρ2 (5.54), an equilibrium strategy exists within the class of threshold and reverse-threshold strategies. In particular, we have the following three cases. In the first, the unique equilibrium strategy in the class of threshold and reverse-threshold strategies is the ⌈0⌉ ≡ ⌊∞⌋ strategy ('always to balk').
In the second, every strategy in the class of threshold and reverse-threshold strategies is an equilibrium strategy.
In the third, the unique equilibrium strategy in the class of threshold and reverse-threshold strategies is the ⌈∞⌉ ≡ ⌊0⌋ strategy ('always to join').
Summary and conclusions
In this paper we considered the problem of analyzing customer strategic behavior, in a clearing system in alternating environment, where customers decide whether to join the system or balk upon arrival. We identified four cases with respect to the level of information provided to arriving customers and derived the equilibrium strategies for each case. It is important to notice that in each case we identified all equilibrium strategies within the appropriate class of strategies. Moreover, in the almost observable case, which is the most interesting one, Theorems 5.1, 5.2 and 5.3 suggest that the equilibrium strategies in the class of threshold and reverse-threshold strategies are completely characterized by the signs of the quantities (µ 1 − µ 2 ) (ρ 1 − ρ 2 ), H U (n), lim n→∞ H U (n) and H L (n). Thus, we can easily combine these theorems and develop an algorithm for determining the equilibrium strategies. We present the algorithm in pseudo-code form in Figure 1. Figure 2 shows schematically the various cases I,II,III when (µ 1 − µ 2 ) (ρ 1 − ρ 2 ) < 0.
We also note that the results in the almost observable case are qualitatively different in the two cases A and B, where (µ1 − µ2)(ρ1 − ρ2) is negative and positive, respectively. Indeed, in case A there is, in general, an interval of thresholds that constitute equilibrium threshold strategies, whereas in case B there is a unique equilibrium reverse-threshold strategy. These observations correspond to the regimes of Follow-The-Crowd (FTC) and Avoid-The-Crowd (ATC) as defined in Hassin and Haviv (1997, 2003). Indeed, in case A, where (µ1 − µ2)(ρ1 − ρ2) < 0, the 'fast service' environmental state coincides with the 'less congested' environmental state. To compare two threshold strategies with thresholds n and n + 1, we can argue as follows: if the customers follow a threshold strategy with threshold n and an arriving customer observes n customers in the system, then he deduces that at least n customers arrived since the last clearing epoch. If the customers follow a threshold strategy with threshold n + 1 and the arriving customer observes n customers, then he deduces that exactly n customers arrived since the last clearing epoch. Thus, in the latter case, the arriving customer has the sense that the system is less congested and therefore that the environmental state is most probably the 'fast service' one, so he is more willing to enter the system. Therefore, if the customers adopt a higher threshold, an arriving customer tends to follow them in adopting a higher threshold, and we have an FTC situation.
On the other hand, in case B, where (µ1 − µ2)(ρ1 − ρ2) > 0, the 'slow service' environmental state coincides with the 'less congested' environmental state. The usual definition of the ATC situation is not applicable here, since we consider reverse-threshold instead of threshold strategies. Moreover, under any reverse-threshold strategy ⌊n, θ⌋ with n ≥ 1, the system remains continuously empty after the first visit of the transportation facility, which is why we excluded these strategies from our search for equilibrium strategies. Thus, in case B, we limit our intuitive discussion of the ATC phenomenon to the class of strategies {⌊0, θ⌋ : θ ∈ [0, 1]}, as in the analysis of subsection 5.2. Suppose that the customers follow a reverse-threshold strategy ⌊n, θ⌋ and then move to another reverse-threshold strategy ⌊n, θ′⌋ with θ′ > θ. Consider now an arriving customer who finds 0 customers in the system. Knowing the strategies of the other customers, the arriving customer has the sense that the system is in the less congested environmental state in the second case, where the customers enter with probability θ′. Indeed, in this case the customers are more willing to join than in the first case (since θ′ > θ), so the information of an empty system implies that it is more probable that the system is in the less congested environmental state. Therefore, the customer becomes less willing to enter, as the less congested environmental state coincides with the 'slow service' state. Thus, when the other customers increase their probability of entering, the tagged customer tends to decrease his probability of entering, i.e. we have an ATC situation.
The focus of this work was on equilibrium analysis. On the other hand, one can think of a situation where a central planner employs acceptance policies that maximize the social benefit, under the various levels of information on the system state. It is easy to see that in the fully unobservable, the fully observable and the almost unobservable cases the strategies that maximize the social benefit are the equilibrium strategies. This coincidence between equilibrium and socially optimal strategies can be explained by the total removals: since the server removes all customers at service completion epochs, each customer who decides to join does not impose any externalities on other customers. In the almost observable case, equilibrium and socially optimal strategies are identical except in the case where HU(n) is strictly decreasing, HU(0) ≥ 0 and limn→∞ HU(n) < 0. In this case the unique socially optimal strategy is the ⌈nU⌉ strategy, which is also an equilibrium.
"year": 2013,
"sha1": "596c5fe3024513888f35a1c5ef64f52fa307add0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1112.5555",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "596c5fe3024513888f35a1c5ef64f52fa307add0",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Probing electron-electron interaction along with superconducting fluctuations in disordered TiN thin films
In 2D disordered superconductors prior to the superconducting transition, the appearance of a resistance peak in temperature dependent resistance [R(T)] measurements indicates the presence of weak localization (WL) and electron-electron interaction (EEI) in the diffusion channel and superconducting fluctuations in the Cooper channel. Here, we demonstrate an interplay between superconducting fluctuations and electron-electron interaction by low temperature magnetotransport measurements for a set of 2D disordered TiN thin films. While cooling down a sample, a characteristic temperature T* is obtained from R(T) at which superconducting fluctuations start to appear. The upturn in R(T) above T* corresponds to WL and/or EEI. From the temperature and field dependences of the observed resistance, we show that the upturn in R(T) originates mainly from EEI, with a negligible contribution from WL. Further, we have used the modified Larkin's electron-electron attraction strength β(T/Tc), containing a field induced pair breaking parameter, in the Maki-Thompson (MT) superconducting fluctuation term. The temperature dependence of β(T/Tc) obtained from the magnetoresistance analysis shows a diverging behavior close to Tc, while it remains almost constant at higher temperature within the limit ln(T/Tc) < 1. Interestingly, the variation of β(T/Tc) with the reduced temperature (T/Tc) follows a common trend that is closely shared by all the samples presented in this study. Finally, the temperature dependence of the inverse phase scattering time (τφ⁻¹), as obtained from the magnetoresistance analysis, clearly shows two different regimes: the first one, close to Tc, follows the Ginzburg-Landau relaxation rate (τGL⁻¹), whereas the second one, at high temperature, varies almost linearly with temperature, indicating the dominance of inelastic electron-electron scattering in the dephasing mechanism.
These two regimes are followed in a generic way by all the samples in spite of being grown under different growth conditions.
I. INTRODUCTION
In a superconductor, the transition from the metallic state occurs in two phases: first, the order parameter is established with a finite amplitude at the critical temperature (Tc), and then, the formation of the global phase coherent state at the characteristic temperature TBKT [1][2][3]. In 2D disordered superconductors above the Tc, electrical transport properties are mainly controlled by the quantum corrections to the conductivity (QCC) [4,5]. The QCC can broadly be summarized into two parts; first is the weak localization (WL) due to quantum interference of complementary electron waves travelling in a closed loop but opposite in direction, second is the disorder induced electron-electron interaction (EEI) [6][7][8]. Further, quantum corrections originating from EEI can be divided into two parts: The first part includes Coulomb interaction between the particles with close momenta in the diffusion channel (ID), and the second part includes Coulomb interaction between the particles having opposite momenta in the Cooper channel.
Correction to the conductivity due to the Cooper channel becomes important once the system transits to the superconducting state. Corrections to the conductivity arising from the Cooper channel are further divided into three superconducting fluctuation contributions: the Aslamazov-Larkin (AL) contribution, which is mainly due to the participation of Cooper pairs in conduction through a parallel channel [9]; the Maki-Thompson (MT) contribution, which reflects the influence of superconducting fluctuations on normal quasiparticles [10,11]; and the density of states (DOS) contribution, which originates from the formation of superconducting pairs that reduces the density of states for normal electrons [12]. Under zero magnetic field, the AL and MT superconducting fluctuations give a positive contribution to the conductivity and the DOS term a negative one [13], whereas WL and ID together give a negative contribution. Superconducting fluctuations and WL are very sensitive to magnetic field and get suppressed under its application, whereas the contribution from EEI remains unaffected even under a high magnetic field [14].
In this article, we have studied the interplay between superconducting fluctuations in the Cooper channel and electron-electron interaction in the diffusion channel by low temperature magnetotransport measurements for a set of disordered TiN thin film samples that are in the 2-dimensional limit. A detailed study using all the aforesaid quantum corrections to the conductivity was reported by Baturina et al. [13] for disordered superconducting TiN thin films of thickness in the same range (<5 nm) as the samples presented in this work. The observations and conclusions on the zero field R(T) measurements in the present work closely follow the results reported in Ref. 13. However, the study presented here extends further: an external magnetic field, applied perpendicular to the sample plane, is used to probe only the EEI by suppressing other relevant mechanisms such as WL, which may lead to an upturn in the zero-field R(T) measurements. By the analysis of the magnetoresistance results, we show that electron-electron interaction is indeed dominant in these superconducting films above the transition temperature.
Here, the samples were produced by using the previously demonstrated substrate mediated nitridation technique, with the annealing temperature and the film thickness varied. The set of samples selected for this study thus includes samples grown at different annealing temperatures as well as samples of varying film thickness. Nevertheless, the transport measurements carried out on these samples show similar characteristics in both the temperature dependent resistance [R(T)] and the magnetoresistance (MR) measurements. For example, while cooling down from room temperature, the zero-field R(T) characteristics of all samples feature a resistance dip at a specific temperature Tmin, followed by an upturn with negative dR/dT slope that reaches a resistance peak at a temperature Tmax; further cooling leads to the superconductivity related drop in resistance. All these distinct regions, characterized mainly by the sign of the slope dR/dT, are present in the zero-field R(T) of each sample presented here. Further, we have obtained the characteristic temperature T* at which superconducting fluctuations start to appear and the experimental R(T) starts to deviate from the WL+ID path. The characteristic temperatures T*, Tmax and Tmin and the resistance peak have been explicitly monitored with respect to magnetic field, and we find that the contribution from WL to the QCC is very weak compared to that from EEI; hence, EEI can be considered the main mechanism behind the upturn and the resistance peak observed in R(T). Further, the MR measurements show positive MR at temperatures far above Tc, and no trace of negative MR is observed even above T*, where superconducting fluctuations can be ignored. As negative MR is a hallmark of WL [15], the MR measurements too indicate that the contribution from WL is not significant.
As far as the superconducting fluctuations are concerned, above Tc the MT correction is the most dominant contribution to the QCC. The strength of the MT contribution is generally expressed by the electron-electron attraction strength β(T/Tc), originally proposed by Larkin [16]. Here, β(T/Tc) takes different forms for ln(T/Tc) ≪ 1 and ln(T/Tc) ≫ 1. It is, however, problematic to evaluate β(T/Tc) in the intermediate temperature regime, and no clear guidance on the form of β(T/Tc) in this regime is available in the literature. A modified, magnetic-field-dependent β(T/Tc), valid in the low-temperature regime, has been proposed by Lopes dos Santos and Abrahams [17]. In this study, we have considered this modified β(T/Tc), which depends on a pair-breaking parameter δ via the phase scattering time τφ obtained from the MR analysis. The dependence of β(T/Tc) on the reduced temperature T/Tc shows a diverging behavior close to Tc, whereas it is almost independent of temperature in the regime a little farther from Tc that still satisfies the condition ln(T/Tc) < 1. Interestingly, the variation of β(T/Tc) with T/Tc defines a common path that is followed by all the samples in a collective/universal manner. Furthermore, the inverse phase relaxation time τφ⁻¹ obtained from the MR varies generically with the reduced temperature T/Tc for all the samples. Close to Tc, superconducting fluctuations dominate and τφ⁻¹ follows the Ginzburg-Landau relaxation rate τGL⁻¹, while at higher temperature the phase relaxation rate varies almost linearly with temperature, indicating the dominance of inelastic electron-electron scattering in the dephasing mechanism.
II. EXPERIMENTAL
We have employed undoped Si (100) substrates covered with an 80 nm Si3N4 dielectric spacer layer grown by low-pressure chemical vapor deposition (LPCVD). Initially, the substrates went through a standard cleaning process involving sonication in acetone & isopropanol baths for 15 minutes each. Thereafter, the cleaned substrates were loaded into an ultra-high vacuum (UHV) chamber for pre-heating at about 820 °C for 30 minutes to remove organic molecules adsorbed or trapped on the substrate surface. The cleaned substrates were then transferred in situ to the sputtering chamber, where a thin layer of Ti was deposited on the substrate by dc magnetron sputtering of a Ti target (99.999% purity) in the presence of high-purity Ar (99.9999%) gas.
Sputtering of the Ti target was done at a base pressure of less than 1.5 × 10⁻⁷ Torr. Finally, the Ti-deposited substrates were transferred in situ to a UHV chamber for annealing. The Ti-deposited substrates were annealed at different annealing temperatures (about 820 °C, 780 °C & 750 °C) for 2 hours at a pressure of less than 5 × 10⁻⁸ Torr. During the annealing process, Ti transformed into TiN by the substrate-mediated nitridation technique [18-21], in which the Si3N4 substrate decomposes into Si (s) & N (g) atoms; owing to the high affinity of titanium towards both, superconducting TiN forms as the majority phase along with the non-superconducting minority phase TiSi2. More details about the substrate-mediated nitridation technique have been reported elsewhere [20]. For carrying out the electrical transport measurements at low temperature, TiN thin-film-based multi-terminal devices were fabricated using a stainless-steel shadow mask to pattern the TiN superconducting channel. A complementary, separate shadow mask was used to make the contact leads for the voltage and current probes. The contact leads were made of Au (80-100 nm)/Ti.
III. RESULTS AND DISCUSSION
Temperature-dependent resistance R(T) measurements have been carried out on TiN thin films of dimensions 1100 μm (length) × 500 μm (width) using the conventional four-probe geometry. Further, to investigate the roles of the annealing temperature (Ta) and the film thickness on the transport properties, we have fabricated TiN thin films by keeping one growth parameter fixed and altering the other. As shown in Fig. 1(a), the samples show a resemblance in their RN and its variation from room temperature down to the temperature just before the transition to the superconducting (SC) state. As is evident in Fig. 1(a), a change in the film thickness from 4 nm to 3 nm produces a significant change in RN for any particular Ta (820 °C, 780 °C or 750 °C) considered here. Therefore, particularly in the normal state, variation in the film thickness has a greater influence on the transport properties than Ta does. However, superconducting properties such as the critical temperature (Tc) and the transition width depend strongly on Ta; as is apparent from Fig. 1(a), the normal-metal-to-superconductor transition shifts towards lower temperature, with a broader transition width, on reducing Ta from 820 °C to 780 °C and further to 750 °C. In order to observe the effect of thickness on the overall transport properties, we have collected a set of R(T) measurements for three samples, TN8, TN9 & TN10, having different film thicknesses of about 4 nm, 3 nm & 2 nm, respectively, all grown with a fixed Ta of about 780 °C, as shown in Fig. 1(b). With the reduction in the film thickness, RN starts to increase while Tc shifts towards the lower temperature side. Here, the samples TN8 & TN9 undergo a complete superconducting transition; however, with the reduction in film thickness from 3 nm to 2 nm, the sample TN10 shows only a partial transition, as observed in Fig. 1(b). Further, we have investigated the zero-field R(T) characteristics in more detail for this set of samples (TN8, TN9, TN10) presented in Fig. 1(b).
A narrow temperature window of R(T) is considered for each of the samples in this set to emphasize the close vicinity of the metal-superconductor transition, as shown in the main panels of Fig. 2(a-c). The details of the R(T) variation in the normal state for all three samples are highlighted in the insets of Fig. 2(a-c), which show the different regions based mainly on the sign of the slope dR/dT of the measured R(T). For all the samples, while cooling down from room temperature, the resistance decreases with decreasing temperature until it reaches a minimum at the characteristic temperature Tmin, indicating metallic behavior with positive dR/dT. On further lowering the temperature, an upturn in R(T) appears, where the resistance starts to increase and reaches a maximum at the temperature Tmax. In this regime between Tmin and Tmax, the negative dR/dT indicates an insulating or semiconducting type of behavior. At temperatures below Tmax, the resistance drops sharply as the superconducting fluctuations take over. For example, the thinnest sample, TN10, with about 2 nm thickness, shows metallic behavior with positive dR/dT at T > Tmin = 95 K and an upturn accompanied by a resistance peak with negative dR/dT in the temperature window from 95 K to 5.8 K (Tmax ~ 5.8 K), as shown in Fig. 2(a).
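The partition of R(T) into these regimes by the sign of dR/dT can be automated. Below is a minimal sketch, assuming only that Tmin and Tmax appear as the local extrema of a measured R(T) trace; the synthetic data roughly mimic TN10 (dip near 95 K, peak near 5 K), and all numbers are illustrative, not measured values.

```python
def characteristic_temps(T, R):
    """Locate Tmax (resistance peak before the superconducting drop) and
    Tmin (resistance dip ending the metallic regime) from sign changes of
    the discrete slope dR/dT.  T must be sorted in ascending order."""
    t_max = t_min = None
    for k in range(1, len(T) - 1):
        if R[k] > R[k - 1] and R[k] > R[k + 1]:
            t_max = T[k]              # local maximum -> resistance peak
        if R[k] < R[k - 1] and R[k] < R[k + 1]:
            t_min = T[k]              # local minimum -> resistance dip
    return t_max, t_min

# Synthetic R(T) with a dip at 95 K, a peak at 5 K and a drop below it:
T = list(range(2, 301))
def R_model(t):
    if t >= 95:
        return 1000.0 + 2.0 * (t - 95)         # metallic, dR/dT > 0
    if t >= 5:
        return 1000.0 + 1.5 * (95 - t)         # upturn, dR/dT < 0
    return max(0.0, 1135.0 - 500.0 * (5 - t))  # superconducting drop
R = [R_model(t) for t in T]
print(characteristic_temps(T, R))  # -> (5, 95)
```

On noisy data one would smooth R(T) before taking the discrete slope, but the bookkeeping is the same.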
Finally, below 5.8 K (Tmax), the resistance starts to drop sharply as the sample transitions to the superconducting state.
Similarly, while cooling from room temperature, a resistance minimum followed by an upturn and a superconducting drop appear in the R(T) of the other two samples (TN9 & TN8), as displayed in the insets of Fig. 2(b) and 2(c). Here, the slope of the upturn increases with decreasing film thickness. This is expected when EEI plays the dominant role, as a reduction in thickness introduces more disorder, and EEI increases with disorder. An upturn in resistance can also originate from granularity. However, for granular superconducting systems, no systematic reduction in Tc occurs with decreasing thickness [22]; in the present work, as shown in Fig. 1(c) and also in Table 1, we observe that Tc decreases systematically with decreasing thickness while, at the same time, the transitions remain sharp. This indicates that granularity might not be the reason behind the observed upturn in the zero-field R(T) [23].
Further, the surface morphology, as observed through atomic force microscopy (AFM) images shown in Fig. S7 in the Supplemental Material [24], does not clearly indicate the granular nature of the films as the images reflect the surface roughness rather than isolated grains.
Generally, WL and EEI play major roles in the appearance of a resistance peak and upturn in R(T) for two-dimensional homogeneously disordered materials [4,8,25,26]. However, WL & EEI and the superconducting fluctuations are very sensitive to the dimensionality of the system: for the 2D treatment to apply, characteristic length scales such as the superconducting coherence length ξ and the thermal coherence length L_T ∝ √(ℏD/k_BT), with D the diffusion coefficient, should exceed the film thickness. Indeed, for all the samples presented here, the film thickness is less than the characteristic superconducting coherence length (9 nm) and the thermal coherence length (9 to 13 nm at 100 K). Therefore, the quantum corrections to the conductivity that are applicable to 2D materials can be used here to understand the origin of the upturn and the related resistance peak appearing in the zero-field R(T) characteristics. The sample-specific characteristic parameters, namely the annealing temperature Ta, film thickness (d), sheet resistance (Rmax) at Tmax before the superconducting transition, superconducting critical temperature (Tc) as obtained from the QCC fit, sheet resistance at 300 K (R300 K), upper critical field at T = 0 K (Bc2(0)), diffusion constant (D), and the thermal coherence length (LT) at 100 K, are listed in Table 1 for all the TiN samples. As discussed, WL and EEI in the diffusion channel (ID) play prominent roles in the appearance of the upturn and the corresponding resistive peak at Tmax in the zero-field R(T). Accordingly, we follow here the 2D treatment with WL and EEI to address the observed upturn in the zero-field R(T).
In the 2D case, the WL and EEI (ID) corrections to the conductivity can be written as [12,26]

ΔG_{WL+ID}(T) = A G00 ln(k_B T τ / ℏ),  (1)

with G00 = e²/(2π²ℏ). Here, A is a proportionality constant and τ is the electron mean free time, and they are treated as fitting parameters. From Eq. (1), the corrections to the conductivity due to both WL and EEI vary logarithmically with temperature; accordingly, the experimental R(T) data in the temperature window from Tmax to Tmin are fitted using Eq. (2), the corresponding expression for the sheet resistance, R_s(T) = 1/[G_0 + ΔG_{WL+ID}(T)]. The extracted values of A for all the samples come out to be less than 3, which is expected for homogeneously disordered thin films [26].
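Since Eq. (1) is linear in ln(T), the fit reduces to ordinary linear regression of the sheet conductance against ln(T). The sketch below assumes the standard 2D form ΔG = G0 + A·G00·ln(T); the data are synthetic, with A = 2 chosen for illustration (below the bound of ~3 quoted for homogeneously disordered films).

```python
import math

E = 1.602176634e-19          # elementary charge, C
HBAR = 1.054571817e-34       # reduced Planck constant, J s
G00 = E**2 / (2 * math.pi**2 * HBAR)   # e^2/(2*pi^2*hbar) ~ 1.23e-5 S

def fit_log_correction(T, G):
    """Least-squares fit of G(T) = G0 + slope*ln(T), the 2D WL+ID form.
    Returns (G0, A) with A = slope/G00 the dimensionless prefactor."""
    x = [math.log(t) for t in T]
    n = len(x)
    mx, my = sum(x) / n, sum(G) / n
    slope = sum((xi - mx) * (gi - my) for xi, gi in zip(x, G)) / \
            sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope / G00

# Synthetic sheet conductance with A = 2:
T = [6, 8, 10, 12, 15, 20, 30, 50, 80]           # K
G = [1e-3 + 2.0 * G00 * math.log(t) for t in T]  # S/square
G0, A = fit_log_correction(T, G)
print(round(A, 6))  # -> 2.0
```

With real data one would fit R(T) = 1/G(T) directly with a nonlinear least-squares routine, but the logarithmic slope extracted is the same quantity.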
Interestingly, on looking in more detail at the WL+ID fitting from Tmin to Tmax, we observe that the fit deviates from the experimental points before reaching Tmax. The temperature at the deviation point is marked as T*, and the vertical dashed lines in the insets of Fig. 2(a-c) correspond to T* for each sample. Close to Tc, the superconducting fluctuation related to the Aslamazov-Larkin (AL) term contributes significantly in addition to the MT term [9-11]. Further, the reduction in the density of states (DOS) due to Cooper pair formation also contributes to the superconducting fluctuations [12]. The relevant contributions from superconducting fluctuations are therefore the AL, MT, and DOS terms [9-12]. Here, the dephasing time τφ introduces phase-breaking processes due mainly to inelastic scattering (as spin-flip scattering can be ignored) [27,28], and β(T/Tc) is the strength function characterizing the electron-electron interaction, introduced by Larkin [16]. Summing up all the aforementioned contributions to the total quantum correction to the conductivity (QCC),

ΔG_QCC(T) = ΔG_WL(T) + ΔG_ID(T) + ΔG_AL(T) + ΔG_MT(T) + ΔG_DOS(T),  (6)

the experimental data from T* (the deviation point from the WL+ID fit) down to the lowest available temperature can be fitted using Eq. 6. The black solid curves in Fig. 2(a-c) are the corresponding fits to the experimental data for the samples TN10, TN9 and TN8, respectively. The fits follow the experimental data nicely, indicating the existence of superconducting fluctuations (mainly MT) above Tmax along with quantum interference (WL) and EEI. However, the contribution of WL+EEI cannot be neglected in the region from T* to Tmax, even though this region is mainly dominated by superconducting fluctuations. The superconducting critical temperature (Tc) is obtained from the MT contribution in the QCC fitting [10,11]. Moreover, the samples belonging to the other annealing temperatures (820 °C & 750 °C) demonstrate the same behaviour in R(T) as that observed for Ta = 780 °C, and the corresponding R(T) fittings using Eq. 6 are shown in Fig. S1 in the Supplemental Material (SM) [24]. Finally, the characteristic temperatures Tmin, Tmax & T* are extracted from the R(T) data presented in Fig. 2(a-c), along with Tc from the QCC fit, and plotted in Fig. 2. Summarizing, the full-scale R(T) from 300 K down to 2 K is divided into distinct regimes, starting from the metallic state (the light blue region), to the WL+EEI regime (the cyan region), to the QCC regime (the green region), to superconducting fluctuations (the yellow region), and finally to the superconducting state (the orange region). Further, the data in Fig. 1(b) are replotted as the dimensionless conductance G/G00 in Fig. 3 on a semi-logarithmic scale. The logarithmic temperature dependence of the conductance shown in Fig. 3 confirms the two-dimensionality of the samples considered for this study [8,29] and rules out the 3D theory, in which the conductance varies with temperature as G ∝ √T [30]. The logarithmic temperature dependence of the conductance is the signature of the presence of WL & EEI in 2D systems [8,29], and similar behaviour is also observed for the samples annealed at 820 °C & 750 °C, as shown in the SM (Fig. S2) [24]. The linear fits (black dashed lines) marked in Fig. 3 deviate from linearity around 10 K for the sample TN10 and around 18 K for the samples TN9 & TN8; these deviation temperatures coincide with the values of T* shown in Fig. 2(a-c), as obtained from the fit of the WL+ID contribution to the conductivity.
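The deviation temperature T* can be located numerically as the point where G/G00 leaves the linear-in-ln(T) law fitted at high temperature. A minimal sketch follows, with entirely synthetic data (the window, threshold, and coefficients are illustrative assumptions, not measured values): the conductance is logarithmic down to 10 K, with an excess fluctuation contribution added below that.

```python
import math

def find_T_star(T, G, fit_window=(30.0, 100.0), threshold=0.02):
    """Fit G = a + b*ln(T) on a high-temperature window, then walk down in
    temperature and return the first T where the data deviate from the
    linear-in-ln(T) law by more than the given relative threshold."""
    pts = [(math.log(t), g) for t, g in zip(T, G)
           if fit_window[0] <= t <= fit_window[1]]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = sum((x - mx) * (y - my) for x, y in pts) / \
        sum((x - mx) ** 2 for x, _ in pts)
    a = my - b * mx
    for t, g in sorted(zip(T, G), reverse=True):   # high T -> low T
        pred = a + b * math.log(t)
        if abs(g - pred) / abs(pred) > threshold:
            return t
    return None

# Synthetic conductance: logarithmic above 10 K, excess term below:
T = [2, 3, 4, 5, 6, 8, 10, 15, 20, 30, 50, 80, 100]
G = [1.0 + 0.1 * math.log(t) + (0.05 * (10 - t) if t < 10 else 0.0)
     for t in T]
print(find_T_star(T, G))  # -> 8
```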
A. R(T) measurements under perpendicular magnetic field
As quantum interference phenomena such as WL are very sensitive to the magnetic field, we have carried out R(T) measurements in the presence of a magnetic field applied perpendicular to the sample plane.
An external magnetic field can be used as a tool to distinguish the quantum phenomena WL & EEI through magnetoresistance (MR) & R(T) measurements. When R(T) measurements are carried out under a perpendicular magnetic field, a relatively small field can destroy the weak localization effect, but the contribution from EEI remains unaffected [14]. As already discussed, in 2D the zero-field resistance depends logarithmically on temperature for both WL and EEI; with the application of a magnetic field, however, only EEI remains responsible for the logarithmic R(T) dependence, as the field destroys WL [14]. Here, in order to identify the actual mechanism behind the upturn observed in R(T), and also to isolate the contribution from EEI alone, we have carried out R(T) measurements under an external magnetic field for a sample (TN10A) selected from the same batch as TN10; the corresponding field-dependent R(T) is shown in Fig. 4.
Fig. 4: Field-dependent R(T) measurements for the sample TN10A (from the same batch as TN10), with the magnetic field applied perpendicular to the sample plane. (a) Temperature dependence of the conductance (G) on a semi-logarithmic scale under various applied magnetic fields. The red solid lines are linear fits, indicating that the temperature dependence of the conductance remains logarithmic even after applying a magnetic field. (b) The corresponding R(T)s measured under the magnetic fields mentioned in (a). The field-dependent R(T)s are fitted with the (WL+ID) contribution using Eq. (2), and the fits are shown by the solid red curves. Inset: three selected R(T)s from the main panel, measured under 0 T, 3 T & 5 T, for a clearer view. For a particular field, the deviation of the (WL+ID) fit from the experimental data occurs at T*, which is marked by the dashed vertical line. (c) The variation of Tmax & the corresponding maximum resistance value with the applied magnetic field. Here, the saturation of Tmax above 3.75 T appears due to the temperature limitation of the measuring instrument, for which 2 K is the lowest achievable temperature. (d) B-T phase diagram obtained from the T* & Tmax extracted from the field-dependent R(T) data.
First, we have considered the temperature dependence of the conductance G (1/Rs) measured under an external magnetic field, presented in Fig. 4(a), where the logarithmic temperature dependence is observed for all the applied fields. At 4 T and above, the resistance continues to increase with decreasing temperature down to the lowest accessible temperature (2 K), indicating an insulating type of behavior, as evident from Fig. 4(b).
Here, we have fitted the resistive upturn from 25 K down to the lower-temperature regime using Eq. 2, which deals with the (WL+ID) correction for a 2D disordered metal; the fits are represented by the red solid curves. For clarity, three representative R(T)s, measured at 0 T, 3 T and 5 T, are shown in the inset of Fig. 4(b). Here, the deviation of the WL+ID fit from the experimental data is evident for 0 T and 3 T, and the related temperature T* is marked by the vertical dashed lines. For the 5 T field, however, the fit extends over the whole experimental range down to the lowest temperature (2 K). Therefore, T* shifts towards lower temperature with increasing magnetic field. Here, the upturn region from T* to Tmax can be attributed to the presence of superconducting fluctuations along with the WL & EEI contributions. Further, when T* is no longer distinguishable from Tmax, as in the case of higher fields, the magnetic field destroys the superconducting fluctuations & the WL contribution, but EEI remains prominent and the upturn in R(T) gets stronger. Therefore, at high magnetic field (5 T), the upturn region from Tmin to Tmax is mainly due to the presence of EEI.
Further, Tmax and the respective resistance values in the presence of an external magnetic field are displayed in Fig. 4(c). As observed in Fig. 4(b), with increasing magnetic field, Tmax shifts towards lower temperature. Above 3.75 T, Tmax saturates at 2 K due to the limitation in the measurement temperature, as the lowest accessible temperature of the system is 2 K. The resistance at Tmax, in contrast, shows the reverse trend with magnetic field and reaches a maximum value of around 2 kΩ at 5 T at the lowest temperature, 2 K. The increase in the resistance at Tmax with magnetic field is opposite to the phenomenon of WL, in which a relatively small magnetic field destroys the constructive quantum interference of scattered electron waves and hence causes a reduction in resistance under magnetic field.
Further, the presence of a constant slope in R(T) at high magnetic field and low temperature is a signature of EEI, as evident from Fig. 4(a) & (b). Therefore, it is clear from the field-dependent R(T) measurements that EEI in the diffusion channel is the main mechanism behind the observed upturn and the associated resistance peak in R(T). Moreover, we have carried out MR measurements at temperatures far above Tc in order to gauge the contribution from WL. No trace of the negative MR that is the signature of WL is observed, even at temperatures above T*, where the superconducting fluctuations can be ignored. We observe positive MR at temperatures above T*, which confirms the presence of EEI rather than WL. The MR at higher temperatures for the sample TN10A is shown in Fig. S3 in the SM [24].
Furthermore, from the variation of Tmax and T* with the field, we have constructed a phase diagram, shown in Fig. 4(d). Here, the extracted temperatures Tmax & T* are observed to shift towards lower temperature under the application of a magnetic field, and they finally meet at about 4.75 T, which is marked as the crossover field from the QCC regime to the EEI-dominated regime at higher magnetic field.
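Constructing such a B-T phase boundary amounts to locating the resistance peak Tmax in each field-dependent R(T) trace and tabulating it against the field. A minimal sketch with synthetic traces follows (the field values, peak temperatures, and parabolic peak shape are illustrative assumptions only; they do not reproduce the measured data).

```python
def resistance_peak(T, R):
    """Return (Tmax, Rmax): the temperature and height of the resistance
    maximum in a measured R(T) trace."""
    i = max(range(len(R)), key=lambda k: R[k])
    return T[i], R[i]

# Synthetic traces in which the peak moves to lower T with field:
temps = [2.0 + 0.5 * k for k in range(17)]        # 2 .. 10 K
boundary = {}
for B in (0.0, 1.0, 2.0, 3.0):                    # tesla
    t_peak = 6.0 - B                              # assumed peak position
    R = [2000.0 - 50.0 * (t - t_peak) ** 2 for t in temps]
    boundary[B] = resistance_peak(temps, R)[0]
print(boundary)  # -> {0.0: 6.0, 1.0: 5.0, 2.0: 4.0, 3.0: 3.0}
```

The same bookkeeping with T* (from the WL+ID fit deviation) in place of Tmax yields the second branch of the phase diagram.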
In addition to the field-dependent R(T) measurements, we have carried out isothermal MR measurements to gain insight into the interplay between EEI and the superconducting fluctuations in the disordered superconducting TiN thin films presented here. The MR isotherms measured above Tc are shown in Fig. 5 (and in Fig. S4 of the SM for the remaining samples [24]). As mentioned before, Tc is obtained from the QCC fit using Eq. 6, and for T > Tc all the samples show a positive magnetoresistance, originating typically from superconducting fluctuations [31]. With increasing temperature, the MR curves shift towards lower magnetic field and the effect of the magnetic field gets suppressed. The corresponding magnetoconductance isotherms are analyzed in Fig. 6; note that WL gives a positive contribution to the magnetoconductivity [31]. Theoretically, the field-dependent electrical conductance can be expressed as

G(B) = G_Drude + ΔG_SF(B) + ΔG_WL(B),  (7)

where the first term denotes the Drude conductance, the second term represents the conductance from superconducting fluctuations, and the last term originates from the disorder-induced quantum interference of scattered electronic waves. As only quantum contributions are considered here, we omit the classical Drude conductance from Eq. 7, and the total quantum correction to the magnetoconductivity can be expressed as

ΔG(B) = ΔG_AL(B) + ΔG_MT(B) + ΔG_WL(B).  (8)

The AL contribution (Eq. 9) is governed by a characteristic field BSF, while the MT contribution involves β(ν, T/Tc), the parameter that determines the effective electron-electron interaction strength [16], and δ, the pair-breaking or cut-off parameter [27,31,33]. Lopes dos Santos and Abrahams [17] have extended Larkin's result to lower temperatures (close to Tc, where ln(T/Tc) ≪ 1) and to a higher magnetic field range [17,31,33], because Larkin's calculation excludes the immediate vicinity of Tc and is restricted to low magnetic fields. The extended form of the MT contribution from Lopes dos Santos and Abrahams is used here (Eq. 10), with the field- and temperature-dependent strength β(ν, T/Tc) given by Eq. 11 [17,33]. Thirdly, the WL contribution to the MC can be written as

ΔG_WL(B)/G00 = N f₂(B/Bφ),  (12)

where f₂(x) = ln(x) + ψ(1/2 + 1/x), with ψ the digamma function, and Bφ = ℏ/(4eDτφ). Here, τφ is the dephasing scattering time, the associated magnetic field Bφ is known as the phase-breaking field, and D is the diffusion constant.
The coefficient N in Eq. (12) represents the number of channels participating in the conduction process [28]. Further, we have calculated the magnetoconductance from the measured longitudinal magnetoresistance, with the resistance in Ohm/square, using the expression ΔG(B) = 1/Rs(B) − 1/Rs(0), and have plotted it in units of G00 = e²/(2π²ℏ), i.e., as ΔG(B)/G00, in Fig. 6.
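This conversion from the measured sheet resistances to the dimensionless magnetoconductance is a one-line computation; a minimal sketch (the resistance values in the example are illustrative, not measured):

```python
import math

E = 1.602176634e-19      # elementary charge, C
HBAR = 1.054571817e-34   # reduced Planck constant, J s
G00 = E**2 / (2 * math.pi**2 * HBAR)   # e^2/(2*pi^2*hbar) ~ 1.233e-5 S

def delta_g(Rs_B, Rs_0):
    """Dimensionless magnetoconductance dG(B)/G00 computed from the sheet
    resistance (Ohm/square) in field, Rs_B, and in zero field, Rs_0."""
    return (1.0 / Rs_B - 1.0 / Rs_0) / G00

# A positive MR (Rs grows with field) gives a negative magnetoconductance:
dg = delta_g(2100.0, 2000.0)
print(round(dg, 3))  # -> -1.931
```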
Here, in selecting the magnetic field range for fitting the experimental data shown in Fig. 6, we have taken care of the dimensionality discussed above and of the range of validity of the MT expression.
Combining the AL, MT & WL contributions using Eqs. 9, 10 & 12, respectively, we have fitted the experimental magnetoconductance for temperatures above Tc, i.e., t > 1, where t = T/Tc is the reduced temperature. For the fits, shown by the red solid curves in Fig. 6, the characteristic fields have been used as the free parameters, and β(ν, T/Tc) is taken exactly as expressed in Eq. 11. The fits show excellent agreement with the experimental data, as seen in Fig. 6. For a clearer view, the magnetoconductance isotherms for the sample TN10 are split into two sets based on the temperature range: the set with t in the range from 1.08 to 1.55 is shown in Fig. 6(a), and the second set, up to t = 2.59, is shown in Fig. 6(b). For TN9 and TN8, the MC data along with the fits are presented in Fig. 6(c) and (d), respectively. The MC curves and the corresponding fits for the rest of the samples, belonging to Ta = 820 °C & 750 °C, are shown in the SM (Fig. S5) [24]. From the fits, τφ has been extracted through the phase-breaking field Bφ. Further, the coefficient β(ν, T/Tc) for the MT contribution to the MC has been extracted and is plotted against the reduced temperature in Fig. 7(a). The dependence of β(ν, T/Tc) on the reduced temperature follows a unanimous trend and does not depend on the growth parameters as long as the two-dimensionality is maintained; in this case, the growth parameters are mainly the annealing temperature (Ta) and the sample thickness (d). As evident in Fig. 7(a), β(ν, T/Tc) shows a weak dependence on, or becomes almost independent of, temperature for T/Tc > 1.5, and it diverges as T approaches Tc. To confirm this generalized trend, the inset of Fig. 7(a) magnifies the diverging part of β(ν, T/Tc) close to Tc, and indeed all the samples follow a common path in the plot. Similarly, the dephasing times extracted from the fits for all the TiN samples are presented as the inverse phase scattering rate τφ⁻¹ with respect to the reduced temperature in Fig. 7(b).
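Extracting τφ from the fitted phase-breaking field uses only the relation Bφ = ℏ/(4eDτφ) quoted above. A minimal sketch, with illustrative (assumed) values of D and Bφ rather than the fitted ones:

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J s
E = 1.602176634e-19      # elementary charge, C

def tau_phi_from_B_phi(B_phi, D):
    """Dephasing time from the phase-breaking field via
    B_phi = hbar / (4 e D tau_phi);  B_phi in tesla, D in m^2/s."""
    return HBAR / (4.0 * E * D * B_phi)

# Illustrative numbers: D = 0.5 cm^2/s and B_phi = 0.1 T give a
# dephasing time of a few tens of picoseconds:
tau = tau_phi_from_B_phi(0.1, 0.5e-4)
print(f"{tau:.2e} s")  # -> 3.29e-11 s
```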
Here too, we observe that the samples follow a general trend, with two distinct regions of clearly different slope that are highlighted by two different shades. The first region, highlighted by the cyan shade (close to Tc), shows an abrupt decrease in τφ⁻¹ as the temperature approaches Tc from above. The second region, highlighted in the light green shade for T/Tc > 1.17, demonstrates a linear variation of τφ⁻¹ with temperature.
Generally, three inelastic scattering mechanisms lead to phase relaxation in 2D superconductors in the dirty limit, with the corresponding scattering rates being (i) the electron-phonon scattering rate τe-ph⁻¹, (ii) the inelastic electron-electron scattering rate τe-e⁻¹, and (iii) the inelastic scattering rate of electrons due to superconducting fluctuations, τe-SF⁻¹ [34]. The electron-phonon scattering rate (τe-ph⁻¹ ∝ T³) is prominent mainly at high temperature [27,35]. As the magnetotransport study presented in this work has been carried out in the temperature range T ≤ 2.5Tc, where superconducting fluctuations and electron-electron interaction are the dominant mechanisms, electron-phonon scattering can be ignored [13]. Further, for a dirty superconductor in the 2D limit, where the thermal diffusion length exceeds the sample thickness (which is the case for the present set of samples, as listed in Table 1), the inelastic electron-electron scattering rate varies linearly with temperature (Eq. 15). The experimental data points show excellent agreement with the theoretical expression given in Eq. 15 for inelastic electron-electron scattering, as shown by the red solid curves in Fig. 7(b) for the light-green shaded region. The region below 1.17Tc, however, deviates from this behavior: as the temperature approaches Tc, the inelastic scattering of electrons due to superconducting fluctuations becomes prominent, and the electron-fluctuation rate τe-SF⁻¹ is given by Eq. 16 [36,37], which contains ln(T/Tc) in its denominator.
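Since the exact prefactor of Eq. 15 is lost from the extracted text, the sketch below assumes the common Nyquist (Altshuler-Aronov-Khmelnitsky) form of the 2D inelastic electron-electron dephasing rate, which is linear in T as the text states; the sheet resistance and temperatures are illustrative.

```python
import math

KB = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J s
E = 1.602176634e-19      # elementary charge, C

def tau_ee_inv(T, Rs):
    """2D inelastic electron-electron (Nyquist) dephasing rate; the exact
    prefactor here is an assumption, not taken from the source:
    1/tau = (kB T / hbar) * (e^2 Rs / 2 pi hbar) * ln(pi hbar / e^2 Rs)."""
    g = E**2 * Rs / (2.0 * math.pi * HBAR)
    return (KB * T / HBAR) * g * math.log(math.pi * HBAR / (E**2 * Rs))

# The rate is linear in T, as stated for the light-green shaded regime:
r3, r6 = tau_ee_inv(3.0, 2000.0), tau_ee_inv(6.0, 2000.0)
print(round(r6 / r3, 6))  # -> 2.0
```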
According to Eq. 16, a strong upturn in the temperature dependence of τφ⁻¹ at T very close to Tc is predicted, as the ln(T/Tc) in the denominator of the τe-SF⁻¹ term becomes almost zero. However, in the present study we observe a strong downturn in τφ⁻¹ as the temperature approaches Tc from above. Similar behavior of τφ⁻¹ near Tc has also been observed for In/InOx composite films [38] and for Re70W30 and Nb1-xTax thin films [39,40]. The phase relaxation rate τφ⁻¹ in this regime varies proportionally to (T − Tc), as shown by the red dashed curves in Fig. 7(b), in a fashion similar to that of the Ginzburg-Landau (GL) phase relaxation rate τGL⁻¹ = (8kB/πℏ)(T − Tc) at T very close to Tc [40].
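For comparison with the extracted rates, the GL relaxation rate quoted above is straightforward to evaluate; a minimal sketch using Tc = 2.43 K, the value quoted for the black curve in Fig. 7(b) (the evaluation temperature is illustrative):

```python
import math

KB = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J s

def tau_GL_inv(T, Tc):
    """Ginzburg-Landau pair relaxation rate near Tc,
    1/tau_GL = 8 kB (T - Tc) / (pi hbar):
    linear in (T - Tc), vanishing at Tc."""
    return 8.0 * KB * (T - Tc) / (math.pi * HBAR)

rate = tau_GL_inv(2.6, 2.43)   # Tc = 2.43 K from the text
print(f"{rate:.2e} 1/s")  # -> 5.67e+10 1/s
```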
In the immediate vicinity of Tc, the resemblance of the temperature variation of τφ⁻¹ to that of τGL⁻¹ can be understood from the AL contribution, which is one of the most prominent contributions to the MC near Tc [41]. The AL term is fundamentally different from the other terms, as the characteristic field BSF (Eq. 9) used in the AL term differs from the characteristic dephasing field used in the other terms [41]. The characteristic field BSF is associated with the Ginzburg-Landau time τGL = πℏ/[8kBT ln(T/Tc)] [40]. Hence, the phase relaxation rate obtained from the AL term near Tc measures the GL phase relaxation rate τGL⁻¹ [40]. In Fig. 7(b), the solid black curve for T/Tc < 1.17 represents τGL⁻¹ for Tc = 2.43 K, and the open symbols represent the inverse characteristic time related to BSF. It is clear from the figure that the phase relaxation rate related to BSF closely follows the GL phase relaxation rate τGL⁻¹ for the set of samples presented in Table 1. The deviation of the experimental points from the GL rate τGL⁻¹, as seen in Fig. 7(b), may originate from contributions of normal electrons that are modified by superconducting fluctuations near the transition [42]. Therefore, the first regime, with the steeper slope, clearly relates to dephasing caused by superconducting fluctuations, whereas the second regime corresponds to dephasing induced by inelastic electron-electron scattering, and all the samples follow this generic trend, as evident in Fig. 7(b).
IV. CONCLUSION
To summarize, we have revisited the quantum corrections to the conductivity (QCC) for disordered 2D superconducting TiN thin films. We observe a strong interplay between the superconducting fluctuations and the electron-electron interaction. The R(T) measurements carried out under zero magnetic field feature different regimes that are mainly defined by the sign of the slope dR/dT. Transitions from a metallic to a weakly insulating and further to a superconducting regime are obtained for the samples while cooling down from room temperature to 2 K. In spite of having different growth conditions, such as annealing temperature and film thickness, all the samples presented in this article follow a similar trend in their zero-field R(T) characteristics, which feature a resistance dip at the end of the metallic state, an upturn along with a resistance peak indicating a weakly insulating state, and finally a sharp drop in the resistance due to the onset of superconductivity. We have shown that the intermediate upturn, and hence the weakly insulating type of behavior in the zero-field R(T), is mainly due to the electron-electron interaction, which is further supported by the field-dependent R(T). The samples presented here have low resistance (<1.5 kΩ), and hence the observed upturns are also weak. In order to observe a stronger upturn, we have measured a highly resistive sample with Rmax ~ 18 kΩ, which indeed shows a much stronger upturn in the zero-field R(T). The results from the zero-field R(T) and from R(T) under an external magnetic field (Fig. S6 in the Supplemental Material [24]) are consistent with those obtained from the low-resistance samples presented here.
Here, weak localization does not play any significant role, as is also supported by the MR measurements (Fig. 2 & Fig. S1).
"year": 2023,
"sha1": "3e84f5f74f19108bfaccb0ac9c812f2616853291",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3e84f5f74f19108bfaccb0ac9c812f2616853291",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Impact of Inoculating with Indigenous Bacillus marcorestinctum YC-1 on Quality and Microbial Communities of Yibin Yacai (Fermented Mustard) during the Fermentation Process
Bacillus species play an important role in improving the quality of some fermented foods and are also one of the dominant bacteria in Yibin Yacai (fermented mustard). However, little is known about their effects on the quality of Yibin Yacai. Here, the effect of Bacillus marcorestinctum YC-1 on the quality and microbial communities of Yibin Yacai during the fermentation process was investigated. Results indicated that the inoculation of Bacillus marcorestinctum YC-1 promoted the growth of Weissella spp. and Lactobacillus spp. and inhibited the growth of pathogens, accelerating the synthesis of free amino acids and organic acids and the degradation of nitrite. Furthermore, inoculating Yibin Yacai with YC-1 could effectively enhance the synthesis of alcohols and terpenoids in yeasts, thus producing more linalool, terpinen-4-ol, and α-muurolen in Yibin Yacai, and endowing it with pleasant floral, fruity, woody, and spicy aromas. These findings reveal that the inoculation of B. marcorestinctum YC-1 can improve the quality and safety of Yibin Yacai by changing microbial communities as fermentation proceeds.
Introduction
Fermented foods have received increasing attention because of their nutritional and health benefits [1]. Among these foods, fermented vegetables are widely favored by people thanks to their rich nutrients and flavors and prolonged shelf-life [2]. In China, Yibin Yacai, one of the most typical fermented vegetables, has received increasing attention due to its favorable fragrance, crispness, and sweetness; its annual processing is up to 200,000 tons [3,4].
Yibin Yacai is traditionally manufactured by spontaneous fermentation and contains a wide variety of microbial communities during fermentation and storage [5]. In Yibin Yacai, Halomonas and Bacillus were found as the dominant bacteria, while Saccharomycetales and Debaryomyces were the major fungi [6,7]. Microbes play a crucial role in the unique flavor formation of fermented vegetables; thus, the inoculation of a pure strain as a starter becomes one of the main methods to improve the quality of fermented vegetables [8,9]. Some microbial communities are positively correlated with the good quality of fermented products [10]. For instance, Bacillus spp. can positively affect the flavor formation of soy sauce [11]. The participation of Bacillus licheniformis can shorten the fermentation period and enrich the metabolite profile, thus improving the functionality and safety of sufu [12]. Moreover, bioaugmentation inoculation of Bacillus spp. could increase the abundance of Lactobacillus and Candida, which were considered the core microbes in Daqu, and thus improve the flavor character of Daqu [13,14]. Our previous studies also demonstrated that the inoculation of Bacillus spp. could promote the growth of Lactobacillus and Lactococcus, enhance flavor, and improve the safety of Sichuan paocai [15,16]. However, the correlation between quality and microbial communities during the fermentation of Yibin Yacai is largely unknown. We surmised that the inoculation of Bacillus spp. could positively tune the fermentation of microbial communities involved in Yibin Yacai. Hence, systematic research on the correlation between quality and microbial communities during the fermentation period is performed to elucidate the properties of Yibin Yacai.
To this end, herein, B. marcorestinctum YC-1 (NCBI GenBank accession No.: OM 033504; Figure S1A,B and Table S1), a Bacillus sp. isolated from a commercial Yibin Yacai in our lab [17], was inoculated as a starter to ferment Yibin Yacai, and its role in changing the physicochemical characteristics, flavor-relevant compounds, and microbial communities of the resulting Yibin Yacai during fermentation was systematically investigated. Meanwhile, the correlation between metabolites and microbes after B. marcorestinctum YC-1 inoculation was established to reveal its positive role in adjusting the fermentation of Yibin Yacai. The obtained results provide great insights into how to inoculate Bacillus spp. to tune the fermentation of Yibin Yacai in order to produce high-quality, safe Yibin Yacai.
Preparation and Sampling of Yibin Yacai
B. marcorestinctum YC-1 was isolated from a commercial Yibin Yacai fermented for 5 years (Sichuan Hefeisi Biotechnology Co., Ltd., Sichuan, China). The safety evaluation showed that B. marcorestinctum YC-1 exhibits no hemolytic activity (Figure S1C) and no resistance to the tested antibiotics (Table S2), demonstrating great potential for food fermentation. B. marcorestinctum YC-1 was cultured on nutrient agar (NA) at 37 °C for 24 h. A single colony was then inoculated into nutrient broth (NB) and shaken for 24 h until the bacterial suspension reached 10^8 CFU/mL. The starter of B. marcorestinctum YC-1 was collected by centrifugation at 6000 rpm for 10 min at 4 °C.
The fresh Er ping Zhuang mustard (Brassica juncea Coss. var. faliosa Bailey, belonging to the Cruciferae family) was collected locally in Yibin city, and its manufacturing process is detailed in Figure 1. Firstly, the separated roots were cut into even strips and ventilated for 24 h. After salting with 12% NaCl for 24 h, Yacai was obtained by washing with warm water. Then, Yacai was sugared with 15% brown sugar for 24 h and further seasoned with spices, including 10% anise, 5% galangal, 5% cinnamon, and 2% Sichuan pepper. To investigate the role of B. marcorestinctum YC-1 inoculation, two groups were fermented in glass jars at 20-25 °C for three months to obtain Yibin Yacai. One group, called BMF, was inoculated with B. marcorestinctum YC-1, while the other, NF, was untreated. After 10, 30, 60, and 90 days of fermentation, BMF and NF were sampled in triplicate and stored at −80 °C for subsequent tests.
Determination of Physicochemical Characteristics
The pH, reducing sugar, nitrite, and salinity of both NF and BMF samples after 10, 30, 60, and 90 days of fermentation were determined. Reducing sugar and nitrite were measured according to Chinese national standards (GB 5009.7-2016 and GB 5009. . pH of the samples was measured on a pH meter (PHS25, INESA, Shanghai, China), and their salinity was determined by a salinity meter (ES-421, ATAGO, Tokyo, Japan).
Organic Acids (OAs) Analysis
The content of OAs was measured by high-performance liquid chromatography (HPLC) according to a published procedure [18]. The separation was carried out on an Amethyst C18-H column (5 μm, 4.6 × 250 mm, Sepax Technologies, Inc., Newark, DE, USA) at a temperature of 30 °C, and CH3OH/H2O (5:95, v/v) was used as an eluent. The flow rate of the HPLC 1260 Infinity II (Agilent Technologies, Inc., Palo Alto, CA, USA) was 0.6 mL/min, and the injection volume was 20 μL. OAs were detected by a diode array detector (DAD) at 210 nm.
Free Amino Acids (FAAs) Analysis
FAAs were extracted according to the Chinese national standard (GB/T 30987-2020) and analyzed on an automatic amino acid analyzer A300 (MembraPure GmbH, Berlin, Germany), as suggested by a reported procedure [16]. The injection volume was 20 µL, and the chromatograms were analyzed with aminoPeak software.
Volatile Compounds (VCs) Analysis
VCs in Yibin Yacai were collected by headspace solid-phase microextraction (HS-SPME) according to a modified procedure [19]. The SPME holder 57330-U (Supelco, Sigma Aldrich, St. Louis, MO, USA) with a DVB/CAR/PDMS fiber (50/30 µm) was used for the collection of VCs, and the collected VCs were separated and detected by a gas chromatography-mass spectrometer (GCMS-QP2010 SE, Shimadzu, Kyoto, Japan). The temperature program of the GC-MS followed a previous study [17], and the VCs were identified by calculating their retention indices (RI) against n-alkanes (C8-C19) according to the NIST14 MS data library. 3-Octanol was used as an internal standard to quantify the identified VCs, and the specific odors of the identified VCs were analyzed on Perflavory (http://www.perflavory.com/, last accessed on 29 September 2022).
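The identification and quantification steps above reduce to two simple calculations: a linear retention index interpolated between the two bracketing n-alkanes, and a peak-area ratio against the 3-octanol internal standard. The sketch below illustrates both; all retention times, areas, and amounts are hypothetical examples, not values from this study.

```python
def retention_index(t_x, alkane_times):
    """Linear retention index: interpolate the analyte's retention
    time t_x between the two bracketing n-alkanes.
    alkane_times maps carbon number -> retention time."""
    carbons = sorted(alkane_times)
    for n, n1 in zip(carbons, carbons[1:]):
        t_n, t_n1 = alkane_times[n], alkane_times[n1]
        if t_n <= t_x <= t_n1:
            return 100 * (n + (t_x - t_n) / (t_n1 - t_n))
    raise ValueError("analyte elutes outside the alkane window")

def quantify_ug(peak_area, is_area, is_amount_ug):
    """Semi-quantification relative to the internal standard (3-octanol):
    analyte amount = (analyte area / IS area) * IS amount."""
    return peak_area / is_area * is_amount_ug

# Hypothetical retention times (min) for the C10-C12 n-alkanes
alkanes = {10: 12.0, 11: 14.0, 12: 16.2}
ri = retention_index(13.0, alkanes)  # halfway between C10 and C11 -> 1050.0
```

The computed RI is then matched against library values (here, NIST14) to assign an identity to the peak.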
Bioinformatic Analysis
Trimmomatic software was used to remove ambiguous bases (N) from the reads and to truncate each read at the position where the average base quality fell below 20, retaining the preceding high-quality sequence [20]. Reads with ambiguous or homologous sequences, or shorter than 200 bp, were discarded, and only reads with 75% of bases above Q20 were retained using QIIME software (Version 1.8.0) [21]. After that, primer sequences were removed from the clean reads, which were then clustered into operational taxonomic units (OTUs) using Vsearch software (Version 2.4.2) [22] with a 97% similarity cut-off. The representative read of each OTU was selected using the QIIME package and annotated and blasted against the database.
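A minimal sketch of the read-cleaning logic just described: Trimmomatic-style sliding-window quality trimming, followed by the length and Q20-fraction filters. This is an illustrative re-implementation under stated assumptions (window size 4, Phred-scaled qualities), not the tools' actual code.

```python
def sliding_window_trim(quals, window=4, threshold=20):
    """Return the cut position: the read is truncated where the mean
    Phred quality inside the sliding window first drops below threshold
    (assumed window size of 4, as in Trimmomatic's common default)."""
    if len(quals) < window:
        return len(quals)
    for i in range(len(quals) - window + 1):
        if sum(quals[i:i + window]) / window < threshold:
            return i
    return len(quals)

def passes_filters(seq, quals, min_len=200, q=20, frac=0.75):
    """Discard reads containing N or shorter than min_len, and keep only
    reads whose fraction of bases at or above Q20 reaches 75%."""
    if 'N' in seq or len(seq) < min_len:
        return False
    return sum(1 for x in quals if x >= q) / len(quals) >= frac
```

A read would first be trimmed with sliding_window_trim and the surviving prefix then checked with passes_filters before OTU clustering.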
Statistical Analysis
All experiments were conducted in triplicate, and the results are represented as mean ± standard deviation. Significant differences (p < 0.05) were assessed by one-way ANOVA and Student's t-test in IBM SPSS Statistics 26 software (SPSS Inc., Chicago, IL, USA). Graphs were generated with Origin 2021 (OriginLab Corporation, Northampton, MA, USA), and orthogonal partial least squares-discriminant analysis (OPLS-DA) and principal coordinates analysis (PCoA) were carried out using SIMCA-P software (Umetrics, Umeå, Sweden). Metabolic pathways were analyzed based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database (https://www.genome.jp/kegg/pathway.html, last accessed on 29 September 2022), and correlations were analyzed with an R package (AT&T Bell Laboratories, Murray Hill, NJ, USA).
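For triplicate measurements like those here, a one-way ANOVA reduces to the ratio of between-group to within-group mean squares. The sketch below computes only the F statistic; the SPSS run used in this study additionally supplies the p-value from the F distribution. The example groups are hypothetical, not data from this paper.

```python
def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-group (residual) sum of squares
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical triplicates, e.g. reducing sugar in NF vs. BMF
f_stat = one_way_anova_f([9.7, 9.8, 9.9], [11.2, 11.3, 11.4])
```

A large F relative to the F(k-1, N-k) distribution corresponds to p < 0.05 in the SPSS output.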
Physicochemical Characteristics
pH plays an important role in fermentation, and a value of about 4 usually indicates the maturity of fermented vegetables [5,23]. As shown in Table 1, the pH value of both groups reached 4 at the end of fermentation, revealing the maturity of Yibin Yacai. However, BMF had a lower total acid content of 11.81 g/kg compared with NF (13.81 g/kg), suggesting that inoculation of YC-1 decreased the production of acids during fermentation. Reducing sugar, as a kind of carbon source, can be metabolized and converted into flavors by microbes [24]. In the present study, the reducing sugar contents of both groups kept increasing throughout the fermentation (Table 1). On day 90, the reducing sugar reached 11.31 g/100 g in BMF, which was significantly higher than that in NF (9.83 g/100 g). This finding might be associated with the activities of cellulase-secreting microbes, because reducing sugar is mainly obtained from cellulose in vegetables by the action of cellulase secreted by microbes [10].
Nitrite is a vital index for evaluating the safety of fermented foods and can be formed from nitrate by nitrate reductase [24]. On day 90, the nitrite content was 2.37 mg/kg in BMF, only 46% of that in NF at the same time (Table 1). This was consistent with the results of the physiological and biochemical identification tests (Table S1), which suggested that the inoculation of B. marcorestinctum YC-1 could promote the nitrite metabolic pathway [16]. Salinity can directly affect the taste and flavor of fermented foods by changing the microbial structure [25]. During fermentation, the salinity of all samples remained between 3% and 4%; it was lowest on day 60, at 2.97% in NF and 3.08% in BMF. These results showed that fermentation had little effect on the salinity of Yibin Yacai. Collectively, the inoculation of B. marcorestinctum YC-1 could effectively increase the content of reducing sugar and decrease the content of nitrite, thereby generating Yibin Yacai with enhanced nutritional value and safety.
Changes in OAs during Fermentation
As important metabolites of microbes, OAs not only provide unique flavors to Yibin Yacai, but also inhibit the growth of undesirable microbes [26]. As shown in Figure 2A, seven OAs were detected, and their contents in both groups increased initially and then declined after 30 days of fermentation. Compared to NF, BMF had a higher content of OAs after 90 days of fermentation: its total OA content reached 145.39 mg/g on day 90, 1.52 times that of NF. This higher OA content indicated that Yibin Yacai in the BMF group possessed a unique, mixed, pleasant odor, for OAs can endow fermented foods with distinctive odors, such as pungent, sour, vinegar-like, and cheesy [27]. Among these seven OAs, malic acid was the most abundant during the whole fermentation process, accounting for 72% and 77% of the total OAs in NF and BMF, respectively, on day 90. These results were in accordance with a previous study that found malic acid is the major organic acid in cruciferous vegetables [2]. In addition, the contents of lactic and acetic acids in both groups increased after 90 days of fermentation, which could be attributed to the lactic fermentation involved in the process [2,28]. At the end of fermentation, BMF had higher contents of lactic and acetic acids than NF, suggesting that inoculation with B. marcorestinctum YC-1 may promote lactic fermentation by changing microbial communities [23]. The above results revealed that the inoculation of B. marcorestinctum YC-1 can significantly increase the contents of OAs, promoting fermentation and tuning the flavors of Yibin Yacai.
Changes in FAAs during Fermentation
Seventeen FAAs were detected in all samples (Table S4) and can be divided into sweet amino acids, umami amino acids, and bitter amino acids [29]. As shown in Figure 2B, BMF always had a higher content of FAAs than NF, even though the contents of FAAs mostly declined during the entire fermentation. On day 90, BMF contained 84.72 mg/100 g of FAAs, while NF contained 62.27 mg/100 g (Table S4). It is worth noting that sweet amino acids were the major FAAs, and BMF on day 90 had 33.48 mg/100 g of sweet amino acids, which was higher than NF (23.88 mg/100 g) (Table S4).
FAAs are mainly converted from proteins by the decomposition of peptidase, and the richer contents of FAAs in BMF might be due to the promotion of peptidase activity after inoculation [30]. However, the contents of FAAs decreased as the fermentation progressed, because they could be converted into small flavoring molecules by microbes [9]. Additionally, FAAs are important contributors to the taste [29], and sweet and umami amino acids were significantly increased, thus enhancing the taste of sweetness and freshness in BMF. Overall, the inoculation of B. marcorestinctum YC-1 could significantly increase the contents of FAAs in Yibin Yacai, enriching its final sweet and umami flavors.
Changes in VCs during Fermentation
A total of 126 VCs were detected in NF and BMF (Tables S5 and S6), and they were grouped as acids, alcohols, aldehydes, heterocycles, ethers, esters, ketones, terpenoids, and hydrocarbons according to their chemical structures (Figure 2C). In NF and BMF, the contents of VCs declined in the early stage and then significantly increased until the end of fermentation. On day 90, BMF presented 62,617.96 µg/100 g of VCs, while NF had 52,200.85 µg/100 g, and esters and terpenoids were the major VCs in both groups (Tables S5 and S6). Furthermore, a significant increase of terpenoids was found in both groups after fermentation, and 10,885.64 µg/100 g of terpenoids was detected in BMF on day 90, about 1.79 times that in NF (6066.98 µg/100 g). Interestingly, methyl cinnamate, ethyl cinnamate, (+)-α-pinene, and γ-elemene were only found in BMF, and these esters and terpenoids contributed balsamic, sweet, fruity, and spicy aromas. Meanwhile, a significantly higher alcohol content of 10,270.92 µg/100 g was found in BMF on day 90, 92% higher than that in NF. Among these alcohols, terpinen-4-ol and γ-terpineol existed only in BMF and carry the fragrance of lilac and pine. Furthermore, the content of linalool was significantly higher in BMF on day 90 (4355.95 µg/100 g), about 2.13 times that in NF (2046.01 µg/100 g), which could confer Yibin Yacai with sweet, floral, and fruity citrus-like aromas [24].
Based on these identified VCs, six representative odorants were selected according to a previous study [31], and characteristic odorant analysis was performed. As depicted in Figure 2D, a great difference was shown between the two groups at the end of fermentation. Floral and fruity were dominant aromas in BMF, and balsamic and herbal were dominant aromas in NF. These findings were consistent with the results of VCs analyses. Combined with the results of VCs and characteristic odorant analyses, we can conclude that BMF presented a better flavor than NF at the end of fermentation, suggesting that the inoculation of B. marcorestinctum YC-1 could greatly improve the richness of VCs in Yibin Yacai, generating a better flavor.
Significant Metabolites
To identify the differences in metabolites between NF and BMF, OPLS-DA and S-plot models were constructed (Figure 3). As shown in Figure 3A, NF and BMF could be completely separated on days 60 and 90, while their spatial distance was small on day 30, indicating that fermentation after day 30 was an important period for the flavor development of Yibin Yacai. Furthermore, variable importance in projection (VIP) was measured in OPLS-DA, and an S-plot was constructed to identify the metabolites contributing to the discrimination based on VIP. As shown in Figure 3B and Table S7, 31 metabolites were screened out as differential metabolites (VIP > 1 and p < 0.05), including 8 FAAs, 3 OAs, and 20 VCs (Table S8). These results clearly revealed that VCs, including alcohols, esters, and terpenoids, were the main differential metabolites contributing to the flavor of Yibin Yacai.
Diversity of Microbial Communities in Yibin Yacai
A total of 6580 bacterial OTUs and 4153 fungal OTUs were identified, and the results of the α-diversity analysis are shown in Figure S2. The Chao1 index reflects the richness of a community, while the Shannon and Simpson indices reflect its evenness [15]. As depicted in Figure S2A, except for the constant decrease of the Chao1 index in NF, the three bacterial indices in both groups continued to drop until day 60. The Chao1 index in BMF was lower than that in NF, which might be explained by the inoculation of B. marcorestinctum YC-1 changing the microbial structure, increasing Bacillus abundance while decreasing overall community richness. In fungi, the Chao1 index declined as fermentation progressed, and the Shannon and Simpson indices increased at first and then significantly decreased on day 90 (Figure S2B). The α-diversity indices of fungi in BMF were higher than those in NF. Therefore, the inoculation of B. marcorestinctum YC-1 significantly affected the composition of both bacterial and fungal communities in Yibin Yacai.
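The three α-diversity metrics have simple closed forms over an OTU count table. The sketch below uses the bias-corrected Chao1 estimator; the exact estimator variant used by the QIIME pipeline may differ, and the count vectors are illustrative.

```python
import math

def chao1(counts):
    """Bias-corrected Chao1 richness: observed OTUs plus a correction
    based on singleton (F1) and doubleton (F2) OTUs."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def shannon(counts):
    """Shannon entropy H = -sum(p * ln p) over OTU proportions."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    """Gini-Simpson index 1 - sum(p^2): the probability that two
    randomly drawn reads belong to different OTUs."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts if c > 0)
```

Chao1 tracks richness (it grows with rare OTUs), while Shannon and Simpson also respond to how evenly reads are spread across OTUs, matching the interpretation used above.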
Microbial Profile in Yibin Yacai
At the phylum and genus levels, microbes with a relative abundance > 1% are shown in Figure 4. Firmicutes and Proteobacteria were the two major bacterial phyla in the tested samples, which were also found in other fermented vegetables [5,9]. The relative abundance of Firmicutes reached its highest on day 60, which was 97.01% in BMF and 94.68% in NF (Figure 4A), while the highest relative abundance of Proteobacteria was 16.30% in NF and 14.43% in BMF on day 10. At the genus level, Weissella and Lactobacillus were the dominant genera, while Pseudomonas, Escherichia-Shigella, and Pediococcus were the second most dominant genera (Figure 4B), and they are commonly found in fermented vegetables [2,15].
On day 10, the relative abundance of Lactobacillus was 3.15% in BMF, much less than that of NF (25.23%), while the number of Lactobacillus significantly grew on day 30. On the other hand, Bacillus (7.68%) significantly increased in BMF on day 10 and decreased on day 30. Lactobacillus is an important bacterium for its tolerance to the anaerobic and high-salt environment, and it can degrade sugar to produce acids [10,27]. These results indicated that the natural growth of Lactobacillus might be negatively affected by the participation of external B. marcorestinctum YC-1 in the initial fermentation stage, resulting in less production of acids in BMF.
In addition, BMF had more Weissella (50.46%) and Lactobacillus (38.44%) on day 90 than NF (37.03% and 33.16%), and these two bacteria can produce antimicrobial agents to inhibit the growth of pernicious microbes [25,32]. Compared with NF, a lower relative abundance of Escherichia coli was found in BMF on day 90 (Figure S3A) because of the increase of Weissella and Lactobacillus. Consequently, the inoculation of B. marcorestinctum YC-1 promoted the growth of LAB, thereby improving the safety of Yibin Yacai.
With regard to fungi, Basidiomycota and Ascomycota were the main phyla (Figure 4C), and the differences between NF and BMF mainly existed on days 10 and 30. In comparison to NF, the relative abundance of Ascomycota in BMF increased by 29.76% on day 10, and the relative abundance of Basidiomycota increased by 20.08% on day 30. At the genus level, Sporobolomyces, Cystofilobasidium, and Monographella were the major fungi (Figure 4D), reported as the characteristic fungi of mustard varieties [5,33]. After inoculation, BMF had more Monographella (13.12% and 7.89%) than NF (2.79% and 3.95%) on days 10 and 30. Moreover, the relative abundance of Cystofilobasidium in BMF (7.85-15.18%) remained higher than that in NF (3.92-7.96%) throughout the entire fermentation. These fungi have been proven to release a variety of metabolism-related enzymes associated with the synthesis of flavors [34,35]. In addition, Cystofilobasidium macerans existed during the whole fermentation (Figure S3B); it can produce extracellular enzymes with high proteolytic and cellulose-hydrolyzing activity [36], thus facilitating the generation of reducing sugar in Yibin Yacai. Therefore, the inoculation of B. marcorestinctum YC-1 favored the growth of fungi that produce metabolism-related hydrolases, which was conducive to the production of Yibin Yacai with better flavor.
Significant Microbes and Predicted Functions of Bacteria
PCoA analysis based on Bray-Curtis distance was conducted to explore the microbial community differences between the two groups. The variances explained by PC1 and PC2 were 47.31% and 26.6%, respectively, for bacteria (Figure 5A), and 39.73% and 23.62%, respectively, for fungi (Figure 5B). BMF and NF differed greatly in bacterial community, as they were almost completely separated; the position of NF mainly changed along PC1, while the changes in BMF were shown along both PC1 and PC2. As for fungi, BMF showed smaller changes in spatial position compared with NF, and the difference mainly existed on days 10 and 60. These findings demonstrated that the inoculation of B. marcorestinctum YC-1 significantly changed the composition of the microbial community in Yibin Yacai.
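The Bray-Curtis distance underlying this PCoA is simply the summed absolute abundance difference between two samples, normalized by their total abundance. A minimal sketch over two OTU count vectors (the counts are illustrative, not data from this study):

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two equally indexed
    OTU abundance vectors: 0 = identical, 1 = no shared OTUs."""
    if len(u) != len(v):
        raise ValueError("vectors must cover the same OTUs")
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den if den else 0.0

# Illustrative OTU counts for two samples
d = bray_curtis([10, 0, 30], [0, 20, 30])  # (10+20+0)/(10+20+60) = 1/3
```

PCoA then embeds the pairwise distance matrix built from this function into the low-dimensional space plotted in Figure 5A,B.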
Additionally, the top 10 differential bacteria and fungi at the genus level were analyzed, and the details are shown in Figure 5C,D. Weissella, Lactobacillus, Pediococcus, Bacillus, Aerococcus, and Lactococcus were the main differential bacteria, while the differences in Escherichia-Shigella, Pseudomonas, Muribaculaceae, and Enterococcus between the two groups were observed in the late stage of fermentation (60-90 days). On the other hand, Sporobolomyces, Grifola, Cystofilobasidium, Naganishia, Wallemia, Leucosporidium, and Aspergillus were the significant differential fungi.
The biological functions of differential bacteria in NF and BMF were predicted by PICRUSt and evaluated by their relative abundance in KEGG pathways (Figure 5E). As a result, the significant difference was mainly in the late fermentation stage (60-90 days), and metabolism-related differences made up the largest proportion in the KEGG level-1 pathway analysis, suggesting that the function of differential bacteria was mainly related to metabolism. At KEGG level 2, compared with NF, the relative abundances of amino acid metabolism and metabolism of other amino acids significantly increased in BMF on day 60, and the relative abundance of the metabolism of terpenoids and polyketides was higher on days 10 and 60. At level 3 of the KEGG classification, BMF had a higher relative abundance in the biosynthesis of amino acids and terpenoid backbone biosynthesis on days 60 and 90 than NF. Notably, a boost in FAA, terpenoid, glycolysis/gluconeogenesis, and pyruvate metabolism on days 60 and 90 was only observed in BMF, and these metabolisms are vital pathways for producing flavors (Xiao et al., 2021). Collectively, the inoculation of B. marcorestinctum YC-1 significantly increased microbial metabolisms, thus greatly facilitating the production of flavors.
Correlation between Differential Microbes and Metabolites
Pearson correlation analysis was performed to reveal the correlation between microbes and metabolites (Figure 6A). The related metabolic pathways were depicted to systematically investigate the influence of inoculation of B. marcorestinctum YC-1 on Yibin Yacai (Figure 6B). The correlation analysis revealed that Cystofilobasidium, Bacillus, and Lactococcus were positively related to Asp, Glu, and Pro (Figure 6A), and their relative abundances were increased in BMF compared to NF, which was in accordance with the higher contents of Asp, Glu, and Pro (Figure 2B and Table S4). Moreover, Weissella, as the dominant bacteria during the entire fermentation (Figure 4B), showed a positive correlation with umami and sweet amino acids, consistent with previous observations [16]. In particular, brown sugar was added during the manufacture of Yibin Yacai, and sucrose is its main ingredient, which is preferentially favored by microbes as a carbon source [8], thus generating FAAs through the tricarboxylic acid cycle under aminopeptidase and transaminase secreted by microbes [30]. Lactic acid, one of the essential OAs in fermented vegetables, not only provides a unique flavor, but also increases the acidity of fermented vegetables [9].
Pyruvate is an important precursor of OAs, which can be converted into lactic acid and other OAs under the catalysis of pyruvate dehydrogenase, pyruvate oxidase, and acetokinase [27]. Moreover, Lactobacillus was reported as a main producer of enzymes associated with pyruvate conversion [8], and it showed a positive correlation with lactic acid and malic acid in this study (Figure 6A). Compared to NF, BMF presented a higher relative abundance of Lactobacillus during fermentation (Figure 4B), indicating that the dominant role of Lactobacillus in the synthesis of OAs was enhanced after inoculation with B. marcorestinctum YC-1, which was consistent with the higher content of OAs in BMF (Figure 2A and Table S3).
Nitrite can be converted into ammonia when catalyzed by nitrate reductase and nitrite reductase ( Figure 6B), and ammonia can be consumed by Lactobacillus as a nitrogen source to produce Glu and Arg [16]. Furthermore, malic acid and tartaric acid have been reported to facilitate the degradation of nitrite, because they can also act as nutrients to support the growth of Lactobacillus [37]. Therefore, a significantly higher content of OAs and lower content of nitrite in BMF (Tables 1 and S3) were associated with a higher abundance of Lactobacillus ( Figure 4B), suggesting that the inoculation promoted the activity of Lactobacillus, thus accelerating the degradation of nitrite and indirectly increasing the FAAs content.
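The microbe-metabolite associations in Figure 6A rest on pairwise Pearson coefficients between genus abundances and metabolite contents across samples. A pure-Python sketch of the statistic; the example vectors are illustrative, not measurements from this study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired sample vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative: genus relative abundance vs. a metabolite across time points
r = pearson_r([0.10, 0.25, 0.38, 0.50], [12.0, 30.1, 44.9, 61.0])
```

A coefficient near +1 corresponds to the "positively related" pairs discussed above, and near -1 to negative associations such as LAB versus nitrite.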
VCs can create favorable aromas in fermented vegetables, and glucose metabolism is a vital pathway for the formation of VCs [30]. As shown in Figures 2A and 6B, terpenoids, as the major VCs in Yibin Yacai, are derived from isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP). IPP and DMAPP can be obtained by the methylerythritol 4-phosphate (MEP) pathway from pyruvate and glyceraldehyde-3-phosphate, then metabolized into more complex terpenoids ( Figure 6B) [38]. Notably, terpinen-4-ol, crotonic acid, o-formylphenyl ester, and β-himachalene were only found in BMF (Table S5), and they had a significant positive relationship with Rhodotorula ( Figure 6A). Moreover, linalool, cinnamyl acetate, and α-muurolen contents were higher in BMF and showed a positive relationship with Leucosporidium and Rhodotorula ( Figure 6A). These results suggested that the unique VCs in Yibin Yacai of BMF were mainly influenced by yeasts, such as Rhodotorula, Leucosporidium, Cryptococcus, and Wallemia.
Salting Yibin Yacai with 12% NaCl at the beginning could make salt-tolerant yeasts (Sporobolomyces, Cystofilobasidium, Wallemia, and Rhodotorula) the core fungi during fermentation. These yeasts produce more stable hydrolases that not only directly utilize reducing sugar to create flavors, but also secrete glycosidase to hydrolyze glycosides, further generating terpenoids [8,30,39]. Additionally, yeasts can metabolize symbiotically with LAB, and assimilate other compounds to produce carbon sources for LAB [35], which may explain why yeasts and LAB showed similar positive correlations with VCs in Yibin Yacai. For instance, Sporobolomyces, Leucosporidium, and Enterococcus all had a significant positive relationship with the production of anisaldehyde and ethyl p-methoxycinnamate ( Figure 6A). Both Lactobacillus and Sporobolomyces were positively correlated with α-cubebene ( Figure 6A). Furthermore, LAB can embellish the flavors produced by yeasts during malolactic fermentation [38], which may also account for the richer aromas in BMF.
Conclusions
In the present study, the effect of B. marcorestinctum YC-1 as a starter on the quality of Yibin Yacai was investigated. The results showed that the quality of Yibin Yacai was significantly improved after inoculation. In particular, the abundance of LAB (Weissella, Lactobacillus) and yeasts increased significantly after inoculation, resulting in the generation of more FAAs, OAs, terpenoids, and alcohols, endowing Yibin Yacai with strong fruity, floral, and sweet flavors, and accelerating the degradation of nitrite. The change in the microbial community during fermentation after inoculation revealed a strong correlation between metabolites and microbes. Furthermore, we found that yeasts played a more prominent role in the synthesis of terpenoids and alcohols, contributing to desirable flavor profiles. Overall, the inoculation of B. marcorestinctum YC-1 enriched the flavors, promoted safety, and further improved the quality of Yibin Yacai. These results provide a new direction for the application of Bacillus spp. in fermented vegetables.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods11223593/s1, Figure S1: Table S1: Physiological and biochemical identification results; Table S2: The results of the susceptibility test; Table S3: The contents of OAs in NF and BMF during fermentation; Table S4: The contents of FAAs in NF and BMF during fermentation; Table S5: Detailed information and contents of main VCs in NF; Table S6: Detailed information and contents of main VCs in BMF; Table S7: Ordinal list of metabolites in S-plot; Table S8: Differential metabolites between NF and BMF.
A Complete Axiom System for Propositional Interval Temporal Logic with Infinite Time
Interval Temporal Logic (ITL) is an established temporal formalism for reasoning about time periods. For over 25 years, it has been applied in a number of ways and several ITL variants, axiom systems and tools have been investigated. We solve the longstanding open problem of finding a complete axiom system for basic quantifier-free propositional ITL (PITL) with infinite time for analysing nonterminating computational systems. Our completeness proof uses a reduction to completeness for PITL with finite time and conventional propositional linear-time temporal logic. Unlike completeness proofs of equally expressive logics with nonelementary computational complexity, our semantic approach does not use tableaux, subformula closures or explicit deductions involving encodings of omega automata and nontrivial techniques for complementing them. We believe that our result also provides evidence of the naturalness of interval-based reasoning.
Introduction
Intervals and discrete linear state sequences offer a natural and flexible way to model both sequential and parallel aspects of computational processes involving hardware or software. Interval Temporal Logic (ITL) [Mos86] (see also [ITL12]) is an established formalism for rigorously reasoning about such intervals. ITL has a basic construct called chop for the sequential composition of two arbitrary formulas as well as an analogue of Kleene star for iteration called chop-star. Although originally developed for digital hardware specification [Mos83a,Mos83b,HMM83,Mos85], ITL is suitable for logic-based executable specifications [Mos86], compositional reasoning about concurrent processes [Mos94,Mos95,Mos98,Mos11], refinement [CZ97], as well as for runtime analysis [ZZC99].
Until now, in spite of research over many years involving ITL and its applications, there was no known complete axiom system for quantifier-free propositional ITL (PITL) with infinite time. We present one and prove completeness by a reduction to our earlier complete PITL axiom system for finite time [Mos04] (see also [BT03]) and conventional propositional linear-time temporal logic (PTL). We do not use subformula closures, tableaux, or explicit deductions involving encodings of omega automata and nontrivial techniques for complementing them. Such encodings are typically found in completeness proofs for comparable logics discussed later on (see §11.1), which like PITL have omega-regular expressiveness.
See Thomas [Tho90,Tho97] for more about omega-regular languages, omega automata and some associated logics. Our simple axiom system avoids complicated inference rules and proofs such as axiom systems for an equally expressive version of PITL with restricted sequential iteration [Pae89] and a less expressive version of PITL lacking sequential iteration [RP86]. In the future we plan to use our axiom system as a hierarchical basis for obtaining completeness for some PITL variants. We also believe it can be applied to some other logics and discuss this in Section 12.
Our earlier completeness proof for a larger, more complicated axiom system for quantified ITL with finite domains and infinite time [Mos00] does not work if variables are limited to being just propositional. So that result, while serving as a stepping stone for further research on ITL, even fails to establish axiomatic completeness for a quantified version of PITL (QPITL) with infinite time! For these reasons, we feel justified in regarding the problem of showing axiomatic completeness for full PITL with infinite time as a previously open problem.
We now mention some recent publications by others as evidence of ITL's continuing relevance. None specifically motivate our new completeness proof. Nevertheless, they arguably contribute to making a case for the study of ITL's mathematical foundations, which naturally include axiomatic completeness.
The KIV interactive theorem prover [RSSB98] has for a number of years included a slightly extended version of ITL for interactive theorem proving via symbolic execution both by itself (e.g., see [BBN+10, BSTR11]) and also as a backend notation which supports Statecharts [TSOR04] and UML [BBK+04]. KIV can employ ITL proof systems such as ours. The concluding remarks of [BSTR11] note the following advantages of ITL: Our ITL variant supports classic temporal logic operators as well as program operators.
The interactive verifier KIV allows us to directly verify parallel programs in a rich programming language using the intuitive proof principle of symbolic execution. An additional translation to a special normal form (as e.g. in TLA [Temporal Logic of Actions [Lam02]]) using explicit program counters is not necessary. Axiomatic completeness of PITL is not an absolute requirement for the KIV tool but does offer some benefits. This is because some axioms, inference rules and associated deductions employed to prove completeness can be exploited in KIV, thereby reducing the number of ad hoc axioms and inference rules. Various imperative programming constructs are expressible in ITL and operators for projecting between time granularities are available (but not considered here). ITL influenced an assertion language called temporal 'e' [Mor99] which is part of the IEEE Standard 1647 [IEE08] for the system verification language 'e'.
The Duration Calculus (DC) of Zhou, Hoare and Ravn [ZHR91] is an ITL extension for real-time and hybrid systems. The books by Zhou and Hansen [ZH04] and Olderog and Dierks [OD08] both employ DC with finite time and discuss relatively complete axiom systems for it. The second book utilises DC with timed automata to provide a basis for specifying, implementing and model checking suitable real-time systems. Indeed, Olderog and Dierks explain how they regard an interval-oriented temporal logic as being better suited for these tasks than more widely used point-based ones and timed process algebras. Concerning point-based logics, they make this comment (on page 23): "In our opinion this leads to complicated reasoning similar to that . . . based on predicate logic." As for timed process algebras, they note the following (on page 25): "A difficulty with these formalisms is that their semantics are based on certain scheduling assumptions on the actions like urgency, which are difficult to calculate with." Within the last ten years, other complete axiom systems for versions of propositional and first-order ITL with infinite time have been presented. These include two by Wang and Xu [WX04] for first-order variants with restricted quantifiers and no sequential iteration as well as a probabilistic extension of theirs by Guelev [Gue07] which all build on an earlier completeness result of Dutertre [Dut95] for first-order ITL restricted to finite time. Like Dutertre, Wang and Xu and also Guelev use a nonstandard abstract-time semantics (e.g., without induction over time) instead of ITL's standard discrete-time one. Their proofs employ Henkin-style infinite sets of maximal consistent formulas. Duan et al. [DZ08,DZK12] give a tableaux-like completeness proof for a related omega-regular logic called Propositional Projection Temporal Logic (PPTL). 
The only primitive temporal operators in PPTL for sequential composition have varying numbers of operands and concern multiple time granularities. However, both chop and chop-star can be derived. The proof system has over 30 axioms and inference rules, some rather lengthy and intricate. The completeness proof itself involves the nontrivial task of complementing omega-regular languages which can be readily expressed in the logic but it is not discussed. Furthermore, the authors omit much of the prior work in the area developed in the course of over forty years (which we later survey in Section 11). More significantly, they do not explain how they bypass the associated hurdles faced by previous completeness proofs for logics with comparable expressiveness and nonelementary computational complexity. These points make checking the proof's handling of the complementation of omega-regular languages, liveness and other issues rather challenging. Mo, Wang and Duan [MWD11] describe promising applications of Projection Temporal Logic to specifying and verifying asynchronous communication. Zhang, Duan and Tian [ZDT12] investigate the modelling of multicore systems in Projection Temporal Logic. In view of this, the foundational issue of axiomatic completeness for PPTL should be addressed in the future more thoroughly and systematically and better related to other approaches. Incidentally, we already showed in [Mos95] that axiomatic completeness for a version of PITL with a standard version of temporal projection can be simply and hierarchically reduced to axiomatic completeness for PITL without temporal projection. Duan et al. [DZ08,DZK12] however make no mention of this by now long established and powerful technique in their review of prior work.
Here is the structure of the rest of this presentation: Section 2 overviews PITL and the new axiom system. Section 3 concerns a class of PITL theorems from which we can also deduce suitable substitution instances needed later on. Section 4 gives some infrastructure for systematically replacing formulas by other equivalent ones in deductions arising in the completeness proof. Section 5 introduces some useful PITL subsets for later use in the completeness proof. Section 6 reduces completeness for PITL with a kind of infinite sequential iteration to completeness for a subset without this. Section 7 shows how to represent deterministic finite-state semi-automata and automata in PITL. Section 8 employs semi-automata to test a given PITL formula in a finite interval's suffix subintervals. Section 9 shows completeness for the PITL subset without infinite sequential iteration. Section 10 includes some observations about the completeness proof. Section 11 reviews existing complete axiom systems for omega-regular logics. Section 12 discusses some topics for future research.
Below is the syntax of PITL formulas in BNF, where p is any propositional variable:

A ::= true | p | ¬A | A ∨ A | skip | A ⌢ A | A ⋆

The last two constructs are called chop and chop-star, respectively. The boolean operators false, A ∧ B, A ⊃ B (implies) and A ≡ B (equivalence) are defined as usual. We refer to A ⌢ B as strong chop, since a weak version A; B also exists. In addition, A ⋆ (strong chop-star) slightly differs from ITL's conventional weak chop-star A * , although the two are interderivable. The strong variants of chop and chop-star taken as primitives here are chosen simply because, without loss of generality, they help streamline the completeness proof. We use p, q, r and variants such as p ′ for propositional variables. Variables A, B, C and variants such as A ′ denote arbitrary PITL formulas. Let w and w ′ denote state formulas without the temporal operators skip, chop and chop-star. We let V denote a finite set of propositional variables. Also, V A denotes the finite set of the formula A's variables.
Time within PITL is discrete and linear. It is represented by intervals each consisting of a sequence of one or more states. More precisely, an interval σ is any finite or ω-sequence of one or more states σ 0 , σ 1 , . . . . Each state σ i in σ maps each propositional variable p to either true or false. This mapping is denoted as σ i (p). An interval σ has an interval length |σ| ≥ 0, which, if σ is finite, is the number of σ's states minus 1 and otherwise ω. So if σ is finite, it has states σ 0 , . . . , σ |σ| . This (standard) version of PITL, with state-based propositional variables, is called local PITL.
A subinterval of σ is any interval which is a contiguous subsequence of σ's states. This includes σ itself.
The notation σ |= A, defined shortly by induction on A's syntax, denotes that interval σ satisfies formula A. Moreover, A is valid, denoted |= A, if all intervals satisfy it.
Below are the semantics of the first five constructs: • True: σ |= true trivially holds for any σ.
• Propositional variable: σ |= p iff σ 0 (p) = true, i.e., p is only examined in the interval's initial state.
• Negation: σ |= ¬A iff σ |= A does not hold.
• Disjunction: σ |= A ∨ B iff σ |= A or σ |= B.
• Skip: σ |= skip iff |σ| = 1, i.e., σ has exactly two states.
Below are semantics for the versions of chop and chop-star found most suitable for the completeness proof. As already noted, other versions can be readily derived.
• Chop: σ |= A ⌢ B iff for some natural number i : 0 ≤ i ≤ |σ|, both σ 0:i |= A and σ i↑ |= B. This is called strong chop because both A and B must be true. • Chop-star: σ |= A ⋆ iff one of the following holds: − Interval σ has only one state (i.e., it is empty). − σ is finite and either itself satisfies A or can be split into a finite number of (finitelength) subintervals which share end-states (like chop) and all satisfy A. − |σ| = ω and σ can be split into ω finite-length intervals sharing end-states (like chop) and each satisfying A. In this version of chop-star, each iterative subinterval has finite length. The third case is called chop-omega and denoted as A ω .
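Restricted to finite intervals, the semantic clauses above translate directly into a small checker. The sketch below is our own illustration, not notation from the paper: formulas are encoded as nested tuples and states as dicts mapping variable names to booleans. Empty iteration units in chop-star are skipped, which does not affect which finite intervals satisfy A ⋆.

```python
def sat(sigma, f):
    """Does the finite interval sigma (a list of >= 1 states, each a dict
    mapping variable names to booleans) satisfy the encoded formula f?"""
    op = f[0]
    if op == "true":
        return True
    if op == "var":                       # p: examined only in the initial state
        return sigma[0][f[1]]
    if op == "not":
        return not sat(sigma, f[1])
    if op == "or":
        return sat(sigma, f[1]) or sat(sigma, f[2])
    if op == "skip":                      # unit interval: exactly two states
        return len(sigma) == 2
    if op == "chop":                      # strong chop: state sigma[i] is shared
        return any(sat(sigma[:i + 1], f[1]) and sat(sigma[i:], f[2])
                   for i in range(len(sigma)))
    if op == "star":                      # strong chop-star on finite intervals
        if len(sigma) == 1:
            return True                   # an empty interval satisfies A*
        # split off one nonempty unit satisfying f[1], then recurse on the rest
        return any(sat(sigma[:i + 1], f[1]) and sat(sigma[i:], f)
                   for i in range(1, len(sigma)))
    raise ValueError(f"unknown operator {op!r}")

def diamond(a):
    """The derived operator eventually: diamond A is true-chop-A."""
    return ("chop", ("true",), a)
```

For example, on a two-state interval where p holds only initially, the checker confirms skip ⌢ ¬p.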
As an example, consider the behaviour of variable p in a 5-state interval σ where, denoting true and false by t and f respectively, p takes the values t f t f t in states σ 0 , . . . , σ 4 . This interval satisfies, among others, the formula skip ⌢ ¬p, which is true because σ 0 σ 1 satisfies skip and σ 1 . . . σ 4 satisfies ¬p since σ 1 (p) = false. The formula (p ∧ (skip ⌢ skip)) ⋆ is true because both σ 0 . . . σ 2 and σ 2 . . . σ 4 satisfy p ∧ (skip ⌢ skip). The interval does not satisfy the formulas below: ¬p, skip ⌢ p, and true ⌢ (¬p ∧ ¬(true ⌢ p)). The operator ← in Table 1 perhaps requires some more explanation. Its purpose is to specify that the value of A in a finite interval's last state equals the value of B for the interval. For example, the formula p ← ✷ q is true on an interval iff either (a) the interval is infinite or (b) it is both finite and has one of the following hold for the propositional variables p and q: • The (finite) interval's last state has p true and all states have q true.
• The (finite) interval's last state has p false and at least one state has q false.
Let PTL be the subset of PITL with just skip and the (derived) temporal operators ◯ and ✸ shown in Table 1. We use X and X ′ for PTL formulas.
Although we do not need existential quantification in our proof, it is convenient to define it here since it helps the exposition concerning automata-based ways to represent PITL formulas in §7.2, §7.4 and §10.2 and also assists us when we compare our approach with related proofs for logics with quantification in Section 11. The syntax is ∃p. A for any propositional variable p and formula A. We let σ |= ∃p. A be true iff σ ′ |= A is true for some interval σ ′ identical to σ except possibly for p's behaviour. Existential quantification together with PITL yields QPITL and together with PTL yields QPTL.
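On finite intervals, this semantics of ∃p. A can be checked by brute force: enumerate every possible behaviour of p over the interval's states and test whether some variant satisfies A. The tuple encoding and the minimal evaluator below (covering just variables, ¬, ∨ and the derived ✸) are our own illustrative sketch.

```python
from itertools import product

def sat(sigma, f):
    """Minimal evaluator for variables, negation, disjunction and diamond
    (eventually) over a finite interval sigma (list of dict states)."""
    op = f[0]
    if op == "var":
        return sigma[0][f[1]]
    if op == "not":
        return not sat(sigma, f[1])
    if op == "or":
        return sat(sigma, f[1]) or sat(sigma, f[2])
    if op == "diamond":                   # some suffix subinterval satisfies f[1]
        return any(sat(sigma[i:], f[1]) for i in range(len(sigma)))
    raise ValueError(f"unknown operator {op!r}")

def sat_exists(sigma, p, f):
    """sigma |= exists p. f  iff some interval identical to sigma except
    possibly for p's behaviour satisfies f."""
    for bits in product([False, True], repeat=len(sigma)):
        variant = [dict(state, **{p: b}) for state, b in zip(sigma, bits)]
        if sat(variant, f):
            return True
    return False
```

The exponential enumeration over 2^(number of states) variants mirrors the semantic definition directly and is of course only practical for tiny examples.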
2.1. PITL Axiom System. Table 2 shows the PITL axiom system with finite and infinite time. Axiom VPTL permits PITL substitution instances of valid PTL formulas with skip, ◯ and ✸. For instance, from the valid PTL formula p ⊃ ✸ p follows ⊢ A ⊃ ✸ A, for any PITL formula A. Axiom P10 gives an inductive way to introduce chop-omega. Our new Inference Rule ✷ f Aux permits auxiliary variables to capture behaviour in finite-length prefix intervals and is only needed for infinite time.
The axiom system in Table 2 for both finite and infinite time is adapted from our earlier one [Mos04] for just finite time (see Table 3), itself based on a previous one we originally presented in [Mos94]. That axiom system contains some axioms of Rosner and Pnueli [RP86] for PITL without chop-star and our own axioms and inference rule for the operators ✷ i (defined using weak chop in Table 1) and chop-star. The new PITL axiom system in Table 2 adapts the axioms for ✷ i to use ✷ f instead to shorten the completeness proof since ✷ f works better with the strong chop operator ⌢ .
For consistency with our usage here, the version of the earlier axiom system for just finite time given in Table 3 uses strong chop ⌢ instead of weak chop ";" and likewise uses ✷ f instead of ✷ i . It therefore very slightly differs from the original one in [Mos04] in an inessential way since for finite time the two pairs of operators are indistinguishable. In [Mos04] we prove completeness by reduction to PTL.
Appendix A contains a large variety of representative PITL theorems, derived rules and their proofs. Many are used directly or indirectly in our completeness proof.
Note that Inference Rule ✷ f FGen in Table 2 for ✷ f mentions finite in it, whereas the analogous Inference Rule ✷Gen for ✷ does not. A version of ✷ f FGen without finite and called ✷ f Gen can be deduced (see the derived inference rule DR4 in Appendix A). If just finite time is permitted, the two variants ✷ f FGen and ✷ f Gen for ✷ f are in practice identical since finite is valid and hence deducible by Axiom VPTL. In fact, our earlier axiom system for PITL with just finite time in Table 3 uses the version without finite. 2.2. Theoremhood, Soundness and Axiomatic Completeness. A formula A deducible from the axiom system is a theorem, denoted ⊢ A. Additionally, a formula A is consistent if ¬A is not a theorem, i.e., not ⊢ ¬A. We claim the axiom system is sound, that is, ⊢ A implies |= A. A logic is complete if each valid formula is deducible as a theorem in the logic's axiom system. In other words, if |= A, then ⊢ A. Our goal is to show completeness for PITL. However, we actually prove a stronger result which requires some further definitions and we therefore defer the formal statement until Theorem 3.2 in Section 3. We also make use of the following variant way of expressing axiomatic completeness: Lemma 2.1 (Alternative notion of completeness). A logic's axiom system is complete iff each consistent formula is satisfiable.
We often use the next Theorem 2.2 about finite time: Theorem 2.2 (Completeness of PITL Axiom System for Finite Time). Any valid PITL implication finite ⊃ A is deducible as a PITL theorem ⊢ finite ⊃ A using the axiom system for PITL with both finite and infinite time in Table 2.
Proof. This readily follows by deducing the axioms and inference rules of our earlier complete axiom system for PITL with just finite time [Mos04] in Table 3. The axiom system and proofs of theorems are easily relativised to make finite time explicit and deduced with the new axiom system for both finite and infinite time already presented in Table 2. The relativisation can use the fact that the two axiom systems are quite similar.
One can alternatively disregard Theorem 2.2 and instead treat our presentation as a self-contained proof reducing completeness for PITL with both finite and infinite time to that for PITL with just finite time.
2.3. Summary of the Completeness Proof. Our proof of axiomatic completeness for PITL establishes that any consistent PITL formula is satisfiable (see the earlier Lemma 2.1). The completeness proof makes use of a PITL subset called PTL u (defined later in §5.2) which is a version of PTL having an until operator. As we discuss in §5.2, axiomatic completeness for PTL u readily follows from axiomatic completeness for basic PTL so any consistent PTL u formula is satisfiable.
The PITL completeness proof can be roughly summarised as ensuring that for any consistent PITL formula A, there exists a consistent PTL u formula Y 0 , which possibly contains auxiliary propositional variables, such that the PITL implication Y 0 ⊃ A is deducible. Completeness for PTL u guarantees that Y 0 is satisfiable. The soundness of the PITL axiom system then ensures that any model of Y 0 also satisfies A, thereby showing axiomatic completeness for PITL. Note that in the actual proof, we make use of a PTL u conjunction Y ∧ X in place of Y 0 .
In the course of the PITL completeness proof, we also employ another PITL subset called PITL k (defined later in §5.3). It is a version of PITL without omega-iteration and serves as a kind of bridge between full PITL and PTL u . The PITL completeness proof first obtains from the PITL formula A a PITL k formula K such that we can deduce A ≡ K. We then show how to obtain the PTL u formula Y 0 such that the implication Y 0 ⊃ K is deducible. We further show that if A is consistent, so are K and Y 0 . Axiomatic completeness for PTL u ensures that the consistent PTL u formula Y 0 is satisfiable. The implication Y 0 ⊃ K together with the deduced equivalence A ≡ K guarantees the deducibility of the previously mentioned PITL implication Y 0 ⊃ A. Hence, any model of Y 0 also satisfies A, thereby establishing completeness for PITL since every consistent PITL formula is indeed satisfiable.
Here is a very brief summary of the main reductions: full PITL is first reduced to the subset PITL k , and PITL k is in turn reduced to PTL u . Only the reduction from PITL k to PTL u requires some explicit automata-theoretic constructions which involve finite words and are expressed in temporal logic. Below is the structure of our reduction from PITL to PTL u : • In Section 3 we describe a class of PITL theorems with useful substitution instances.
• In Section 4 we present lemmas for systematically replacing some of a formula's subformulas by others in proofs. • In Section 5 we formally introduce the very simple PTL subset NL 1 as well as the subsets PTL u and PITL k . Although PITL k lacks chop-omega, it still has the same expressiveness as PITL. We also describe three other classes of formulas called right-chops, chainformulas and auxiliary temporal assignments. • In Section 6 we show that any PITL formula is deducibly equivalent to one in PITL k .
• In Section 7 we show how to represent semi-automata and automata in PITL.
• Section 8 utilises the material in the previous section to test for a given PITL formula in suffixes of a finite interval. Sections 7 and 8 provide a basis for introducing suitable auxiliary variables via auxiliary temporal assignments. • In Section 9 we use the constructed auxiliary variables to reduce an arbitrary consistent PITL k formula K to one in PTL u . Axiomatic completeness for PITL with infinite time then readily follows from this. A large portion of the reasoning is done at the semantic level (for example, all of Section 8). We then employ axiomatic completeness for restricted versions of PITL (such as PITL with finite time) to immediately deduce the theoremhood of key properties expressible as valid formulas in these versions. This significantly shortens the completeness proof by reducing the amount of explicit deductions.
Right-Instances, Right-Variables and Right-Theorems
Before proceeding further, we need to introduce a class of PITL theorems for which suitable substitution instances are themselves deducible as theorems. Now in the completeness proof for PITL later on, if a deducible PITL formula has propositional variables not occurring in the left of chops or in chop-stars (e.g., p in the formula p ⊃ ✸ p), then in each step of the formula's deduction these particular variables likewise do not occur in the left of chops or chop-stars. We define more generally for any PITL formula A and subformula B in A, a right-instance of B in A to be an instance of B which does not occur within the left of a chop or within some chop-star. A right-variable of A is then a propositional variable all of whose occurrences in A are right-instances, and RV (A) denotes the set of A's right-variables. Consider for example the disjunction below: (3.1) The subformulas ¬q, and (p ⌢ ¬q) as well as the leftmost occurrence of p ⌢ p ′ are right-instances in the overall formula (3.1). However, all three occurrences of p and the rightmost occurrences of p ′ and p ⌢ p ′ are not right-instances in (3.1). We now look at why the concept of right-variable is needed. In the formula p ⊃ ✸ p, the variable p is a right-variable. Therefore, from the validity of p ⊃ ✸ p, we can infer the validity of the substitution instance skip ⊃ ✸ skip. Lemma 3.1, which is shortly presented, formalises this idea. However, if a variable is not a right-variable in a valid formula, we might incorrectly infer that a substitution instance of the formula is also valid. For instance, the variable p is not a right-variable in the formula p ⊃ ✷ f p, which is an instance of Axiom P7 in Table 2. This formula is valid but the substitution instance skip ⊃ ✷ f skip is not. Now all propositional variables in a propositional formula with no temporal operators are right-variables of that formula. More generally, all propositional variables in a PTL formula are right-variables. In contrast, a chop-star formula has no right-variables.
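Since a right-variable is exactly a variable with no occurrence in the left operand of a chop or inside a chop-star, the set RV (A) is easily computable from a formula's syntax tree. The tuple encoding below is our own illustrative sketch, not notation from the paper.

```python
def variables(f):
    """All propositional variables occurring in the encoded formula f."""
    op = f[0]
    if op == "var":
        return {f[1]}
    if op in ("true", "skip"):
        return set()
    return set().union(*(variables(g) for g in f[1:]))

def non_right(f):
    """Variables with some occurrence in the left of a chop or in a chop-star."""
    op = f[0]
    if op in ("true", "skip", "var"):
        return set()
    if op == "chop":
        return variables(f[1]) | non_right(f[2])   # the whole left operand is non-right
    if op == "star":
        return variables(f[1])                     # everything under chop-star is non-right
    return set().union(*(non_right(g) for g in f[1:]))  # not / or

def right_vars(f):
    """RV(f): variables all of whose occurrences are right-instances."""
    return variables(f) - non_right(f)
```

On p ⊃ ✸ p, with ✸ p encoded as true ⌢ p, this yields {p}; a chop-star formula yields the empty set, matching the observations above.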
The next simple lemma concerns substitution into right-variables in valid formulas: Lemma 3.1 (Substitution Instances into Right-Variables). Suppose A is a PITL formula, p is one of A's right-variables (i.e., in RV (A)) and B is some PITL formula. Then if A is valid, so is the substitution instance A B p . Proof by contradiction. Let q be a variable not occurring in A or B and let C be a variant of A with all instances of p replaced by q (i.e., A q p ). The variable p is a right-variable of A so q is similarly a right-variable of C. It follows by induction on A's syntax that A B p and C B q denote exactly the same PITL formula. Consequently, in our reasoning about A B p , we can assume without loss of generality that p itself does not occur in B. This is because we can view A B p as being C B q . Now suppose by contradiction that A B p is not valid. By our previous discussion, also assume that p does not occur in B. Then some interval σ satisfies ¬(A B p ). We construct a variant σ ′ in which the value of variable p in each state σ ′ i equals true iff the suffix subinterval σ i↑ satisfies B. Hence σ ′ |= ✷(p ≡ B) and σ ′ |= ¬(A B p ). It readily follows from this and p being a right-variable that σ ′ satisfies ¬A since A B p only examines B in suffix subintervals. From σ ′ |= ¬A we have that A is not valid.
Later in Section 6, our completeness proof will need a deductive analogue of the semantically oriented Lemma 3.1 to permit us to infer from a theorem A and right-variable p in RV (A) another theorem A B p . One way to achieve this is by adding the next inference rule to the PITL axiom system in Table 2 for any formula A and variable p in RV (A): (3.2) If ⊢ A, then ⊢ A B p . Another possibility is an analogue of Inference Rule ✷ f Aux in Table 2, where the propositional variable p does not occur in A or B. However, it turns out that these are unnecessary since the axiom system in its current form is already sufficient to allow a suitable class of such substitutions. We now present a formal basis for this. A PITL formula A which is a theorem (i.e., ⊢ A) is called a right-theorem (denoted ⊢ rt A) if there exists a deduction of A in which A's right-variables never occur on the left of chop or in chop-star in any proof steps. However, any of A's variables not in RV (A) as well as any subsequently introduced auxiliary variables in the deductions are permitted to appear in some deduction steps in the left of chops or chop-stars. For example, if p is a right-variable of A, then no proof step can use p with Axiom P7 (e.g., ⊢ p ⊃ ✷ f p) since p is not a right-variable here owing to ✷ f p.
The completeness proof for PITL will ensure that any valid PITL formula A is indeed deducible as a right-theorem. We will refer to this here as right-completeness. Below is our main theorem for axiomatic completeness of PITL using right-completeness: Theorem 3.2 (Right-Completeness of PITL Axiom System). Any valid PITL formula A is a right-theorem of the axiom system, that is, if |= A, then ⊢ rt A.
The proof of this, our main result, is described later and concludes in Section 9. Right-theoremhood naturally yields the dual notion of right-consistency of a PITL formula A, that is, not ⊢ rt ¬A. Our completeness proof for PITL can therefore be regarded as not only showing that valid PITL formulas are right-theorems but also that any right-consistent PITL formula is satisfiable (compare with Lemma 2.1).
As already pointed out, the main reason we are interested in right-theorems is that suitable substitution instances of them are PITL theorems. Our need for this occurs when in Section 6 we reduce right-completeness for PITL to right-completeness for its subset PITL k without chop-omega. The lemma below formalises the substitution process: Lemma 3.3 (Substitution Instances of Right-Theorems). Let A and B 1 , . . . , B n be PITL formulas and p 1 , . . . , p n be some of A's right-variables. If A is a right-theorem, then so is the substitution instance A B 1 ,...,Bn p 1 ,...,pn , that is, ⊢ rt A B 1 ,...,Bn p 1 ,...,pn . Proof. We assume that auxiliary variables in A's proof (i.e., ones not in V A ) do not occur in B 1 , . . . , B n . In each step of A's proof, we replace each p i by B i to obtain ⊢ rt A B 1 ,...,Bn p 1 ,...,pn . Many PITL theorems in Appendix A can be checked to be right-theorems by inspection of the proof steps. For example, those with no right-variables are immediate right-theorems. We have not indicated in the appendix which theorems are right-theorems and will normally only designate formulas as right-theorems in the completeness proof when this is needed.
The next lemma concerns the relationship between derived rules and right-theorems: Lemma 3.4 (Right-Theorems from Some Derived Rules). Suppose the assumptions of a derived rule which deduces some PITL formula A are right-theorems. Furthermore, suppose that in the derived rule's own proof of A, none of A's right-variables occur on the left of chop or in chop-star (including in any nested deduced PITL theorems and derived rules). If A's right-variables are a subset of the union of the assumptions' right-variables, then A itself is a right-theorem.
We omit the proof. For example, Derived Rule DR13 in Appendix A (see also the abbreviated Table 4 found later in §7.4) lets us infer from the theorem ⊢ ✷A ⊃ B the theorem ⊢ ✷A ⊃ ✷B. It only requires the kind of reasoning mentioned in Lemma 3.4. Consequently, from ⊢ rt ✷A ⊃ B we can infer ⊢ rt ✷A ⊃ ✷B.
Readers are strongly encouraged to initially try to understand our completeness proof without consideration of right-theoremhood by simply viewing it as ordinary theoremhood and ignoring the prefix "right-". This can even be rigorously done by assuming that the optional inference rule (3.2) is part of the PITL axiom system. A subsequent, more thorough study of the material can then better take right-theoremhood into account. Indeed, we can then regard our completeness proof as two parallel proofs, a simpler one with (3.2) and another more sophisticated one which is based on right-theoremhood and Lemma 3.3 and hence does not assume (3.2). Incidentally, our completeness proof ultimately ensures that (3.2) is obtainable as a derived inference rule even if it is not in the axiom system.
Some Lemmas for Replacement
We now consider some techniques used in the completeness proof to replace selected right-instances in a PITL formula by other formulas.
Lemma 4.1. Let A 1 , A 2 , B 1 and B 2 be PITL formulas. If A 2 can be obtained from A 1 by replacing zero or more right-instances of B 1 in A 1 by B 2 , then the next implication is deducible as a right-theorem: ⊢ rt ✷(B 1 ≡ B 2 ) ⊃ (A 1 ≡ A 2 ). Proof. The proof involves induction on the syntax of formula A 1 , with each instance of B 1 regarded as atomic. We consider the cases when A 1 is B 1 itself, true, a propositional variable p, ¬C, C 1 ∨ C 2 , skip, C 1 ⌢ C 2 , and C ⋆ . The first three of these involve quite routine conventional propositional reasoning. The case for skip is trivial since A 1 and A 2 are identical. The case for chop-star is likewise trivial since this lemma does not permit replacement in its scope.
For the case for chop, assume A 1 and A 2 have the forms C 1 ⌢ C 2 and C 1 ⌢ C ′ 2 , respectively. Note that no replacements are done in the left of chop. By induction on A 1 's syntax, we deduce the next implication: ⊢ rt ✷(B 1 ≡ B 2 ) ⊃ (C 2 ≡ C ′ 2 ). This and PTL reasoning (see Derived Rule DR13 in Appendix A and also in the abbreviated Table 4 found later in §7.4) yields the implication below: ⊢ rt ✷(B 1 ≡ B 2 ) ⊃ ✷(C 2 ≡ C ′ 2 ). Lemma 3.4 ensures that our use here of Derived Rule DR13 indeed yields a right-theorem.
We can also deduce the next implication using Axiom P8 and some further temporal reasoning (see PITL Theorem T3 in Appendix A and also in Table 4 in §7.4): ⊢ rt ✷(C 2 ≡ C ′ 2 ) ⊃ (C 1 ⌢ C 2 ≡ C 1 ⌢ C ′ 2 ). These two implications together yield our goal below: ⊢ rt ✷(B 1 ≡ B 2 ) ⊃ (C 1 ⌢ C 2 ≡ C 1 ⌢ C ′ 2 ). This concludes Lemma 4.1's proof.
Lemma 4.1 yields a derived inference rule for Right Replacement of formulas: Lemma 4.2 (Right Replacement Rule). Let A 1 , A 2 , B 1 and B 2 be PITL formulas. Suppose that A 2 can be obtained from A 1 by replacing zero or more right-instances of B 1 in A 1 by B 2 . If B 1 and B 2 are deducibly equivalent as a right-theorem (i.e., ⊢ rt B 1 ≡ B 2 ), then so are A 1 and A 2 .
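To make the replacement pattern behind Lemmas 4.1 and 4.2 concrete, here is a small illustrative sketch; the tuple-based formula encoding and the function name are my own and not part of the axiom system. The recursion never enters a chop's left operand or a chop-star body, mirroring the restriction of replacement to right-instances.

```python
# Hypothetical AST sketch: formulas are nested tuples such as
# ('chop', A, B) or ('not', A); propositional variables are strings.
def right_replace(f, b1, b2):
    """Replace right-instances of b1 in f by b2."""
    if f == b1:                      # a right-instance of b1: replace it
        return b2
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'chop':                 # the left operand is kept untouched
        return ('chop', f[1], right_replace(f[2], b1, b2))
    if op == 'chopstar':             # no replacement inside chop-star
        return f
    return (op,) + tuple(right_replace(g, b1, b2) for g in f[1:])
```

For instance, replacing p by r in ('chop', ('not', 'p'), ('or', 'p', 'q')) only changes the occurrence of p inside the chop's right operand.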
Useful Subsets of PITL
We now describe five subsets of PITL and some associated properties which will be extensively used later on in different parts of the PITL completeness proof. We have chosen to collect material about the subsets here instead of introducing each subset as the need arises. This should make it easier for readers to review the definitions and features when required and also make the main steps of the completeness proof shorter and more focused. In addition, when taken as a whole, the combined presentation of the PITL subsets enables us to give a technical overview of some of the proof steps encountered. Table 5 later lists variables used for the subsets and other subsequently defined categories. 5.1. PTL with only Unnested Next Constructs. Let NL 1 denote the subset of PTL formulas in which the only temporal operators are unnested ◯s (e.g., ◯p ∨ ¬p but not ◯ ◯p ∨ ¬p). It is not hard to see that NL 1 formulas only examine an interval's first two states. They are therefore useful for describing automata transitions from one state to the next. The variables T and T ′ denote formulas in NL 1 .
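The NL 1 restriction (every next operator unnested) is easy to check syntactically. Here is an illustrative sketch under an assumed tuple encoding of formulas; the encoding and function name are mine, not from the text.

```python
# Formulas as nested tuples, e.g. ('next', 'p'), ('or', A, B), ('not', A);
# propositional variables and constants are plain strings.
def in_nl1(f, under_next=False):
    """True when no 'next' operator in f occurs inside another 'next'."""
    if isinstance(f, str):                      # variable or constant
        return True
    op, args = f[0], f[1:]
    if op == 'next':
        return not under_next and all(in_nl1(g, True) for g in args)
    return all(in_nl1(g, under_next) for g in args)
```

So a formula like ◯p ∨ ¬p passes, while ◯◯p fails because one next is nested in another.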
Below are some theorems which contain NL 1 formulas and are required in the completeness proof. None of these theorems are themselves in NL 1 . The proofs are in Appendix A. Recall that for our purposes we define PTL to be the subset of PITL with just skip and the derived temporal operators ◯ and ✸ shown in Table 1.
We also use a more expressive version of PTL denoted here as PTL u with a strong version of the standard temporal operator until, derivable in PITL: T until Y ≡ (T ∧ skip) ⋆ ⌢ Y. We limit until's left-hand operand to be a formula in NL 1 (defined previously in §5.1). Note that this definition of until using chop and chop-star results in any variable in the left operand of until not being a right-variable. Let Y and Y ′ denote PTL u formulas.
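The intended interval semantics of this strong until can be sketched as follows, under the assumption that it abbreviates (T ∧ skip) ⋆ ⌢ Y: some suffix subinterval satisfies Y, and T holds on every 2-state step beforehand. The list-of-dicts encoding of intervals and the predicate style are my own illustration.

```python
# An interval is a non-empty list of states (dicts mapping variables to
# booleans); T and Y are predicates on intervals.  T is evaluated on the
# 2-state subintervals, matching the NL1 restriction on until's left side.
def until(T, Y, interval):
    """Strong until: some suffix satisfies Y and T holds on each step before."""
    for i in range(len(interval)):
        if Y(interval[i:]) and all(T(interval[j:j + 2]) for j in range(i)):
            return True
    return False
```

For example, with Y testing for the one-state (empty) interval, until amounts to "T at every step of a finite interval".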
We establish right-completeness for PITL by a reduction to PTL u , instead of directly to PTL. It is not hard to show that our axiom system is complete for PTL u formulas. This is because we can deduce the next two PTL u axioms known to capture this kind of until's behaviour (the PITL proofs are in Appendix A): Consequently, we can reduce completeness for PTL u to it for PTL. In fact every PTL u theorem is a right-theorem. This is because the right-variables in T until A remain so in T70 and T71. Hence, the two PTL u axioms ensure that these variables remain right-variables in the proof steps for deducing a PTL u theorem in the PITL axiom system. See Kröger and Merz [KM08] for more about axioms for a variety of such binary temporal operators.
5.3. PITL without Omega-Iteration. Our completeness proof includes a step in which any chop-omega (defined in Table 1) is eliminated. Here ∅ denotes the omega-language with no elements. Let PITL k denote the PITL subset in which chop-star only occurs on the left of chops (like (3) in Thomas' theorem above) and is therefore restricted to finite intervals. The K in PITL k stands for "Kleene star". For example, the next two formulas are in PITL k :
In contrast, the two formulas below are not in PITL k : Observe that a PITL k formula can contain chop-star subformulas, which by the definition of PITL k are not themselves in it. With just finite time, any PITL formula A is easily re-expressed in PITL k as A ⌢ empty (compare with Axiom P6 in Table 2). However this technique does not work for infinite time. We also need Thomas' theorem (Theorem 5.1) to ensure that any PITL formula A has a semantically equivalent PITL k formula K for both finite and infinite time (i.e., |= A ≡ K). For example, one way to re-express the PITL formula (skip ∧ p) ⋆ in PITL k is ✷(more ⊃ p). It follows that any chop-omega formula is re-expressible in PITL k . For instance, for any PITL formula B, the formula (skip ∧ B) ω is semantically equivalent to ✷✸ f (skip ∧ B).
Later on in Section 6 we employ Thomas' theorem to easily reduce axiomatic completeness for PITL to that for PITL k . More precisely, we will formally establish there that for any PITL formula A, there exists a semantically equivalent PITL k formula K such that the formula A ≡ K is deducible as a PITL theorem. Hence, by simple propositional reasoning, if A is consistent, so is K and any model for K is also one for A. The remainder of the overall completeness proof then reduces completeness for PITL k to it for PTL u .
Choueka and Peleg [CP83] give a simpler proof of Thomas' theorem using standard deterministic omega automata. Readers favouring an automata-theoretic perspective can therefore regard the theorem in the context of PITL as a basis for implicitly determinising the original PITL formula, resulting in a semantically equivalent one in PITL k .
Right-Chops and Chain Formulas
For any PITL formula A, we call a chop formula in A a right-chop if it is not in another chop's left operand or in a chop-star. Right-chops help reduce PITL k to PTL u . We illustrate them with the formula below: (5.1) The following three formulas all occur as right-chops in this: Only the second instance of p ⌢ p ′ in formula (5.1) is a right-chop. In contrast, the first instance of p ⌢ p ′ is not a right-chop since it is within the left operand of another chop.
Observe that the right-chops of a PITL formula A are exactly those subformulas in A, including possibly A itself, which have chop as their main operator and are right-instances (previously defined in Section 3).
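The syntactic walk that collects right-chops can be sketched as follows, again using my own tuple encoding of formulas: the walk descends through every operand except a chop's left operand and the body of a chop-star.

```python
# Formulas as nested tuples, e.g. ('chop', A, B), ('chopstar', A);
# propositional variables are plain strings.
def right_chops(f):
    """Subformulas with chop as main operator that are right-instances."""
    if isinstance(f, str):
        return []
    op = f[0]
    if op == 'chop':
        return [f] + right_chops(f[2])   # only the right operand is scanned
    if op == 'chopstar':
        return []                        # nothing inside chop-star qualifies
    out = []
    for g in f[1:]:
        out += right_chops(g)
    return out
```

For a formula of the shape (p ⌢ q) ⌢ (p ⌢ q), only the whole chop and the second inner chop are reported, matching the discussion of the two instances of p ⌢ p ′ above.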
In addition to right-chops, the reduction of a PITL k formula to PTL u employs a class of PTL u formulas involving disjunctions and sequential chains of restricted constructs. Let a chain formula be any PTL u formula with the syntax below, where w is a state formula, T is an NL 1 formula and G and G ′ are themselves chain formulas: G ::= empty | w ∧ G | G ∨ G ′ | T until G. The operator until in chain formulas involves a quite limited version of the PITL operator chop-star which is much easier to reason about than full chop-star. The next lemma exploits this and shows that a chop in which the left operand is a chain formula and the right one is in PTL u can be re-expressed as a deducibly equivalent PTL u formula.
Lemma 5.2. For any chain formula G and any PTL u formula Y, the chop G ⌢ Y is deducibly equivalent to some PTL u formula.
Proof. We do induction on G's syntax using the deducible equivalences below in which w is a state formula, T is an NL 1 formula and G ′ and G ′′ are themselves chain formulas: The first of these is an instance of PITL Axiom P5. The second and third are respective instances of PITL Theorems T42 and T18 in Appendix A (see also the abbreviated Table 4 found later in §7.4). The fourth uses the earlier ITL-based definition of the temporal operator until in §5.2 and Axiom P2 which itself concerns chop's associativity.
For example, the left chop operand in the PITL formula (p ∧ (q until empty)) ⌢ skip is a chain formula. The chop itself is deducibly equivalent to the PTL u formula p ∧ (q until skip).
Our completeness proof will ultimately apply Lemma 5.2 when in Section 9 we later replace the left operands of a consistent PITL k formula's right-chops with chain formulas. For this to work, we will also need auxiliary variables of the kind now described. 5.5. Auxiliary Temporal Assignments. When we later represent automata runs in PITL, it is convenient to generalise formulas of the form p ← B (the temporal assignment construct defined in Table 1) to conjunctions of several of these. Please refer back to Section 2 for a brief explanation about the meaning of temporal assignment. We call such a conjunction an Auxiliary Temporal Assignment (ATA). It has the form given below: for some n ≥ 0, where each A i is a PITL formula, there are n distinct auxiliary propositional variables q 1 , . . . q n and the only ones of them permitted in each A i are q 1 , . . . q i−1 . All other propositional variables are allowed in any A i . Here is a sample ATA with one nonauxiliary variable r and two auxiliary variables p and q: Variables such as D and D ′ denote ATAs. Two ATAs are disjoint if they have distinct auxiliary variables.
Let us now look at how to formally introduce ATAs containing auxiliary variables into deductions for later use within the completeness proof in §9.2.
Lemma 5.3 readily generalises to reduce a formula's right-consistency to that for a conjunction of it and a suitable ATA: Lemma 5.4 (The Temporal Operator ✷ f , ATAs and Right-Consistency). Let A be a PITL formula and D an ATA with no auxiliary variables in A. If A is right-consistent, then so is the conjunction A ∧ ✷ f D.
Proof. For some n ≥ 0, the ATA D contains n auxiliary variables and has the form ⋀ 1≤i≤n (q i ← B i ). We first apply Lemma 5.3 n times to reduce the formula A's right-consistency to that for the next formula: A ∧ ✷ f (q 1 ← B 1 ) ∧ · · · ∧ ✷ f (q n ← B n ). (5.2) The conjunction of ✷ f -formulas is then re-expressed with a single ✷ f (see PITL Theorem T28 found in Appendix A and also included in the more abbreviated Table 4 later in §7.4) to obtain the formula below which is deducibly equivalent to (5.2): A ∧ ✷ f ⋀ 1≤i≤n (q i ← B i ). This is the same as our goal A ∧ ✷ f D.
5.6. Overview of Role of PITL Subsets in Rest of Completeness Proof. The PITL completeness proof can now be summarised using the PITL subsets just presented. Some readers may prefer to skip this material and proceed directly to the proof which starts in Section 6. Our goal here is to show that any right-consistent PITL formula A is satisfiable.
Here is an informal sequence of the transformations involved: A, then K, then K ′ ∧ ✷ f D ′ , and finally Y ∧ X, where K is a PITL k formula, K ′ is a PITL k formula in which the left operands of all right-chops are chain formulas, D ′ is an ATA and Y and X are respectively in PTL u and PTL. If A is right-consistent, then so are the formulas in all steps. From the completeness of the PTL u axiom system as discussed in §5.2 we have that the conjunction Y ∧ X is satisfiable. Furthermore, our techniques ensure that the models of a formula obtained from one of the transformations also satisfy the immediately preceding formula and hence by transitivity the original PITL formula A as well.
Important automata-theoretic techniques presented in Sections 7 and 8 help with the reductions to K ′ ∧ ✷ f D ′ and Y ∧ X in Section 9. Note that in the actual completeness proof (in Lemma 9.4 in §9.2), which for technical reasons involves a sequence of transformations from K to K ′ , we make use of a PITL k formula denoted K ′ m+1 rather than simply K ′ .
Reduction of Chop-Omega
If we assume right-completeness for PITL k (later proved as Lemma 9.4 in §9.2), then obtaining from a PITL formula a deducibly equivalent PITL k one is relatively easy. We first look at re-expressing chop-omega formulas in PITL k and then extend this to arbitrary PITL formulas.
Lemma 6.1 (Deducible Re-Expression of Chop-Omega in PITL k ). Suppose we have right-completeness for PITL k . Then for any PITL formula B, there exists a PITL k formula K with the same variables and no right-variables and for which the equivalence K ≡ B ω is a right-theorem (i.e., ⊢ rt K ≡ B ω ).
Proof. Thomas' theorem (Theorem 5.1) ensures that there exists some PITL k formula which is semantically equivalent to B ω and contains the same variables. From that formula we obtain one denoted here as K which has no right-variables by conjoining a trivially true ✸ f -formula containing a disjunction of all of B's variables and their negations. We therefore have |= K ≡ B ω .
Case for showing ⊢ rt K ⊃ B ω : The first step involves an instance of Axiom P10: ✷(K ⊃ (B ∧ more) ⌢ K) ⊃ (K ⊃ B ω ). (6.1) In addition, the next formula is valid: |= B ω ⊃ (B ∧ more) ⌢ B ω . From this and |= K ≡ B ω , we have |= K ⊃ (B ∧ more) ⌢ K. We then use the assumed right-completeness of PITL k to deduce the implication as a right-theorem. Now invoke ✷-generalisation (Axiom ✷Gen) on this to obtain ⊢ rt ✷(K ⊃ (B ∧ more) ⌢ K). Simple propositional reasoning involving that and the earlier deduced implication (6.1) establishes our immediate goal ⊢ rt K ⊃ B ω .
Case for showing ⊢ rt B ω ⊃ K: Let p be a propositional variable not in B ω or K. The next formula is valid (and an instance of Axiom P10): ✷(p ⊃ (B ∧ more) ⌢ p) ⊃ (p ⊃ B ω ). We then replace B ω by the semantically equivalent K: ✷(p ⊃ (B ∧ more) ⌢ p) ⊃ (p ⊃ K). (6.2) Now K is a PITL k formula and furthermore (B ∧ more) ⌢ p is as well since even if B does contain some chop-stars, B is located within the left of a chop. The valid formula (6.2) is in PITL k and hence a right-theorem by the assumed right-completeness for PITL k : ⊢ rt ✷(p ⊃ (B ∧ more) ⌢ p) ⊃ (p ⊃ K). Therefore, we can use Lemma 3.3 to obtain the theoremhood of the next PITL implication which has the formula B ω substituted into the right-variable p: ⊢ rt ✷(B ω ⊃ (B ∧ more) ⌢ B ω ) ⊃ (B ω ⊃ K). (6.3) We also deduce the following from the definition of chop-omega in terms of chop-star together with Axiom P9 and some simple temporal reasoning: ⊢ rt B ω ⊃ (B ∧ more) ⌢ B ω . We now do ✷-generalisation (Axiom ✷Gen) on this and then use propositional reasoning on it with the previous formula (6.3) to obtain the right-theorem ⊢ rt B ω ⊃ K, which is our immediate goal.
Lemma 6.2 (Reduction of PITL to PITL k ). If right-completeness holds for PITL k , then for any PITL formula A, there exists an equivalent PITL k formula K with exactly the same propositional variables and right-variables such that ⊢ rt A ≡ K.
Proof. We first re-express each of A's chop-stars B ⋆ i not in the left of a chop or another chop-star using the next deducible equivalence (see PITL Theorem T58 found in Appendix A and also included in the more abbreviated Table 4 in §7.4): This splits B ⋆ i into cases for finite and infinite time. Note that there are no right-variables in (6.4) since all of its variables occur within a chop-star. Hence the equivalence, once deduced, is trivially a right-theorem.
Lemma 6.1 ensures some PITL k formula K ′ i exists with the same variables as B i , no right-variables and the right-theorem ⊢ rt K ′ i ≡ B ω i . Hence like (6.4), the next equivalence, obtained from (6.4) by replacing B ω i with K ′ i , is a right-theorem and both sides have the same variables and no right-variables. Using it to replace each B ⋆ i yields a PITL k formula K which has the same variables as A and is equivalent to it (i.e., ⊢ rt A ≡ K). No right-variables in A are in any replaced B ⋆ i . Hence A and K have the same right-variables.
Deterministic Finite-State Semi-Automata And Automata
The remainder of our axiomatic completeness proof for PITL mostly concerns reducing PITL k to PTL u . Now PITL with finite time expresses the regular languages and can readily encode regular expressions (see for example [Mos04] which reproduces our results with J. Halpern in [Mos83a]). We can therefore employ some kinds of deterministic finite-state semi-automata and automata which provide a convenient low-level framework for finite time to encode the behaviour of an arbitrary PITL formula. Our completeness proof utilises these semi-automata and automata to build a variant semi-automaton discussed in the next Section 8 to assist in reducing PITL formulas on the left of right-chops to chain formulas in PTL u . The reduction applying these techniques to go from PITL k to PTL u is presented in Section 9. After introducing the semi-automata and automata, we will consider various semantically equivalent ways to represent them in temporal logic, each with its benefits. Some require PITL and others just PTL. The representations in PITL are at a higher level and fit well with our proof system, especially since we can assume completeness for PITL with finite time. In some later sections, we consider deducing some of the properties as theorems.
In order to define an alphabet for our semi-automata and automata, we introduce a special kind of state formula which serves as a letter and is called here an atom. An atom is any finite conjunction in which each conjunct is some propositional variable or its negation and no two conjuncts share the same variable. The Greek letters α and β denote an individual atom. For any finite set of propositional variables V, let Σ V be some set of 2 |V | logically distinct atoms containing exactly the variables in V. For example, if V = {p, q}, we can let Σ V be the set of the four atoms shown below: One simple convention is to assume that the propositional variables in an atom occur from left to right in lexical order. If V is the empty set, then Σ V contains just the formula true.
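The set Σ V of 2 |V | atoms can be enumerated mechanically; the sketch below follows the convention of listing a variable or its negation in lexical order, with my own tuple-of-literals representation.

```python
from itertools import product

# Each atom is a tuple of literals over the sorted variables, e.g.
# ('p', '¬q') for the atom p ∧ ¬q over V = {p, q}.
def atoms(V):
    """All 2**|V| logically distinct atoms over the variable set V."""
    vs = sorted(V)
    return [tuple(v if bit else '¬' + v for v, bit in zip(vs, bits))
            for bits in product([True, False], repeat=len(vs))]
```

When V is empty, the single empty conjunction plays the role of the formula true, so Σ V still has exactly one element.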
A finite, nonempty sequence of atoms forms a word. Each possible word corresponds to some collective state-by-state behaviour of the selected variables in a finite interval. For our interval-oriented application of words we never utilise the word containing no letters (commonly denoted ε in the literature).
7.1. Deterministic Finite-State Semi-Automata. We define a deterministic finite-state semi-automaton S to be a quadruple (V S , Q S , q I S , δ S ) consisting of a finite set of propositional variables V S , together with a finite, nonempty set of control states Q S = {q 1 , . . . , q m }, an initial control state q I S ∈ Q S and a deterministic transition function δ S : Q S × Σ V S → Q S . The sets V S and Q S must be disjoint, i.e., V S ∩ Q S = ∅. We use propositional variables q 1 , . . . , q m to denote control states since this helps when expressing the semi-automaton's behaviour in PITL. A run on a finite word α 1 . . . α k in Σ + V S is a sequence of k control states q ′ 1 . . . q ′ k with δ S (q ′ i , α i ) = q ′ i+1 for each i : 1 ≤ i < k. Hence the semi-automaton makes just k − 1 transitions and consequently ignores the details of the last atom α k . Therefore the semi-automaton differs from a conventional automaton which would have a run with k + 1 control states involving k transitions and the examination of all k atoms. Furthermore, the definition of a semi-automaton has no set of final control states and hence no acceptance condition. We abbreviate the set of atoms Σ V S as Σ S since the elements of Σ V S serve as S's letters. The semi-automaton S's behaviour is expressible in temporal logic by regarding each control state q i to be a propositional variable which is true when q i is S's current control state. Before showing how S's runs are expressed in PTL, we first define a state formula init S which ensures that the initial control state is q I S and also a transitional formula T S in NL 1 which captures the behaviour of δ S : If we assume finite time, then a run starting at S's initial control state is expressed as the PTL formula init S ∧ ✷(more ⊃ T S ) or alternatively as the chain formula init S ∧ (T S until empty) in PTL u .
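The run convention (k atoms give k control states, only k − 1 transitions, and the last atom is never examined) can be sketched as follows; the dictionary-based encoding of δ S is my own illustration.

```python
# delta maps (control_state, atom) pairs to control states.
def run(delta, q_init, word):
    """The semi-automaton run on 'word': k atoms yield k control states,
    making k-1 transitions and ignoring the final atom entirely."""
    states = [q_init]
    for atom in word[:-1]:
        states.append(delta[(states[-1], atom)])
    return states
```

In particular, a one-letter word produces a run consisting of just the initial control state, with no transitions at all.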
7.2. Deterministic Finite-State Automata. Semi-automata do not have an acceptance test and hence do not have associated accepting runs. We therefore now define a deterministic finite-state automaton which includes an acceptance test. As we shortly illustrate, this can be constructed to recognise a given PITL formula in a finite interval. Let M be a quintuple (V M , Q M , q I M , δ M , τ M ). The first four entries are as for a semi-automaton. The last entry τ M : Q M → 2 Σ M is a conditional acceptance function from control states to sets of letters. A run is the same as for a semi-automaton. Our notion of acceptance of a word does not use a conventional set of final control states but instead has the function τ M make all control states conditionally final. An accepting run on a finite word α 1 . . . α k in Σ + M with k atoms is any run of k control states q ′ 1 . . . q ′ k with q ′ k ∈ τ M (α k ). Therefore, a control state q ∈ Q M is regarded as a final one only when the automaton sees an atom α with α ∈ τ M (q). A test for this is expressible as the state formula acc M defined below: If we assume finite time, an accepting run of M starting at M 's initial control state is expressed as the PTL formula init M ∧ ✷(more ⊃ T M ) ∧ fin acc M or alternatively as the chain formula init M ∧ (T M until (acc M ∧ empty)) in PTL u . As a result of our convention for runs and accepting runs, the automaton M 's operation requires one state less than a conventional one to accept a word. For example, it can accept one-letter words without the need for any state transitions. In fact, such an automaton M only recognises words with at least one letter (i.e., in Σ + M ). This is perfect when we utilise semi-automata and automata to mimic PITL formulas since ITL intervals have at least one state.
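The conditional acceptance test can be sketched in the same assumed encoding (the names are hypothetical): after the run, the final control state q must satisfy α k ∈ τ M (q), where α k is the word's last atom.

```python
# delta maps (state, atom) to states; tau maps each state to the set of
# atoms under which that state counts as conditionally final.
def accepts(delta, q_init, tau, word):
    """Run on all but the last atom, then test the last atom against tau."""
    q = q_init
    for atom in word[:-1]:
        q = delta[(q, atom)]
    return word[-1] in tau[q]
```

As noted above, a one-letter word is accepted or rejected without any state transitions, which matches the fact that ITL intervals always have at least one state.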
The regular expressiveness of PITL with finite time ensures that any PITL formula B can be recognised by some M . The set V B of propositional variables in B and the set Q M of M 's control states are assumed to be disjoint. Formally, we have the next valid formula expressed in QPITL (defined in Section 2): For instance, below is a sample automaton M to recognise finite intervals satisfying the formula (skip ∧ p) ⌢ skip ⌢ skip ⋆ ⌢ (empty ∧ ¬p), which is semantically equivalent to the PTL formula p ∧ ◯ ◯ ✸(empty ∧ ¬p): Here is an accepting run for the 5-letter word p ¬p p p ¬p: q 1 q 2 q 3 q 3 q 3 . Below are the values of q 1 , . . . , q 4 over an associated 5-state interval in which p has the behaviour p ¬p p p ¬p: (q 1 , ¬q 2 , ¬q 3 , ¬q 4 ) (¬q 1 , q 2 , ¬q 3 , ¬q 4 ) (¬q 1 , ¬q 2 , q 3 , ¬q 4 ) (¬q 1 , ¬q 2 , q 3 , ¬q 4 ) (¬q 1 , ¬q 2 , q 3 , ¬q 4 ). (7.2) In each tuple, we show the unique active control state in boldface. For instance, q 2 is true in the second interval state since q 1 ∧ p is true in the first one.
7.3. ATAs for Semi-Automata and Automata. The runs of a deterministic semi-automaton or deterministic automaton from the initial control state can alternatively be expressed with an ATA (defined in §5.5). We will consider the case for a semi-automaton S, but the technique is identical for an automaton M . Now PITL with finite time can express all regular languages in Σ + S . For each control state q of S, the set of words in Σ + S for which S starts in the initial control state q I S and ends in q is regular. The regular expressiveness of PITL with finite time ensures that there exists some corresponding PITL formula C S,q which only has variables in the set V S and expresses this set of words. In principle, such a formula can be obtained by adapting standard techniques for constructing a regular expression from a conventional finite-state automaton. Now let the ATA D S denote the conjunction ⋀ q∈Q S (q ← C S,q ). We express finite runs in PITL using finite ∧ ✷ f D S . Here is such an ATA for the earlier sample automaton in (7.1): Note that the case for q 3 simplifies to q 3 ← (p ∧ true). The 5-tuple sample run in (7.2) reflects behaviour in prefix subintervals for the previous illustrative word p ¬p p p ¬p. For example, q 2 is true in just the second interval state since the 2-state prefix subinterval is the only prefix subinterval satisfying the formula skip ∧ p.
For any deterministic automaton M , let D M denote some ATA obtained from M in exactly the same way as for a semi-automaton. 7.4. Formal Equivalence of the Two Representations of Runs. For finite time, the PITL formula ✷ f D S expresses all runs of S starting from its initial control state. Hence for finite time this formula is semantically equivalent to the previous formulas for this behaviour (e.g., the PTL formula init S ∧ ✷(more ⊃ T S )). Consequently, the next valid formula relates the two ways of expressing S's runs: The use of a single example (7.1) for both representations of S's runs can be justified from this. An automaton M 's accepting runs can be expressed with finite ∧ (✷ f D M ) ∧ fin acc M . The QPITL formula below is valid for any PITL formula B and automaton M which recognises B: The valid PITL k formula (7.3) just given relates two ways of representing in temporal logic the runs of a finite-state semi-automaton (that is, ✷ f D S and init S ∧ ✷(more ⊃ T S )). It includes an explicit assumption about finite time. The next Lemma 7.1 eliminates this requirement and provides a way to re-express ✷ f D S as an equivalent PTL formula in deductions concerning infinite time. The proof of Lemma 7.1 only involves temporal logic and requires no explicit knowledge about omega automata.
For the convenience of readers studying our deductions here and later on in Section 9, Table 4 lists every PITL theorem and derived rule explicitly mentioned somewhere prior to Appendix A. The appendix itself contains all needed PITL theorems and derived rules as well as their individual proofs.
Proof of Lemma 7.1. The validity of implication (7.3), together with completeness for PITL with finite time, ensures that (7.3) is also a deducible theorem: We then deduce from that and Inference Rule ✷ f FGen the next theorem: From this and some interval-based temporal reasoning about ✷ f (using properties of the underlying modal system K; see Appendix A.2) we can then deduce the equivalence below: Let us now re-express ✷ f init S as the equivalent state formula init S (see PITL Theorem T37): We also want to re-express ✷ f ✷(more ⊃ T S ) as the PTL formula ✷(more ⊃ T S ). This can be done by first re-expressing ✷ f ✷ as ✷ ✷ f (see PITL Theorem T55) to yield the equivalence below: Let us now consider how to eliminate the operator ✷ f in the subformula ✷ ✷ f (more ⊃ T S ). The fact that any NL 1 formula T only sees an interval's first two states ensures that the next equivalence is valid and also deducible (see PITL Theorem T62): A dual form (see PITL Theorem T63) is readily deduced for use with T S : We employ this with Derived Rule DR12 to obtain an equivalence for eliminating the ✷ f operator in ✷ ✷ f (more ⊃ T S ): (7.6) Equivalence (7.4)'s theoremhood, which is our immediate goal, then readily follows by simple propositional reasoning from the deduced equivalences (7.5) and (7.6).
Compound Semi-Automata for Suffix Recognition
Let a compound semi-automaton R be a vector of semi-automata S 1 , . . . , S n for some n ≥ 1 with disjoint sets of control states. We take V R to be the set of propositional variables in the semi-automata S 1 , . . . , S n which are not also control states. The purpose of R is to perform what we call suffix recognition. This is a way to determine which of a finite interval's suffix subintervals satisfy some given PITL formula B. Suffix recognition is a stepping stone enabling us to subsequently perform the infix recognition already briefly mentioned in §5.6. Later on in Section 9 this feature of R ensures that for a given PITL k formula K with m right-chops (previously defined in §5.4), we can utilise m such compound semi-automata to obtain an ATA for infix recognition to replace the left sides of K's right-chops with PTL u chain formulas (also introduced in §5.4). The n individual semi-automata S 1 , . . . , S n in R are meant to operate lockstep in parallel and so simultaneously make state transitions. For each i : 1 ≤ i < n, we require for the set V S i+1 , which contains propositional variables examined by S i+1 , that V S i+1 ⊆ V S i ∪ Q S i . Hence the control states of S i are allowed to occur within the letters for S i+1 and any semi-automata of higher index but not vice versa. This enables each semi-automaton to optionally observe control states of all semi-automata with lower index when it makes transitions. In our particular construction of R, the set V R simply equals the set V B of propositional variables in the PITL formula B and also equals the lowest-indexed semi-automaton S 1 's set V S 1 of propositional variables used to form the atoms Σ S 1 . Let R's ATA D R be a conjunction of the ATAs for the semi-automata S 1 , . . . , S n . It is not hard to check that D R obeys the ATA requirement limiting where auxiliary variables can occur (as specified in the definition of ATAs in §5.5) and is therefore well-formed.
We perform suffix recognition by exploiting standard techniques originally developed by McNaughton [McN66] to construct deterministic omega automata. Choueka [Cho74] later applied McNaughton's insights to some constructions for automata on finite words. Our discussion here likewise concerns finite-time behaviour and avoids omega automata. Furthermore, this section deals with semantic issues but not deductions.
8.1. Overview of Construction of Compound Semi-Automaton. The compound semi-automaton R to suffix recognise B is built from several modified copies of a deterministic automaton running lockstep in parallel. We also define an associated chain formula G R . Here is a summary:
• We initially construct R and G R to just check whether B is true in any given finite suffix subinterval of the overall finite interval in which R is run. Consequently, G R can be used to mimic B.
• We first construct a deterministic finite-state automaton M (discussed in §7.2) to recognise the regular language associated with B in finite time. Let n be the number of control states, that is, n = |Q M |.
• We do not use M directly but instead construct n + 1 semi-automata S 1 , . . . , S n+1 based on M . The compound semi-automaton R is a vector of them.
• Our construction ensures that always at least one semi-automaton is in (its copy of) M 's initial control state and so available to start testing for B in the suffix subinterval commencing at the current state.
• A suffix subinterval satisfies B iff there exists a simulation of an accepting run of M which starts in the subinterval's first state, ends in its last one (the same as the overall interval's final state) and is formed by combining up to n + 1 pieces of runs of the semi-automata S 1 , . . . , S n+1 . The successive partial runs are performed on semi-automata of decreasing index.
8.2. Construction of the Individual Semi-Automata.
Let us now consider the details of the n + 1 semi-automaton variants S 1 , . . . , S n+1 of M . A semi-automaton S k has its own disjoint set Q S k = {q S k 1 , . . . , q S k n } of copies of the n control states in M and is initialised exactly as M would be and hence starts in (its copy of) M's initial control state. We let S k examine the control states of semi-automata with lower index (i.e., S 1 , . . . , S k−1 ) when it makes its transitions in lockstep with them. Hence, the set of propositional variables V S k is the union of V M and ⋃ 1≤j<k Q S j , and all propositional variables in an atom α in Σ S k are therefore either in V M or are control states of the semi-automata S 1 , . . . , S k−1 .
We now define the transition function δ S k of each semi-automaton S k in R for use when all of the semi-automata operate in lockstep. The transition function δ S k : Q S k × Σ S k → Q S k is deterministic like M's, but more complicated. For each pair q S k i , α in Q S k × Σ S k , there are two distinct possible cases based on the values of q S k i and α. We now define these cases and the associated transitions: • The pair q S k i , α is active: This occurs when for every j < k, the pair's atom α assigns the control variable q S j i to be false. It corresponds to a situation where S k is the semi-automaton of lowest index in R currently in (its own copy q S k i of) M's control state q M i and is itself also called active. Let β ∈ Σ M be the atom in Σ M obtained from α by only using the propositional variables in V M and thereby ignoring the control variables in α. Now we have that δ M (q M i , β) = q M j for some q M j ∈ Q M . Define the transition δ S k (q S k i , α) to be the corresponding q S k j ∈ Q S k . • The pair q S k i , α is inactive: If the first case does not apply, then S k shares (its copy of) M's control state q M i with some semi-automaton of lesser index as seen by S k via the atom α. We define the transition δ S k (q S k i , α) to equal the initial control state of S k . Hence S k makes a transition from its current control state to (its copy of) M's initial control state and so in effect reinitialises itself. Our construction of R ensures that some other semi-automaton with lower index which is both active and presently in (its own copy of) the same control state q M i of M now indeed takes over from S k . We also say that S k is inactive and that the two semi-automata merge. Figure 1 gives an example of a deterministic automaton M with four states and a run of an associated compound semi-automaton with five semi-automata S 1 , . . . , S 5 .
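The active/inactive behaviour just defined can be sketched concretely. Below is a minimal Python illustration of our own (the toy two-state DFA M, its transition function and the letters 'a', 'b' are hypothetical choices for illustration, not taken from the construction above): n + 1 copies of M run in lockstep, a copy resets to the initial state whenever a lower-indexed copy shares its control state, and we check the invariant from the overview that at every step some copy sits in M's initial state, ready to start testing the suffix beginning there.

```python
# Minimal sketch (our own illustration) of lockstep semi-automata with merging.
# Hypothetical 2-state DFA M over letters 'a' and 'b':
Q = [0, 1]
INIT = 0

def delta(q, letter):
    # toy transition function: 'a' toggles the state, 'b' keeps it
    return (q + 1) % 2 if letter == 'a' else q

def lockstep_run(word, n_copies):
    """Run n_copies copies S_1..S_{n_copies} of M in lockstep on word."""
    states = [INIT] * n_copies          # all copies start in M's initial state
    history = [list(states)]
    for letter in word:
        new_states = []
        for k, q in enumerate(states):
            # S_k is "active" iff no lower-indexed copy shares its state
            active = all(states[j] != q for j in range(k))
            if active:
                new_states.append(delta(q, letter))  # behave like M
            else:
                new_states.append(INIT)              # merge: reinitialise
        states = new_states
        history.append(list(states))
    return history

hist = lockstep_run('abab', n_copies=3)   # n + 1 = 3 copies for |Q_M| = 2
# Invariant: some copy is in M's initial state at every step.
assert all(INIT in snapshot for snapshot in hist)
```

By the pigeonhole principle, with n + 1 copies of an n-state automaton at least two copies always share a state, so at least one copy merges at every transition and lands back in the initial state, which is what the final assertion observes.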
Recall that our representation of M's n control states using n propositional variables q M 1 , . . . , q M n has exactly one of the variables being true at any time. Hence we represent the n control states of a semi-automaton S k using n propositional variables q k 1 , . . . , q k n . Therefore the subset of atoms in Σ S k extracted from R's composite runs always has exactly one variable q j i true for each semi-automaton S j with j < k. This property of the runs follows by induction on k. In contrast, the full set of atoms for Σ S k includes for each index j with j < k some pathological atoms in which none or more than one of the q j i are true. Nevertheless, actual runs of S k in R never encounter such atoms, so we need not concern ourselves with the precise way δ S k is defined to handle them in transitions.
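As a small concrete check of this one-hot representation (our own illustration, not part of the construction): of the 2^n truth assignments to n propositional variables encoding an n-state automaton's control, only n are legitimate control-state atoms; all the others are the pathological atoms just mentioned.

```python
from itertools import product

n = 4                                   # hypothetical number of control states
assignments = list(product([False, True], repeat=n))
one_hot = [a for a in assignments if sum(a) == 1]   # exactly one q_i true

assert len(one_hot) == n                 # the n legitimate control-state atoms
assert len(assignments) - len(one_hot) == 2**n - n  # the pathological rest
```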
8.3. Formalisation of Suffix Recognition in PITL.
The following lemma formalises the finite-time behaviour of the compound semi-automaton R in PITL and uses an associated chain formula G R in PTL u which we construct in the proof: Lemma 8.1. For any PITL formula B, there exists a compound semi-automaton R with V R = V B and associated ATA D R and chain formula G R such that R's control variables are not in B and the next implication is valid: This lemma provides a way to replace right-instances of a PITL formula B by a chain formula G R in formulas restricted to finite time. However, it also serves as a basis for later replacing the left-hand sides of chops with chain formulas. The lemma is entirely semantic and so does not depend on any particular axiom system or deductions. We will later readily deduce the lemma's implication (8.1) by invoking completeness for PITL with finite time to obtain immediate theoremhood of the implication and some valid variants of it. Hence, from the standpoint of axiom systems and deductions, there is no need to know Lemma 8.1's proof or even any further details of R, D R and G R .
Proof of Lemma 8.1. The construction of R ensures that the set union Q S 1 ∪ · · · ∪ Q S n+1 of control variables of the semi-automata S 1 , . . . , S n+1 contains no elements of the set V B of propositional variables occurring in B.
We will obtain the chain formula G R by mimicking an accepting run of M . This involves combining together pieces of runs from some of the semi-automata S 1 , . . . , S n+1 . It needs at most n merges since when two semi-automata merge, only the one of lesser index continues testing. The chain formula G R , when suitably combined with the compound semi-automaton R's ATA, will capture the needed behaviour which we previously formalised in the implication (8.1).
We first define state formulas to test for active and merging semi-automata and also introduce a modified acceptance test:
[Figure 1: the sample automaton M for B (already presented in (7.1)), together with the control-state behaviour of each S k in a sample 8-state interval σ = σ 0 . . . σ 7 and the value of acc ′ k for each S k at the end in state σ 7 (false, true, false, false, false). Only control states' indices are shown (e.g., 1 for q 1 ). Active semi-automata are shown in boldface. All control states used in any accepting runs of M are underlined. "S 2 ←" shows the merge into semi-automaton S 2 in an accepting run for M .]
It follows from the definition of an active semi-automaton that j < k. • acc ′ k : Let us also define a propositional test acc ′ k based on the state formula acc M for checking M 's conditional acceptance test τ M . We use a substitution instance of acc M to adapt it to S k and its own copies of M 's control states.
Note that a semi-automaton S has no conditional acceptance test τ S and indeed the role of acc ′ k here somewhat differs from that of acc M . As usual, for an individual semi-automaton S k in the compound semi-automaton R, the state formula init S k tests for the initial control state of S k and the NL 1 formula T S k expresses the transition function δ S k of S k in temporal logic.
Let us now inductively define for each pair j, k : 1 ≤ j ≤ k ≤ n + 1 a chain formula G ′ k,j to be true iff a run segment starts with currently active semi-automaton S k in some unspecified control state, involves exactly j active automata (i.e., j − 1 mergers) and ends with acceptance of the word seen.
For example, the chain formula init S 1 ∧ active 1 ∧ G ′ 1,1 corresponds to an accepting run of M in which the semi-automaton S 1 recognises B on its own. The conjunction init S 2 ∧ active 2 ∧ G ′ 2,2 corresponds to an accepting run of M involving first semi-automaton S 2 and then semi-automaton S 1 . The semi-automaton S 2 starts recognising B and eventually merges with semi-automaton S 1 , which completes the accepting run. Now let us construct from the chain formulas G ′ k,j the chain formula G R specifying an accepting run involving some of the n + 1 semi-automata to recognise the PITL formula B. As in the examples, we start in some active copy of M's initial control state: The construction of the compound semi-automaton R together with D R and G R ensures the desired validity of implication (8.1).
To assist readers, we list in Table 5 a variety of variables and where they are introduced.
9. Reduction of PITL to PTL with Until
Most of the remaining part of the PITL completeness proof concerns using compound semi-automata to show right-completeness for PITL k by reduction to PTL u . Recall from §5.4 that a chop construct in a formula A is a right-chop iff it does not occur in another chop's left operand or in a chop-star.
The PITL theorems mentioned here in proofs are found in Table 4 in §7.4 and also in Appendix A. Lemma 9.1, which employs the compound semi-automaton R, generalises suffix recognition to infix recognition for checking which of a (possibly infinite-time) interval's finite-time infix subintervals satisfy some given PITL formula by instead using a chain formula.
Lemma 9.1. For any PITL formula B, there exists a compound semi-automaton R with V R = V B , associated ATA D R and chain formula G R such that R's control variables are not in B and the next formula is a PITL theorem: Proof. Lemma 8.1 ensures the validity of the implication below for some compound semi-automaton R, associated ATA D R and chain formula G R : This and completeness for PITL with finite time (Theorem 2.2) ensure the next implication's theoremhood: This and Inference Rule ✷ f FGen yield the next formula: Simple reasoning about ✷ f (see PITL Theorem T25) results in the following: We re-express ✷ f ✷ f D R as ✷ f D R and commute ✷ f ✷ (see PITL Theorems T46 and T55) to obtain our goal (9.1).
The lemma below later plays a key role in reducing right-chops in a PITL k formula to PTL u formulas by first replacing their left sides with chain formulas in PTL u : Lemma 9.2. For any PITL formulas B and C, there exists a compound semi-automaton R with V R = V B , associated ATA D R and chain formula G R such that R's control variables are not in B or C and the next formula is deducible as a right-theorem: Proof. Lemma 9.1 yields R, D R , G R and the next implication for infix recognition of B: Note that this has no right-variables. We also employ the next implication, which is an instance of PITL Theorem T30 and concerns interval-based reasoning about the left of chop: (9.4) Inference Rule ✷Gen then obtains from implication (9.4) the formula below: This with PTL-based reasoning involving the valid PTL formula ✷(p ⊃ q) ⊃ (✷ p) ⊃ (✷ q) with Axiom VPTL, where p is replaced by ✷ f (B ≡ G R ) and q by (B ⌢ C) ≡ (G R ⌢ C), together with modus ponens results in the following: (9.5) Implications (9.3) and (9.5) and simple propositional reasoning yield our goal (9.2).
Lemma 9.3. Any PITL k formula K in which the left sides of all right-chops are chain formulas is deducibly equivalent to some PTL u formula Y , that is, Proof. Starting with K's right-chops not nested in other right-chops, we inductively replace them by equivalent PTL u formulas. More precisely, if n is the number of K's right-chops, then we use n applications of Lemma 5.2 and the Right Replacement Rule (Lemma 4.2) to show that K is deducibly equivalent to some PTL u formula Y (i.e., ⊢ rt K ≡ Y ).
B. MOSZKOWSKI
For example, suppose K is (G 1 ⌢ skip) ∨ G 2 ⌢ (G 3 ⌢ w) and hence has 3 right-chops. We could start by first re-expressing either G 1 ⌢ skip or G 3 ⌢ w by an equivalent PTL u formula. For instance, if G 2 is the chain formula p until empty and G 3 is the chain formula q until empty, then G 3 ⌢ w will be replaced by the equivalent PTL u formula q until w. After this, G 2 ⌢ (G 3 ⌢ w) will first reduce to G 2 ⌢ (q until w) and finally to the PTL u formula p until (q until w).
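The reduction step in this example can be checked semantically. Below is a small Python sketch of our own (the finite-interval semantics encoding is ours, written directly from the standard definitions of chop, until and empty; it is an illustration, not part of the proof) that exhaustively verifies the example equivalence (q until empty) ⌢ w ≡ q until w on all finite intervals of up to four states.

```python
# Sketch (our own encoding) of finite-interval semantics for chop and until.
# An interval is a nonempty list of states; each state maps variables to bools.
from itertools import product

def holds_until(a, b, trace):
    # A until B on a finite interval: some suffix satisfies B, and A holds
    # on every earlier suffix
    return any(b(trace[i:]) and all(a(trace[j:]) for j in range(i))
               for i in range(len(trace)))

def chop(a, b, trace):
    # A ⌢ B: the two subintervals share a fusion state at position i
    return any(a(trace[:i + 1]) and b(trace[i:]) for i in range(len(trace)))

empty = lambda tr: len(tr) == 1
q = lambda tr: tr[0]['q']        # state formula: q at the first state
w = lambda tr: tr[0]['w']

q_until_empty = lambda tr: holds_until(q, empty, tr)

# Exhaustively compare the two sides on all intervals of length up to 4
for n in range(1, 5):
    for bits in product([False, True], repeat=2 * n):
        trace = [{'q': bits[2*i], 'w': bits[2*i+1]} for i in range(n)]
        assert chop(q_until_empty, w, trace) == holds_until(q, w, trace)
```

Both sides reduce to "w holds at some state preceded only by q-states", which is exactly why the chain formula's chop can be folded into a single until.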
9.2. Proof of the Main Completeness Theorem. We now establish right-completeness for PITL k and then use this to obtain right-completeness for PITL.
Lemma 9.4. Any valid PITL k formula can be deduced as a right-theorem.
Proof. We show that a right-consistent PITL k formula K is satisfiable. Our proof transforms K to a PTL u formula. Let m equal the number of K's right-chops. We employ m compound semi-automata to obtain ATAs for systematically replacing the left operands of K's right-chops by PTL u chain formulas. Note that if m = 0, then K has no chops but perhaps contains skip, so K itself is in PTL. We will construct a sequence of m + 1 PITL k formulas K ′ 1 , . . . , K ′ m+1 . In the final one K ′ m+1 , the left operands of all right-chops are chain formulas, so K ′ m+1 is deducibly equivalent to some PTL u formula by Lemma 9.3. For example, suppose K has the form ( . Then K has 3 right-chops so m equals 3 and This has the form B i ⌢ K ′′ i for some PITL formula B i and PITL k formula K ′′ i . Lemma 9.2 yields a compound semi-automaton R ′ i , ATA D R ′ i and a chain formula G R ′ i for which the next right-theorem is deducible: We employ Lemma 4.1 concerning replacement of right-instances to relate K ′ i and K ′ i+1 by replacing the selected . This and implication (9.6) together ensure the right-theorem . Without loss of generality, assume the control variables in the compound semi-automata ) just mentioned the next right-theorem: The left operand of each right-chop in K ′ m+1 is a chain formula. Hence by Lemma 9.3, we can deduce the equivalence of K ′ m+1 and some PTL u formula Y to obtain the PITL right-theorem ⊢ rt K ′ m+1 ≡ Y . By this and implication (9.7), the next implication is a right-theorem: (9.8) Right-variables in the original formula K do not occur in any D R ′ i since the construction of each D R ′ i only involves the left sides of K's right-chops. The right-variables in K are still right-variables in Y and implication (9.8). Now K's right-consistency and m applications of Lemma 5.4 ensure the right-consistency of K ∧ ⋀ 1≤i≤m (✷ f D R ′ i ). This is re-expressible as K ∧ ✷ f D ′ , where the ATA D ′ is the conjunction of the ATAs D R ′ 1 , . . .
, D R ′ m (we use PITL Theorem T28). Hence the formula K ∧ ✷ f D ′ is right-consistent. We deduce the equivalence of ✷ f D ′ and some PTL formula X as ⊢ X ≡ ✷ f D ′ by invoking Lemma 7.1 on the individual basic semi-automata in each R ′ i to re-express each one's runs in PTL and then forming the conjunction of the results. Now D ′ and X have the same variables. Hence the equivalence X ≡ ✷ f D ′ has no right-variables because of ✷ f D ′ and is a right-theorem (i.e., ⊢ rt X ≡ ✷ f D ′ ). This with the equivalence ⊢ rt ✷ f D ′ ≡ ⋀ 1≤i≤m (✷ f D R ′ i ) and implication (9.8) then yield the equivalence of the formulas K ∧ ✷ f D ′ and Y ∧ X as a right-theorem. Therefore the PTL u formula Y ∧ X, like K ∧ ✷ f D ′ , is right-consistent and by right-completeness for PTL u (discussed in §5.2) is satisfiable, as is K.
We now prove our main result Theorem 3.2 about right-completeness for PITL: Proof of Theorem 3.2. Let A be a right-consistent PITL formula. Lemma 9.4 ensures right-completeness for PITL k . Hence by this and Lemma 6.2, there exists some PITL k formula K having the same variables and right-variables as A and with the deducible equivalence ⊢ rt A ≡ K. Now K, like A, is right-consistent and so satisfiable by right-completeness for PITL k (Lemma 9.4). Hence A is satisfiable.
As we already remarked in Section 3, the completeness proof can be regarded as two parallel proofs. The simpler one uses the extra inference rule (3.2) mentioned there to avoid right-theorems and right-completeness. The more sophisticated proof uses right-theoremhood instead of the inference rule and ensures that any valid PITL formula is not just a theorem but a right-theorem.
This concludes the PITL completeness proof.
10. Some Observations about the Completeness Proof
We now consider various issues concerning the new PITL axiom system and techniques employed in the completeness proof. Most of the points address questions previously raised by others.
10.1. Alternative Axioms for PTL. Axiom VPTL in Table 2 can optionally be replaced by four lower-level axioms. Readers may wish to skip over the details now given. One of the lower-level axioms is Taut in Table 3, permitting PITL formulas which are substitution instances of conventional (nonmodal) tautologies. For example, from the valid propositional formula p ⊃ (p ∨ q) follows ⊢ A ⊃ (A ∨ B), for any PITL formulas A and B. The other three axioms involve PTL. These are Axioms F10 and F11 found in Table 3 and also ⊢ skip ⊃ finite. The three Axioms Taut, F10 and F11 together with the remaining PITL axioms and inference rules in Table 2 then suffice to derive a slight variant proposed by us in [Mos04] of the complete PTL axiom system D 0 X for ◯ and ✸ (and ✷) of Gabbay et al. [GPSS80], itself based on an earlier one DX of Pnueli [Pnu77]. We denote our D 0 X variant here as D 0 X ′ . It permits both finite and infinite time, whereas D 0 X assumes infinite time. We previously did an explicit deduction of D 0 X ′ in our completeness proof for PITL with just finite time as described in [Mos04]. However, for infinite time we need the additional axiom ⊢ skip ⊃ finite because Axiom P6 (unlike Axiom F6 in Table 3) does not suffice on its own to deduce ⊢ skip ≡ ◯ empty to re-express skip using ◯. Without ⊢ skip ⊃ finite, we can only deduce the PITL theorem ⊢ finite ⊃ (skip ≡ ◯ empty) from Axiom P6 together with the definition of ◯ in terms of skip and chop. In addition, from D 0 X ′ (once deduced), we can obtain ⊢ (◯ empty) ⊃ finite. These two implications combined with ⊢ skip ⊃ finite and simple propositional reasoning (involving Axiom Taut and modus ponens) yield our goal ⊢ skip ≡ ◯ empty.
10.2. Feasibility of Reduction from PITL to PTL. Some people have expressed serious doubts about our proof's technical feasibility owing to the significant gap in expressiveness between PITL and PTL. We therefore believe it is worthwhile to emphasise that in spite of this gap, any PITL formula can be represented by some PTL formula containing auxiliary variables. This is because conventional semantic reasoning about omega-regular languages and omega automata ensures that for any PITL formula A, there exist conventional nondeterministic omega automata (such as Büchi automata) which recognise A. For example, we present in [Mos00] a decidable version of quantified ITL which includes QPITL (defined earlier in Section 2) as a subset and then show how to encode formulas in Büchi automata. Various deterministic omega automata (e.g., with Muller, Rabin and Streett acceptance conditions) are also suitable for this. Such an automaton's accepting runs can be trivially encoded by some PTL formula X with auxiliary variables p 1 , . . . , p n representing the automaton's control state. Hence the PITL formula A and the QPTL formula ∃p 1 . . . p n . X are semantically equivalent, where ∃ is defined earlier in Section 2. Furthermore, the (quantifier-free) PITL implication X ⊃ A is valid and consequently any model of X can also serve as one for A. Indeed the technique of re-expressing formulas in omega-regular logics by means of nondeterministic and deterministic omega automata expressed in versions of PTL (subsequently enclosed in a simple sequence of existential quantifiers) is central to the completeness proofs for QPTL variants by Kesten and Pnueli [KP02] and French and Reynolds [FR03]. A related approach can be used to reduce decidability of PTL with the (full) until operator to PTL without until. This works in spite of the fact that PTL with until is strictly more expressive, as proved by Kamp [Kam68] (see also Kröger and Merz [KM08]).
We replace each until in a formula with an auxiliary variable which mimics its behaviour along the lines of the two axioms for until previously mentioned in §5.2. For example, when testing the satisfiability of the formula p ∧ (p until q) ∧ ¬◯(p until q), we transform it into the formula below with an extra auxiliary variable r:
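To illustrate why such an auxiliary variable can mimic until, here is a small Python sketch of our own (restricted to finite traces for simplicity; the function names are ours and this is an illustration, not the paper's transformation): the direct suffix semantics of p until q coincides everywhere with the value of a variable r computed from the backward unfolding r ≡ q ∨ (p ∧ ◯r), which is essentially the fixpoint behaviour the two until axioms capture.

```python
# Sketch (ours): an auxiliary variable computed by unfolding mimics "until"
# on finite traces. ps[i], qs[i] give the truth values of p, q at state i.
from itertools import product

def until_suffixes(ps, qs):
    # direct semantics: does (p until q) hold at each suffix position k?
    n = len(ps)
    return [any(qs[i] and all(ps[j] for j in range(k, i)) for i in range(k, n))
            for k in range(n)]

def aux_variable(ps, qs):
    # auxiliary variable r from the unfolding r_i = q_i or (p_i and r_{i+1});
    # at the final state the next-time part drops out, leaving r = q
    n = len(ps)
    r = [False] * n
    r[n - 1] = qs[n - 1]
    for i in range(n - 2, -1, -1):
        r[i] = qs[i] or (ps[i] and r[i + 1])
    return r

# exhaustive agreement on all finite traces of length up to 5
for n in range(1, 6):
    for bits in product([False, True], repeat=2 * n):
        ps, qs = list(bits[:n]), list(bits[n:])
        assert until_suffixes(ps, qs) == aux_variable(ps, qs)
```

Over infinite time an eventuality constraint on r is additionally needed to rule out the degenerate fixpoint where r stays true forever without q ever occurring; the finite-trace case above sidesteps this.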
10.3. Benefits of Restricted Chop-Stars in Chain Formulas. Lemma 9.4 states that any valid PITL k formula can be deduced as a right-theorem. Within the proof of this lemma, all chop-star formulas found in the PITL k formula K ′ m+1 only occur in chain formulas. Such chop-star formulas therefore have the very restricted form (skip ∧ T ) ⋆ for expressing the PITL-based version of until defined earlier in §5.2 for PTL u . The simplicity of these chop-star constructs greatly helps us to reduce K ′ m+1 to the semantically equivalent PTL u formula Y and show that their equivalence is a deducible theorem. Incidentally, in [Mos07] we prove that any PITL formula (skip ∧ T ) ⋆ can be expressed in PTL as ✷(more ⊃ T ) and make extensive use of this equivalence. In contrast, arbitrary chop-star formulas cannot necessarily be re-expressed as semantically equivalent PTL formulas.

10.4. Thomas' Theorem and the Size of Deductions. Section 6 uses Thomas' theorem to re-express a PITL formula A as a semantically equivalent PITL k formula K. The two known proofs of Thomas' theorem, by Thomas himself [Tho79] and by Choueka and Peleg [CP83], unfortunately do not ensure that K is in some sense natural and succinct or even obtainable in a computationally feasible way. Therefore our completeness proof does not guarantee simple deductions. The main problem concerns the difficulties in nontrivial transformations on the underlying omega automata representing PITL formulas. Other established completeness proofs for comparable omega-regular logics with nonelementary complexity such as QPTL [KP95, KP02, FR03] currently share a similar fate. However, our proof bypasses an explicit embedding of the intricate process of complementing nondeterministic omega automata.
10.5. Justification for Using ATAs in the Completeness Proof. Some readers will wonder why we need the ATAs introduced in §7.3 and do not just use the PTL-based representation of semi-automata and automata presented in §7.1 and §7.2. The main reason is that, as far as we currently know, this requires a more intricate inference rule than our PITL-based one ✷ f Aux. In particular, a PTL-based rule suitable for our purposes must permit the simultaneous introduction of multiple auxiliary propositional variables, analogous to the one French and Reynolds [FR03] were compelled to employ for QPTL without past time (see also [KM08]).
11. Existing Completeness Proofs for Omega-Regular Logics
We now compare our axiomatic completeness proof with related ones for other omegaregular logics. Here is a list of a number of such formalisms: [Eme90]. Like S1S and QPTL, PITL has nonelementary complexity (e.g., see our results in collaboration with J. Halpern in [Mos83a] (reproduced in [Mos04])). In contrast, ETL and νTL have only elementary complexity. 11.1. Omega-Regular Logics with Nonelementary Complexity. Let us consider axiomatic completeness for omega-regular logics which, like PITL, have nonelementary complexity. We later discuss some with elementary complexity in §11.2.
We are not the first to consider a version of quantifier-free PITL with infinite time. Paech [Pae89] in a workshop paper presents completeness proofs for Gentzen-style axiom systems for versions of a Regular Logic with branching-time and linear-time and both finite and infinite time (see also [Pae88]). The linear-time variant LRL can be regarded as PITL with the addition of a binary temporal operator unless. Paech's framework is presented in a rather different way from ours to accommodate both branching-time and linear-time models of time, with the overwhelming emphasis on the branching-time one. Perhaps more significantly, the chop-star operator A * in LRL is limited, like Kleene star, to finitely many iterations (we look at a closely related PITL subset, called by us PITL k , in §5.3). Due to a theorem of Thomas [Tho79] (which we discuss and use in §5.3 and Section 6), LRL has omega-regular expressiveness, although it is less succinct than full PITL. Paech's restricted chop-star does not support chop-omega's infinite iteration. Indeed, Thomas' theorem is not at all mentioned in the completeness proof and does not serve as a bridge in the way we apply it in Section 6. Paech's stimulating and valuable presentation is quite detailed, especially in the extended version [Pae88]. Nevertheless, in our opinion (based on many years of experience with doing proofs in ITL), its treatment of LRL needs some clarification, as the following points demonstrate: • The unwinding of chop-star does not take into account that for induction over time to work in PITL, individual iterations need to take at least two states. This contrasts with our Axioms P9 and P10 in Table 2 and an analogous one which Bowman and Thompson use in [BT03]. Kono's tableaux-based decision procedure for PITL [Kon95] likewise ensures that iterations have more than one state. • The proof system includes nonconventional rules requiring some temporal formulas to be in a form analogous to regular expressions. 
• The main proof concerns a branching-time semantics. In contrast, only a couple of sentences are devoted to extending the proof to a linear-time interval framework appropriate for LRL. • The completeness proof uses constructions involving deterministic automata for finite words. It also mentions Thomas' theorem, which ensures omega-regular expressiveness of LRL. Now the proof by Choueka and Peleg [CP83] of Thomas' theorem using standard deterministic omega automata quite clearly shows the link between LRL and these automata. However Paech does not discuss how the LRL completeness proof relates to techniques previously developed by McNaughton [McN66] and others for building deterministic omega automata from deterministic automata for finite words in order to recognise omega-regular languages. Some kind of explicitly described adaptation of such methods seems to us practically unavoidable. In contrast, our proof quite clearly benefits from this work, as we discuss in detail in §8. • Except for the LRL construct L 0 (the same as empty in PITL), no derived interval-oriented operators are defined (e.g., to examine prefix subintervals or to perform a test in a finite interval's final state). Moreover, it does not appear that the LRL proof system was ever used for anything. • One minor puzzling feature of the LRL axiom system is that in its stated form, the linear-time proof rules for Paech's unary next-time construct (which is actually the weak-next operator w mentioned by us in Table 3) ensure that every state has a successor state.
This clearly forces the linear-time variant to be limited to infinite state sequences. In practice, such a requirement is counterproductive for LRL, which permits finite time and in particular has a primitive finite-time construct L 1 that is identical to our own construct skip for two-state intervals. The LRL formula L * 1 is used in rules to force finite intervals. The LRL proof rules for the next-time construct which impose infinite time clash with rules containing the formula L * 1 and likewise with rules having L 0 to specify one-state intervals. However, the difficulty with this LRL operator and infinite intervals seems to be an easily correctable oversight. Unfortunately, no subsequent versions of Paech's completeness proof for LRL with more explanations and clarifications have been published. Indeed, the difficulties faced at the time by Paech and others such as Rosner and Pnueli [RP86] (discussed below) when attempting to develop complete axiomatisations of versions of ITL with infinite time were such that subsequent published work in this area did not appear until over ten years later. Incidentally, the manner of Paech's proof, based on Propositional Dynamic Logic (PDL) [FL79, HKT00] and the associated Fischer-Ladner closures, suggests that it could have connections with much later research by Henriksen and Thiagarajan [HT99] on axiomatising Dynamic Linear Time Temporal Logic, a formalism combining PTL and PDL which we shortly mention in §11.2. On the other hand, our own PITL completeness proof here and our earlier one for PITL with just finite time [Mos04] do not involve Fischer-Ladner closures.
Completeness proofs for logics such as S1S [Sie70], QPTL with past time [KP95, KP02] and without past time [FR03], and one by us for quantified ITL with finite domains [Mos00] use quantified formulas encoding omega automata and explicit deductions involving nontrivial techniques to complement them. As we already noted in Section 1, our earlier axiomatic completeness proof [Mos00] for quantified ITL with finite domains requires the use of quantifiers and does not work when formulas are limited to having just propositional variables. French and Reynolds' [FR03] axiom system for QPTL without past time contains a nontrivial inference rule for introducing a variable number of auxiliary variables. This inference rule is required by the automata-based completeness proof.
The axiomatic completeness proofs just mentioned for the logics with quantification and nonelementary complexity involve using quantified auxiliary variables to re-express a formula A as another semantically equivalent formula ∃p 1 . . . p n . X, where ∃ for QPITL and QPTL is defined earlier in Section 2. Here p 1 , . . . , p n are the auxiliary variables and X is a formula in a much simpler logical subset, such as some version of (quantifier-free) PTL. Axiomatic completeness for the subset is much easier to show than for the original logic. Completeness is then proved by the standard technique of demonstrating that any consistent formula A (i.e., one not deducibly false) in the full logic is also satisfiable. In particular, we deduce as a theorem the equivalence A ≡ ∃p 1 . . . p n . X. Now from this, the assumed logical consistency of A and simple propositional reasoning, we readily obtain consistency for ∃p 1 . . . p n . X. Standard reasoning about quantifiers then ensures X is consistent. Completeness for the logical subset yields a model for X which can also serve as one for A. Normally in such completeness proofs, the formula X encodes some kind of omega automaton such as a nondeterministic Büchi automaton. The details are not relevant for our purposes here. The deduction of the equivalence A ≡ ∃p 1 . . . p n . X in these proofs has always involved explicitly embedding nontrivial techniques for manipulating such omega automata.
In contrast to our approach, most of the established axiomatic completeness proofs for logics with nonelementary complexity need quantifiers. The one exception is Paech's Regular Logic, which does not have quantifiers and in linear time is like our PITL k , the subset of PITL without chop-omega defined earlier in §5.3. Our quantifier-free proof also benefits from the hierarchical application of some previously obtained semantic theorems and related techniques expressible as valid formulas in restricted versions of PITL (such as PITL with just finite time). This largely spares us from explicit, tricky reasoning about complementing omega automata. Once we have ensured axiomatic completeness for these versions of PITL, valid formulas in them can be immediately deduced as theorems. For example, we invoke (without proof) the theorem of Thomas at the end of [Tho79] to show that PITL k has the same expressiveness as full PITL. Our completeness proof then combines this result with completeness for PITL k to demonstrate that any PITL formula is deducibly equivalent to one in PITL k .
Our completeness proof for PITL with both finite and infinite time does not actually require a proof of the axiomatic completeness of a version of PTL with this time model because Axiom VPTL in Table 2 includes all substitution instances of valid PTL formulas. For our purposes, even axiomatic completeness for PTL u can be based on a reduction to PTL which invokes Axiom VPTL. However, as we noted in §10.1, some alternative, lower level axioms for the PITL axiom system can be used which would actually involve the reliance on a complete PTL axiom system. Our older axiom system for PITL with just finite time in Table 3 includes explicit axioms of this sort but of course can be readily modified to similarly use just a version of Axiom VPTL for finite time.
Even if we choose to use the alternative axioms and therefore explicitly rely on some provably complete PTL axiom system, the proofs are fairly easy to obtain via tableaux and other means (e.g., see Gabbay et al. [GPSS80], Lichtenstein and Pnueli [LP00], Kröger and Merz [KM08] and Moszkowski [Mos07]). Such methods often have associated practical decision procedures which in many cases are not so hard to implement. This contrasts with the explicit encoding in deductions of much more difficult automata-theoretic and combinatorial techniques to complement omega-regular languages in completeness proofs for other omega-regular logics with nonelementary complexity such as S1S [Sie70] and two versions of QPTL [KP02,FR03]. Furthermore, the completeness proofs for QPTL in any case also rely on reductions to some form of axiomatic completeness for PTL (which, as in our presentation, can be used without reproving it). Those QPTL axiom systems could alternatively be modified to include a suitable version of our Axiom VPTL. So even if we add a few extra axioms for PTL, we still feel justified in regarding our approach, which is partly based on invoking Thomas' theorem without having to encode a proof of it in deductions, as indeed being much more implicit than previous completeness proofs for omega-regular logics with nonelementary complexity such as S1S and QPTL.
Remark 11.1. As noted above, unlike previous automata-based approaches, ours avoids explicitly defining omega automata and embedding various associated explicit deductions concerning complicated proofs of some known results about them. Nevertheless, omega automata can be used in a simple semantic argument ensuring that for any satisfiable PITL formula, there exists some satisfiable PTL formula which implies it. This is because any omega-regular language can be recognised by such an automaton which itself is encodable in a QPTL formula of the form ∃p 1 . . . p n . X ′ , for some PTL formula X ′ . So for any PITL formula, there is some semantically equivalent QPTL formula of this kind and its quantifierfree part therefore implies the PITL formula. Clearly, the PITL formula is satisfiable iff the PTL subformula is.
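The remark's semantic argument can be laid out as a short chain; X′ and p 1 , . . . , p n are as described above, and each step uses only the standard facts stated in the remark:

```latex
\begin{itemize}
\item The set of models of a PITL formula $A$ is an omega-regular language,
      so some omega automaton (e.g., B\"uchi) recognises it.
\item That automaton is encodable as a QPTL formula
      $\exists p_1 \ldots p_n.\, X'$ with $X'$ in PTL, and so
      $\models A \equiv \exists p_1 \ldots p_n.\, X'$.
\item Any model of $X'$ is a model of $\exists p_1 \ldots p_n.\, X'$,
      hence of $A$; that is, $\models X' \supset A$.
\item Therefore $A$ is satisfiable iff $X'$ is: a model of $A$ satisfies
      $\exists p_1 \ldots p_n.\, X'$, so some reinterpretation of
      $p_1, \ldots, p_n$ in it satisfies $X'$.
\end{itemize}
```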
Rosner and Pnueli's version of PITL [RP86] with infinite time and without chop-star is not an omega-regular logic since it has the (more limited) expressiveness of conventional PTL. Nevertheless, it has, in common with S1S, QPTL and PITL, nonelementary computational complexity. Rosner and Pnueli's complete axiom system includes a complicated inference rule which requires an explicit construction. Kaivola's subsequent, less complicated completeness proof for just νTL [Kai95] uses a partially semantic approach which has some similar aims to ours for PITL, but is nevertheless technically quite different. It involves a clever normal form and tableaux. Every formula is shown to be deducibly equivalent to one in the normal form. We believe that our proof, although longer, is in certain respects more natural and straightforward than even Kaivola's at the deductive level.
Dynamic Linear Time Temporal Logic combines PTL and Propositional Dynamic Logic (PDL) [FL79,HKT00] in a linear-time framework with infinite time. The axiom system for this formalism has axioms concerning a variety of transitions [HT99]. The completeness proof is an adaptation of an earlier one for PDL by Kozen and Parikh [KP81]. It uses consistent sets of formulas.
Future Work
Our plans include using the axiom system as a hierarchical basis for completeness of PITL variants with weak chop and chop-star taken as primitives as well as quantification. Further possibilities include multiple time granularities (see our work [Mos95] for finite time), a temporal Hoare logic and also logics such as QPTL (by encoding within QPTL a complete axiom system for quantified PITL instead of using omega automata). The last would show that interval logics can be applied to point-based ones.
In [Mos04], we used semantic techniques to prove axiomatic completeness for PITL with finite time by a simple reduction to an equally expressive subset called by us Fusion Logic and closely related to Propositional Dynamic Logic (PDL) [FL79,HKT00]. Fusion Logic, like some variants of PDL, uses discrete linear sequences of states instead of binary relations as its semantic basis. Some of the semantic techniques we presented in Section 6 for reducing PITL to its expressively equivalent subset PITL k by eliminating instances of chop-omega could shorten the completeness proof for Fusion Logic in [Mos04], since that proof contains a similar elimination of chop-star by reduction down to PTL. Furthermore, our completeness proof for PITL with just finite time in [Mos04] uses a separate complete axiom system for Fusion Logic. This now seems unnecessary for the overall completeness proof for PITL with finite time. Instead, the PITL axiom system should also suffice for Fusion Logic in view of our positive experiences with the current much more streamlined approach for PITL with infinite time.
The PITL operators ✸ f and ✷ f for finite prefix subintervals play a major role in our new completeness proof and appear worthy of more consideration. For example, we have recently studied techniques for reasoning about them with time reversal [Mos11]. This is a natural mathematical way to exploit the symmetry of time in finite intervals. We can show the validity of suitable finite-time formulas concerning ✷ f and prefix subintervals from the validity of analogous ones for ✷ and suffix subintervals which themselves might even be in conventional PTL with the operator until . The time symmetry considered here only applies to finite intervals. However, a valid finite-time formula obtained in this way can sometimes then be generalised to infinite intervals. One potential use of time reversal is to provide an algorithmic reduction of suitable higher-level PITL formulas to lower-level PTL ones for model checking. It also helps extend compositional techniques we described in [Mos94,Mos96,Mos98].
Conclusions
We have presented a simple axiom system for PITL with infinite time and proved completeness using a semantic framework and reductions to finite time and PTL. Our axiom system is demonstrably simpler than the one which Paech presents for LRL, even though we support omega-iteration and LRL does not. Moreover, the explicitly stated deductions in our proof can be regarded as being technically less complex than others for quantified omega-regular logics with nonelementary complexity such as S1S and QPTL. This is because known completeness proofs for those logics involve an explicit deductive embedding of proofs of theorems about complementing omega-regular languages and require reasoning about nontrivial algorithms (typically utilising quantifier-based encodings of omega automata). Such completeness proofs therefore do not merely use one such theorem but incorporate significant aspects of its complicated proof, in effect reproving it. In contrast, we simply invoke Thomas' theorem without referring to how it is proved. In our opinion, this conforms much more to the conventional mathematical practice of using previously established theorems, even hard-to-prove ones, as modular "black boxes". However, we appreciate that some readers will question the significance of this technical point.
The overall results we have described in our new completeness proof seem to complement our recent analysis of PTL using PITL [Mos07]. One surprise during the development of our completeness proof concerned how much explicit deductions could be minimised by application of valid properties proved with semi-automata and automata on finite words. Another unexpected benefit arose from the insights into time reversal.
In principle, ⊃-chain and ≡-chain are subsumed by Prop but are used here to make the reasoning more explicit.
• PITLF: Our assumption of axiomatic completeness for PITL with just finite time permits any valid implication of the form finite ⊃ A.
A.1. Some Basic Properties of Chop.
We now consider deducing various simple properties of chop and the associated operators ✸ f , ✷ f , ✸ and ✷ which have a wide range of uses.
The following derived variant of Inference Rule ✷ f FGen omits the subformula finite: The derived inference rule DR4 can also be referred to as ✷ f Gen (analogous to the inference rule ✷Gen).
DR5
⊢ The proof for ⊃ is immediate from axiom P3. Here is the proof for ⊂:

A.2. Some Properties of ✷ f involving the Modal System K and Axiom D.
The two pairs of operators ✷ and ✸ and ✷ f and ✸ f obey various standard properties of modal logics. Axiom VPTL helps streamline reasoning involving ✷ and ✸. The situation with ✷ f and ✸ f is quite different since they lack a comparable axiom. Therefore, it is especially beneficial to review some conventional modal systems which assist in organising various useful deductions involving ✷ f and ✸ f . Table 6 summarises some relevant modal systems, various associated axioms and inference rules. Chellas [Che80] and Hughes and Cresswell [HC96] give more details.
Within PITL, as in PTL, the operator ✷ can be regarded as the conventional unary necessity modality L and the operator ✸ as the dual possibility operator M . The two operators together fulfil the requirements of the modal system S4. We do not need to explicitly prove versions of the S4 axioms in Table 6 for ✷ and ✸. Rather, any PITL formula which is a substitution instance of a valid S4 formula involving ✷ and ✸ can be readily deduced using the PITL proof system's Axiom VPTL. Similarly, inference rules
based on S4 can be obtained with Axiom VPTL, Inference Rule ✷Gen (which corresponds to the inference rule N of S4 ) and modus ponens. Moreover, the PITL proof system's Axiom VPTL permits using any PITL formula which is a substitution instance of some valid PTL formula which can also contain the PTL operator ◯. In view of all this, we do not give much further consideration to aspects of S4 with ✷ and ✸.

Table 6: Some standard modal systems (columns: system; axiom or inference rule; axiom or rule name — for example, KD4 is K plus 4 and D, where D is ⊢ L A ⊃ M A).
In contrast to ✷, the PITL operator ✷ f does not have a comprehensive axiom analogous to VPTL. Therefore, we need to explicitly prove in the PITL axiom system various modal properties of ✷ f and its dual ✸ f . If only finite time is allowed, then ✷ f and ✸ f act as an S4 system. However, ✷ f with infinite time permitted does not fulfil the requirements of S4, or even those of the weaker modal system T, because Axiom T fails. Instead, ✷ f with infinite time fulfils the requirements of the modal system KD4 which is strictly weaker than S4.
Here is a list of KD4's axioms and inference rules and related PITL proofs for ✷ f : If only finite time is allowed, then the implication D does not need to be regarded as an explicit axiom since it can be inferred from any proof system for S4.
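In standard modal notation, with L read as ✷ f and M as ✸ f , the KD4 principles are the following (a standard formulation; the PITL theorems establishing them for ✷ f appear in this appendix):

```latex
\begin{align*}
\text{Axiom K:}\quad & \vdash\; \Box_f(A \supset B) \;\supset\; (\Box_f A \supset \Box_f B)\\
\text{Axiom D:}\quad & \vdash\; \Box_f A \;\supset\; \Diamond_f A\\
\text{Axiom 4:}\quad & \vdash\; \Box_f A \;\supset\; \Box_f \Box_f A\\
\text{Rule N:}\quad  & \text{from } \vdash A \text{ infer } \vdash \Box_f A
\end{align*}
```

With infinite time permitted, the reflexivity axiom T (✷ f A ⊃ A) fails, since an infinite interval is not among its own finite prefix subintervals; this is exactly why ✷ f satisfies KD4 rather than S4.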
Remark A.1. It is also worth noting that the related operators ✷ i and ✸ i (defined using weak chop in Table 1 in Section 2) obey the modal system S4 even when infinite time is permitted. However, we prefer to work with ✷ f and ✸ f since the use of strong chop simplifies the overall PITL completeness proof.
Conventional modal logics usually take L, not M , to be primitive. When we deduce standard modal properties for ✷ f and ✸ f in our PITL axiom system, we let M , which corresponds to ✸ f , be primitive and define L to be M 's dual (i.e., L A def ≡ ¬M ¬A). This M -based approach goes well with the PITL axioms for chop. Chellas [Che80] discusses some alternative axiomatisations of modal systems with M as the primitive although none correspond directly to ours. For the system K, we can deduce implication (A.1) below for ✷ f and ✸ f (see Theorem T23 later on) and then obtain from it, together with some other reasoning, the more standard axiom K just presented which only mentions L: (A.1)

The operators ✷ and ✷ f together yield a multi-modal logic with two necessity constructs L and L ′ which are commutative. This corresponds to our Theorem T55 given later on.
Below are various theorems and derived inference rules about ✷ f and ✸ f for obtaining the axioms M-def (Theorem T22) and K (Theorem T25) found in the modal system K. The associated inference rule N was already proved above as Derived Inference Rule DR4. We also prove the modal axiom D (Theorem T33).
In the next proof's final step, recall that ≡-chain indicates a chain of equivalences.

A.3. Some Properties of Chop, ✸ f and ✷ f with State Formulas.

T36 The following lets us move a state formula into the left side of chop: 2,3,⊃-chain We can easily combine this with theorem T39 to deduce the equivalence below: Below is a useful corollary of T41 used in decomposing the left side of chop:

A.4. Some Properties of ✷ f involving the Modal System K4.
We now consider how to establish for the PITL operator ✷ f the axiom "4" (PITL Theorem T47) found in the modal systems K4 and S4.
We make use of the following analogue of Theorem T44 for ✸ and ✷:

A.7. Some Properties of Chop-Star.
We now consider some theorems and derived rules concerning chop-star. We also present some derived inference rules which come in useful when completeness for PITL with finite time is assumed (see Theorem 2.2). Recall that any valid implication of the form finite ⊃ A is allowed and that we designate such a step by using PITLF. PITL Theorem T61 below illustrates this technique.
DR59
⊢ The next theorem's proof involves the application of the previous derived inference rule together with completeness for PITL with just finite time. An alternative proof of Theorem T61 can be given without PITLF by first deducing the dual equivalence ✸ f ✸(empty ∧ w) ≡ ✸ w, for any state formula w.
A.9. Some Properties of Skip, Next And Until.

Recall from §5.1 that NL 1 formulas are exactly those PTL formulas in which the only temporal operators are unnested ◯s (e.g., ◯p ∨ ¬p but not ◯◯p ∨ ¬p). The next theorem holds for any NL 1 formula T :

T62 ⊢ ✸ f (more ∧ T ) ≡ more ∧ T

Proof. We use Axiom VPTL to re-express more ∧ T as a logically equivalent disjunction 1≤i≤n (w i ∧ ◯w ′ i ) for some natural number n ≥ 1 and n pairs of state formulas w i and w ′ i . Now by Theorem T50 any conjunction w ∧ ◯w ′ is deducibly equivalent to ✸ f (w ∧ ◯w ′ ). Therefore the disjunction in (A.2) can be re-expressed accordingly. Then by n − 1 applications of Theorem T34 and some simple propositional reasoning, the righthand operand of this equivalence is itself deducibly equivalent to ✸ f 1≤i≤n (w i ∧ ◯w ′ i ). The chain of the three equivalences (A.2)-(A.4) yields the following: We then apply Derived Rule DR8 to the first equivalence (A.2): The last two equivalences with simple propositional reasoning yield our goal T62.
Pharmacological correction of technological stress in bulls and assessment of the influence of stress factors on semen quality
The current conditions of intensification in animal husbandry dictate a significant increase in the physiological and functional load on the body of productive animals, resulting in a failure of adaptive capacity, which manifests itself in impaired reproductive function and the development of pathological states. Research by domestic and foreign scientists has shown that stress plays a leading role in the etiology and pathogenesis of diseases leading to a reduction in animal reproduction [3, 4, 9]. Although the topic of stress is often covered in the scientific literature, some of its features are not fully explored. This is particularly true for bull producers, whose genetic material has a direct impact on livestock productivity and livestock production [1, 3, 4, 11]. Therefore, the study of the influence of stress factors on the reproductive capacity of males, and its pharmacological correction, is relevant. The aim of our research was to assess the degree of influence of technological stress factors on the reproductive function of bull producers and to develop a pharmacological correction scheme to prevent stress effects on sperm quality. In relation to this aim, the following objectives were proposed: 1. To study the reaction of the bull producers to the effects of technological stress; 2. To determine the influence of the drugs Amber biostimulator and Azoxyvet on the quality of bull semen.
Introduction
No progress can be made in livestock production without the proper organization of animal reproduction [4]. Reproduction is an important biological process and a major component of profitable beef husbandry [11]. At present, «BMK» suffers annual losses due to the loss of calves, infertility and premature culling of animals, a low percentage of successful insemination of replacement heifers, and the low quality of semen of bull producers. Technological stress is the sum of the stress factors accompanying the production technology: hypodynamia, tissue hypoxia, high stocking density, frequent movement, etc. [9, 11]. The greatest disturbances are observed when the sum of several stress factors acts on the animal. The effects of these stress factors result in metabolic disorders, reduced reproductive capacity, reduced productivity and a shorter period of economic exploitation, resulting in economic losses of production [10]. The search for a solution to the problem of reproduction is ongoing. At the same time, the problem of determining the mechanisms underlying the effectiveness of insemination is most pressing. In our view, the pharmacological correction of the technological stresses arising in the breeding of cattle of the Aberdeen-Angus breed is of key importance [5]. One of the priority areas in the prevention of technological stress is the creation or search for formulations with pronounced antioxidant effects for use in integrated therapy [3, 6, 12]. Research on the impact of vitamin supplements, adaptogens, immunomodulators and succinic acid preparations used preventively for stress correction has been the subject of work by a number of researchers [3, 4, 5, 6, 8, 12].
Materials and methods
The work was carried out in 2020 at the Krasny Yar farm of «BMK» in the period from April to June. The research focused on Aberdeen-Angus bulls aged 22-24 months weighing 650-700 kg. Three groups of bulls were formed with 50 head each. The control group underwent the farm's standard semen collection and evaluation procedure without any pharmacological correction of technological stress. Experimental group 1 was injected with Azoxyvet intramuscularly at a dose of 0.1 mg/kg twice at a 14-day interval, and experimental group 2 was injected with Amber biostimulator intramuscularly at a dose of 10 mL twice at a 14-day interval, with the second injection given seven days before the planned sperm quality assessment. At the time of the studies, the bulls of all groups were kept under the same feeding and housing conditions. The animals were housed outdoors, in special pens enclosed by pipes, fed twice daily with complete mixed rations (silage, hay, straw, crushed corn, mineral premix), with water supplied from wells. Fresh semen samples were taken during the planned testing of bulls for product quality by means of transrectal electrostimulation. The quality of the semen was evaluated by microscopy (MICROMED microscope): the general appearance of the ejaculate, motility, morphology and sperm concentration were evaluated, and the circumference of the scrotum and the state of the prostate were assessed visually. The following methods were used to examine the samples:
- quality assessment of semen by appearance;
- microscopic examination of semen for sperm activity, in %;
- determination of the sperm concentration in a Goryaev chamber;
- morphology, determined as the number of sperm with straight forward motion and the number of sperm with pathological movement, in %.
The degree of influence of stress factors on the clinical status of the bulls was assessed according to the following criteria: temperature, pulse, respiration, appetite, motor activity, salivation, number of rumen contractions and number of chewing movements. The influence of technological stress on metabolic processes in the animals was evaluated on the basis of the total protein, urea and amine nitrogen content; the testosterone and cholesterol content was also estimated, at the FGB Central Laboratory for Research and Development of the Tula Test Laboratory. Blood was taken from 15 bulls, 5 in each group, from the tail vein twice: before exposure to technological stress, and at the moment of exposure to technological stress, during semen collection.
Results and discussions
Following the impact of technological stress factors, changes in the clinical status of the bull producers were observed. The body temperature of the control group increased by 3.1% compared to the background, heart rate decreased by 7.4%, respiratory rate increased by 46.1%, appetite decreased in 60 of 150 bulls, 35 showed uncertain movements, 40 showed aggression, salivation was increased in 88 individuals, the number of chewing movements decreased by 7.8%, and the number of rumen contractions after feeding decreased by 28.5%. Based on these data, it can be concluded that the drugs Azoxyvet and Amber biostimulator influence the organism of the bull producers before semen collection. The body temperature of experimental group 1 was 2.0% lower than that of the control group, and that of experimental group 2 was 1.5% lower; heart rate decreased by 1.3% and 2.7%; respiratory rate decreased by 21.0% and 26.0%; appetite was decreased in 30 and 10 bulls; the number of chewing movements increased by 1.07% and 3.1%; and the number of rumen contractions after feeding increased by 33.0% and 20.0%. Based on the data presented in Table 1, the sperm motility of experimental group 1 was 13.0% higher than the background, and that of experimental group 2 was 13.0% higher. In the assessment of morphology, 7 bulls with pathological sperm (with defects of the neck and head) were identified. No general pattern was identified between the groups treated with the drugs and the group without pharmacological correction of technological stress. There was no significant effect on the assessment of the prostate or the circumference of the scrotum. It can be concluded that the average sperm concentration in the ejaculate of the Aberdeen-Angus bull producers of experimental group 1, treated with Azoxyvet, was 0.89 billion/mL, and of experimental group 2, treated with Amber biostimulator, 0.88 billion/mL. This was 13.5% and 12.5% higher than in the control (0.77 billion/mL).
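The reported percentage increases in sperm concentration can be checked arithmetically; the 13.5% and 12.5% figures in the text are reproduced when the treated-group value, rather than the control value, is used as the denominator. A small sketch, with the values taken from the text:

```python
def pct_increase(treated, control):
    """Relative increase with the treated value as denominator,
    matching the convention behind the figures reported in the text."""
    return round((treated - control) / treated * 100, 1)

control = 0.77   # billion/mL, control group
group1 = 0.89    # billion/mL, Azoxyvet
group2 = 0.88    # billion/mL, Amber biostimulator

print(pct_increase(group1, control))  # -> 13.5
print(pct_increase(group2, control))  # -> 12.5
```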
Technological stress did not significantly affect the total protein content in the blood, with only a slight decrease: in the control group by 5.6%, in experimental group 1 by 4.1%, and in experimental group 2 by 2.1%. The increased glucocorticoid function of the adrenal cortex, acting on protein metabolism as a result of the technological stress factors, contributed to the increase of urea and amine nitrogen in the blood of the experimental animals: the concentration of urea increased by 42.9% in the control group, by 24.5% in experimental group 1, and by 28.2% in experimental group 2. The concentration of amine nitrogen in the animals changed similarly to that of urea: in the control group it increased by 55.5%, in experimental group 1 by 49.1%, and in experimental group 2 by 48.8%.
Conclusion
Based on the data obtained, it can be noted that the use of Azoxyvet and Amber biostimulator affects the quality of the sperm produced by bull producers:
- the sperm motility of experimental group 1 was 13% higher than the background, and the motility of experimental group 2 was 13.0% higher;
- the sperm concentration in the ejaculate of bull producers increased by 13.5% in experimental group 1 and by 12.5% in experimental group 2 compared to control (0.77 billion/mL).
The use of these drugs reduces the adverse effect of stress factors on the clinical status of bull producers and on protein metabolism: the body temperature of experimental group 1 was 2.0% lower than that of the control, and that of experimental group 2 was 1.5% lower; heart rate was lower by 1.3% and 2.7%; respiratory rate by 21.0% and 26%; appetite was reduced in only 30 and 10 bulls; the number of chewing movements increased by 1.7% and 3.1%; and the number of rumen contractions after feeding increased by 33.0% and 20.0%.
The concentration of urea in experimental group 1 was 18.4% lower than in the control group, and in experimental group 2 it was 14.7% lower. The amine nitrogen content in the blood of the animals in experimental group 1 was 6.4% lower than the background, and in experimental group 2 it was 6.7% lower.
The serum testosterone content in experimental group 1 was 16.8% higher than in the control group, and in experimental group 2 it was 12.9% higher; the cholesterol content was 19.4% lower in experimental group 1 and 16.9% lower in experimental group 2.
Thus, the preparations «Azoxyvet» and «Amber biostimulator» can be recommended for the pharmacological correction of technological stress in bull producers.
Ex vivo infection of murine precision-cut lung tissue slices with Mycobacterium abscessus: a model to study antimycobacterial agents
Multidrug-resistant infections due to Mycobacterium abscessus often require complex and prolonged regimens for treatment. Here, we report the evaluation of a new ex vivo antimicrobial susceptibility testing model using organotypic cultures of murine precision-cut lung slices, an experimental model in which metabolic activity, and all the usual cell types of the organ are found while the tissue architecture and the interactions between the different cells are maintained. Precision cut lung slices (PCLS) were prepared from the lungs of wild type BALB/c mice using the Krumdieck® tissue slicer. Lung tissue slices were ex vivo infected with the virulent M. abscessus strain L948. Then, we tested the antimicrobial activity of two drugs: imipenem (4, 16 and 64 μg/mL) and tigecycline (0.25, 1 and 4 μg/mL), at 12, 24 and 48 h. Afterwards, CFUs were determined plating on blood agar to measure the surviving intracellular bacteria. The viability of PCLS was assessed by Alamar Blue assay and corroborated using histopathological analysis. PCLS were successfully infected with a virulent strain of M. abscessus as demonstrated by CFUs and detailed histopathological analysis. The time-course infection, including tissue damage, parallels in vivo findings reported in genetically modified murine models for M. abscessus infection. Tigecycline showed a bactericidal effect at 48 h that achieved a reduction of > 4log10 CFU/mL against the intracellular mycobacteria, while imipenem showed a bacteriostatic effect. The use of this new organotypic ex vivo model provides the opportunity to test new drugs against M. abscessus, decreasing the use of costly and tedious animal models.
Background
Mycobacterium abscessus is an important emerging pathogen responsible for a wide spectrum of diseases, including chronic pulmonary disease, and skin and soft tissue infections. M. abscessus is a nontuberculous mycobacteria (NTM) found in soil and water, including municipal and household water supply systems. This species is one of the most resistant organisms to chemotherapeutic agents [1] and is therefore often referred to as the "incurable nightmare" [2]. The treatment of M. abscessus infections usually consists of a combination of a macrolide plus parenteral antimicrobials, which can be an aminoglycoside, cefoxitin, imipenem or tigecycline [3,4]. The cure rate achieved among patients with an M. abscessus pulmonary infection is typically between 25 and 88% [5][6][7], and therapy is usually given for as long as 18-24 months, with a minimum combination of three drugs [5,8]. In addition, these therapeutic schemes have a high cost; it has been estimated that a total of 1.4 billion dollars was spent on NTM-pulmonary disease in the USA in 2014 [9]. Therefore, there is an urgent need to develop safe and more effective drugs with anti-NTM activity. Currently, there are potential therapeutic agents in research and development for the treatment of NTM pulmonary disease, including M. abscessus infection. However, in contrast to the tuberculosis drug pipeline, where > 35 chemical entities are in the discovery stage, the NTM drug pipeline is nearly empty [10].
In addition, to develop more effective regimens against NTM diseases, it is necessary to implement models to help test novel drugs or compounds with potential antibiotic effects. The efficacy of drugs against M. abscessus, as well as other mycobacteria, is traditionally studied using in vitro and in vivo models [11][12][13]; however, in vitro studies cannot fully represent the complexity of the lung architecture and its impact on host-pathogen interactions, while animal models have their own limitations [14,15]. For example, animal experiments are often poorly designed and fail to provide the proper foundation for subsequent human studies [16], and they have issues related to reproducibility and translation into preclinical studies [17].
The necessity of experimental models that provide a more accurate representation of the in vivo 3D structure of the lung than cell lines, grown as monolayers, has increased the interest in ex vivo tissue cultures [18]. With this approach, tissue explants have been successfully infected with Mycobacterium tuberculosis, M. abscessus, and Mycobacterium avium [19]. Similarly, we have reported a M. tuberculosis infection model using precision-cut lung slices (PCLS) [20]. PCLS are an ex vivo system that reflects the 3D tissue architecture, cellular composition, matrix complexity, metabolic function and immune response of the lung [21,22]. PCLS have also been used as an infection model to study mycoplasma [23], viruses [24,25] and bacteria [26][27][28]. The characteristics of this ex vivo lung model system could offer some advantages when testing compounds directed against several pathogens of the respiratory tract.
In this work, we describe the evaluation of an infection model with a virulent strain of M. abscessus using murine precision-cut lung tissue slices. Once infection was established, we evaluated the antimicrobial activity of tigecycline and imipenem against the infected lung slices. This model will provide valuable information for the study of M. abscessus pathogenesis and in the search for novel drugs against mycobacteria.
Bacterial strains
M. abscessus virulent strain L948 (ATCC 19977) was grown in Middlebrook 7H9 broth and stored in vials at − 70 °C. Bacterial vials were thawed, and the colony forming units/mL (CFU/mL) were determined by serial dilution on blood agar.
Minimal inhibitory concentration
Tigecycline and imipenem stock solutions were prepared at a concentration of 1 mg/mL. The MIC value for each drug against M. abscessus L 948 (virulent strain) was determined as recommended by the Clinical and Laboratory Standards Institute (CLSI) document M24-A2 using a broth microdilution method [29]. The final drug concentration range was 0.25 to 64 μg/mL. The MIC values were determined after 72 h of incubation at 30 °C. Quality control testing was performed using Staphylococcus aureus ATCC 29213.
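The stated 0.25 to 64 μg/mL range corresponds to a two-fold dilution series, and the MIC is conventionally read as the lowest concentration with no visible growth. A minimal sketch of both steps; the growth readings here are hypothetical, for illustration only:

```python
def twofold_series(high, low):
    """Two-fold dilution series from high down to low, inclusive."""
    series = []
    c = high
    while c >= low:
        series.append(c)
        c /= 2
    return series

concs = twofold_series(64.0, 0.25)  # [64.0, 32.0, 16.0, ..., 0.25]

# Hypothetical growth readings after 72 h (True = visible growth),
# indexed in the same order as concs (highest concentration first).
growth = [False, False, False, False, False, True, True, True, True]

def mic(concs, growth):
    """Lowest concentration with no visible growth, or None."""
    inhibited = [c for c, g in zip(concs, growth) if not g]
    return min(inhibited) if inhibited else None

print(len(concs))          # -> 9 concentrations
print(mic(concs, growth))  # -> 4.0
```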
Precision-cut lung tissue slices preparation
The PCLS were prepared from 8 to 10-week-old male BALB/c mice (Harlan Laboratories SA de CV, México). The mice were euthanized with an overdose of sodium pentobarbital (6 mg/100 g) following institutional and international guidelines for the humanitarian care of animals used in experimental work. Afterwards, the pleural cavity was exposed under aseptic conditions, and the trachea was cannulated to infiltrate the lungs with 0.7% lowgelling temperature agarose in basal DMEM/F12 medium at 37 °C. The lungs were allowed to cool on ice to obtain a firm consistency and were then excised and immersed in sterile Krebs-Henseleit (KB) buffer (pH 7.4 at 4 °C). Cylindrical lung tissue cores of 5 mm diameter were obtained; from these, 350-400 µm thick tissue slices were prepared using a Krumdieck ® tissue slicer (Munford, AL, USA), with a constant flow of oxygenated KB buffer (4 °C, 95:5% O 2 :CO 2 ). The lung slices were placed in 24-well microplates (one per well) with 1 mL per well of DMEM/ F12 medium. The plates were pre-incubated for 4 h at 37 °C, 5% CO 2 , with a slow agitation at ~ 25 rpm, and the medium was changed four times every 30 min to remove the agarose. Afterwards, the basal viability of the lung tissue slices was determined by Alamar Blue ™ assay [30]. Fluorescence (at 530 nm excitation/590 nm emission wavelengths) was determined in the FLx800 Multi-detection Microplate Reader (Biotek Instruments, Winooski, VT, USA).
Infection of the PCLS and intracellular activity of the antibacterial drugs
After removing the agarose from the tissue, 250 μL of DMEM/F12 complete medium was added to the PCLS in the 24-well microplates, and the slices were inoculated with M. abscessus ATCC 19977 (1.5 × 10⁷ CFU in total, per slice). One group of slices was processed immediately for histopathological analysis. The remaining slices were incubated at 37 °C with 5% CO2 for 1 h without agitation. Then, 1 mL of complete DMEM/F12 was added, followed by incubation for 1 h. After removing the medium, the slices were washed twice with 500 μL of PBS containing amikacin (200 μg/mL) to eliminate any extracellular M. abscessus. The PCLS were washed again, and the antimicrobial compounds diluted in complete DMEM/F12 medium were added in triplicate as follows: imipenem at 4, 16 and 64 μg/mL and tigecycline at 0.25, 1 and 4 μg/mL, followed by incubation for 12, 24 and 48 h. The CFUs were determined by transferring the PCLS to a microcentrifuge tube containing 1 mL of distilled water and sterile glass beads and washing twice. The intracellular bacteria were released by macerating the tissue with a sterile scalpel until it had disintegrated, followed by vortexing with 1 mL of PBS-Tween 20 solution for 5 min; CFUs were determined by plating on blood agar. The experiments were performed in triplicate, and the data were expressed as log10 CFU. In all cases, an antibiotic-free control group was prepared for each corresponding time point.
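The CFU read-out above is simple arithmetic: colonies counted on a plate are scaled by the dilution factor and the plated volume, replicates are averaged, and the result is expressed as log10. A minimal sketch (the counts and dilution in the example are invented for illustration):

```python
import math

def log10_cfu_per_ml(colony_counts, dilution_factor, plated_volume_ml):
    """Mean log10 CFU/mL from replicate plate counts.

    dilution_factor is the reciprocal of the dilution plated
    (e.g. 1e4 for a 10^-4 dilution).
    """
    cfu = [n * dilution_factor / plated_volume_ml for n in colony_counts]
    return math.log10(sum(cfu) / len(cfu))
```

For example, triplicate counts of 100 colonies from a 10^-4 dilution, of which 0.1 mL was plated, correspond to 10^7 CFU/mL, i.e. a log10 value of 7.0.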
PCLS histopathological analysis
After the incubation time, the infected and control lung tissue slices were fixed in 10% neutral formalin for 24 h at room temperature and then embedded in paraffin using conventional histological techniques. Sections of 5 µm thickness were obtained from the embedded tissues using a microtome (American Optical, Buffalo, NY, USA), mounted on glass slides, and stained with hematoxylin and eosin (H&E) or Ziehl-Neelsen (ZN) dyes. The Oil red-O stain kit KTORO (StatLab, McKinney, TX, USA) was used to perform the classic Oil red-O lipid stain to confirm the presence of foamy macrophages. Frozen infected slices were carefully embedded in Tissue-Tek® on the mold of the cryostat. Sections 10-12 μm thick were prepared using a Leica CM1850 cryostat (Buffalo Grove, IL, USA), placed on glass slides, and left to dry for 30 min at room temperature; the staining process was done according to the manufacturer's instructions. Briefly, slides with the frozen sections were placed in 10% neutral formalin for 2-5 min and then rinsed in tap water. Slides were subsequently placed in propylene glycol for 2 min, stained for 6 min with Oil red-O working solution at 60 °C, placed for 1 min in 85% propylene glycol, rinsed twice with distilled water, stained for 1 min with modified Mayer's hematoxylin, rinsed twice with tap water followed by two changes of distilled water, and mounted with aqueous glycerol-jelly medium. All the stained sections were observed using a Zeiss Axiostar Plus brightfield microscope (Jena, Germany); photographs were obtained with a 5.0 Moticam camera (Richmond, BC, Canada).
Statistical analysis
For the descriptive analysis, all values are presented as means and standard deviations (± SD). The data were compared using Student's t-test with Bonferroni's multiple-comparison post hoc test, considering P < 0.05 as significant.
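The comparison scheme above can be sketched from first principles: a pooled-variance Student's t statistic for each pairwise comparison, with the significance threshold divided by the number of comparisons (Bonferroni). This is a generic illustration, not the authors' analysis code.

```python
import math

def student_t(a, b):
    """Two-sample pooled-variance Student's t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def bonferroni_alpha(alpha, n_comparisons):
    """Per-comparison significance threshold after Bonferroni correction."""
    return alpha / n_comparisons
```

For instance, with three treatment-vs-control comparisons, each test would be judged against alpha = 0.05/3 ≈ 0.0167 rather than 0.05.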
Results
PCLS of adequate quality were obtained to perform all the experiments. Lung slices of uniform thickness and diameter and with macroscopic integrity were selected.
Histopathologic analysis
The morphology of freshly obtained lung tissue showed no differences from that of uninfected PCLS incubated for 5 days, based on the histologic structure. Representative images of the cultured PCLS show their characteristic structural elements: typical bronchi, terminal and respiratory bronchioles, alveolar ducts, alveolar sacs, alveoli and septa. In addition to the structural elements, alveolar macrophages and type I and II pneumocytes were observed (Fig. 1a-d). Thin blood vessels were seen in the alveolar septa, with little to no evidence of inflammatory cells in the alveolar space. After 5 days of incubation, the histologic integrity of the tissue was maintained. The viability of the lung tissue slices was 97% at 48 h, and there were no significant differences in the viability of the freshly obtained (basal) and control (uninfected) slices (100 ± 5% and 116 ± 20%, respectively). In PCLS infected with M. abscessus for 6-24 h, we observed a mixed inflammatory reaction (Fig. 1e-g) mainly composed of abundant foamy macrophages, lymphocytes, and plasma cells. Mycobacterial bacilli were found in the PCLS, particularly in the alveolar lumen, in close contact with the foamy macrophages or intracellularly infecting these cells. At 6 h post-infection, a greater number of epithelioid cells in the alveoli, a few lymphocytes in the septa, and some septa with mild edema were observed, while the histologic architecture was still conserved. After 12 and 24 h post-infection, the alveolar septa showed more edema, vascular congestion, and extravasation of erythrocytes and lymphocytes in the septa and the alveolar spaces (Fig. 1e-g). After 48 h of incubation with the mycobacteria (Fig. 2), we observed damage to the histologic structure: inflammatory infiltrates composed of histiocytes and aggregates of foamy macrophages, as well as nuclear fragmentation of the PMN cells ("nuclear dust").
Rupture of the alveolar septa, vascular congestion and thickening of the alveolar septa in other areas were also noticed ( Fig. 2a-g). Some of the foamy macrophages presented nuclear changes, such as pyknosis, karyolysis and karyorrhexis (Fig. 2). Abundant mycobacteria were observed in the alveolar septum infecting the type II pneumocytes and in the areas of confluence of foamy macrophages (Fig. 2). Langhans multinucleated cells were also observed ( Fig. 2e, g); mycobacterial fragments inside the multinucleated giant cells were sometimes observed.
The interactions of M. abscessus with inflammatory cells and pneumocytes in the alveolar septa are shown in Fig. 3a-f. M. abscessus interacts directly with type I and type II pneumocytes, neutrophils, lymphocytes, and macrophages. Bacilli were observed isolated in the alveoli and alveolar space but also with a tendency to form aggregates in the alveolar septum, near to or in close contact with the epithelial cells; they were also found between cellular debris in areas of inflammation and in close contact with macrophages. We observed the infection of macrophages and type I pneumocytes, as bacilli were seen inside of these cells. Foamy macrophages filled with lipid-containing bodies were frequently found in frozen sections from infected PCLS stained with Oil red-O (Fig. 4).
To confirm the usefulness of this model, we studied the intracellular effect of imipenem and tigecycline on the infected PCLS. The MIC values for tigecycline and imipenem were 1 and 16 μg/mL, respectively, for this M. abscessus strain. Bactericidal and bacteriostatic activity were defined as ≥3 log10 and <3 log10 reduction in the total CFU/mL count, respectively, in comparison with the initial inoculum after 12, 24, 36 and 48 h of incubation, according to standard guides [31]. Bacteriostatic intracellular activity on the infected slices was observed with imipenem. As shown in Fig. 5, the intracellular CFU count decreased only 1 log at 48 h at the highest drug concentration used. A Student's t-test showed no significant difference between the imipenem-treated slices and the untreated control. A better result was obtained with tigecycline, which showed dose- and time-dependent activity and a bactericidal effect at 48 h that reached a reduction of >4 log10 CFU/mL (Fig. 5). A post hoc test showed a statistically significant difference between the control and the slices treated with tigecycline at 1× and 4× the MIC (P < 0.05).
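The bactericidal/bacteriostatic cut-off used here, a ≥3 log10 drop relative to the initial inoculum, can be encoded directly. A small sketch; the example CFU values below are illustrative, not the study's counts:

```python
import math

def classify_activity(initial_cfu, final_cfu, cutoff_logs=3.0):
    """'bactericidal' if the log10 reduction vs the inoculum is >= cutoff,
    otherwise 'bacteriostatic'."""
    reduction = math.log10(initial_cfu) - math.log10(final_cfu)
    return "bactericidal" if reduction >= cutoff_logs else "bacteriostatic"
```

With the ~1 log drop seen for imipenem the call is bacteriostatic, while a >4 log drop, as seen for tigecycline at 48 h, is bactericidal.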
We also corroborated that the intracellular bacteria recovered from the infected PCLS at equivalent time points were viable M. abscessus by lysing the tissue and plating the homogenate on blood agar, as described in the "Materials and methods" section. After incubation, round bacterial colonies with the typical morphology of mycobacteria were observed (Fig. 6a). Subsequent ZN staining of smears from these colonies showed the presence of mycobacterial bacilli (Fig. 6b). These results and the morphological findings described above support the utility of the experimental infection model.
Discussion
Due to the antimicrobial resistance shown by M. abscessus, it is necessary to study and analyze new antibacterial candidates using physiologically relevant models. Here, we evaluated a preclinical model of M. abscessus infection using PCLS and validated it by determining the antimicrobial effects of imipenem and tigecycline. Stable and sustained growth of the bacteria was observed inside the tissue up to 48 h post-infection, allowing us to assess the antimicrobial activity of these drugs. Other approaches to study the intracellular activity of drugs against M. abscessus have been performed in vitro using a variety of macrophages, including bone marrow-derived macrophages, J774 and THP-1 cells [32][33][34].
Animal models have also been used to evaluate the antimycobacterial efficacy of drugs against M. abscessus infection; however, only severely immunocompromised strains of mice such as GKO or SCID have shown acceptable levels of infection [35][36][37]. The advantage of these mouse models is that M. abscessus progressively develops a high level of infection, allowing the detection of significant differences between the M. abscessus control and the drug-treated groups [35].
Nevertheless, in vivo studies have sometimes led to inconsistent results [10]; for instance, bedaquiline treatment reduced the CFUs by 2 logs in a SCID mouse model but was almost inactive in nude mice [35,37]. However, the main limitations of animal models in the search for antimicrobial activity against NTM are the large number of experimental animals regularly used, and the high costs of housing and handling the genetically modified mice [35,38,39]. Additionally, it has been reported that it takes up to 60 days for the study of the infection using the C57BL/6 and GKO mice [36]. These facts contrast sharply with the number of animals used in the present work and the time taken; here we used only ten mice, including those used for standardization, and the ex vivo time course of infection induced by M. abscessus was only a few days. In contrast, we were able to observe the histopathological characteristics of M. abscessus-induced damage and the pathogen's interactions with lung parenchyma cells after a short time post-infection (6-48 h), while animal models need 10 to 60 days to develop lung damage [35,36,38,40]. Studies with M. tuberculosis [19,20] and other pathogens [41][42][43][44][45] using PCLS or tissue explants, where tissue lesions or inflammatory infiltrates are observed at 24-48 h post-infection, support the advantage of these ex vivo models.
Findings such as the presence of foamy macrophages or nuclear fragmentation of neutrophils seen before 24 h in our model corroborate the results reported by Bernut et al. [46], who described the same findings using zebrafish embryos.
At 48 h post-infection, we observed the presence of Langhans cells, which have not been previously reported in murine models of M. abscessus infection. These multinucleated giant cells are a histopathological landmark for mycobacterial infections, where they seem to have an important role in restricting mycobacterial growth [47,48], although they are not specific, as they are also present in numerous granulomatous diseases [49], including those caused by nontuberculous mycobacteria [50,51]. Langhans cells have been reported in granulomatous infiltrates in M. abscessus-induced cutaneous and pulmonary infections in immunocompromised patients [52][53][54][55].
At the histopathological level, in one of the most extensive studies where nine different transgenic murine models were used to analyze the ability of M. abscessus to induce infection, granuloma formation was reported, but multinucleated giant cells were not seen [35]. In contrast, granulomas with Langhans multinucleated cells were found in BALB/c infected with the vaccine strain of M. bovis (the Bacillus Calmette-Guérin vaccine, BCG-1). However, the granulomas containing these multinucleated Langhans cells were obtained from splenic tissues but not from the lungs of the infected mice [56]. In general terms, the histopathological findings that we describe here are akin to those reported by other investigators [35,36,40,57].
Ordway et al. [36] infected knockout mice with a high-dose aerosol of M. abscessus, and demonstrated peribronchiolar inflammatory infiltrates at 15 days; at day 30, they observed granulomas composed of aggregates of lymphocytes and foamy cells; and by day 60, the granulomas were larger, as well as the inflammatory infiltrates. When guinea pigs were infected in the same way, a more severe granulomatous inflammation in the lungs was observed at day 60, and it was characterized by sheets of epithelioid macrophages and organized aggregates of lymphocytes that infiltrated septal walls and filled alveolar spaces.
De Groote et al. [57] developed an animal model of chronic M. abscessus infection using granulocyte-macrophage colony-stimulating factor knockout (GM-CSF KO) mice. They reported inflammation of the bronchioles and alterations of the alveolar architecture in infected animals, with the presence of macrophages and neutrophils. Acid-fast bacilli were observed inside macrophages, as well as some free bacilli in the alveoli, and after 4 months of chronic infection, large accumulations of foamy macrophages within the alveoli were observed. In our PCLS model, the uninfected slices kept their tissue architecture intact, while the histological analysis of PCLS infected with M. abscessus showed the presence of polymorphonuclear cells, epithelioid cells, foamy macrophages, multinucleated giant cells, and signs of an early granulomatous inflammation at 6 and 12 h post-infection. We also observed acid-fast bacilli within macrophages, and isolated bacilli or aggregates in the alveolar spaces. The infected PCLS showed tissue destruction, with loss of the histological architecture, rupture of the alveolar septa, and areas of inflammatory aggregates with nuclear PMN fragmentation, as well as vascular congestion and thickening of the alveolar septa at 48 h post-infection. The histological changes found in in vivo models are similar to those found in our M. abscessus-infected BALB/c mouse PCLS model, but appear after a shorter infection time, indicating that our model responds much as lung tissue does in an animal model.
An additional advantage of the ex vivo infection model using PCLS is that we used normal BALB/c mice, while murine infection models to study M. abscessus require expensive genetically modified mice, thus our methodology decreases the cost per experiment.
The development of our ex vivo 3D model enables the study of an experimental M. abscessus infection within a physiological milieu and provides the opportunity to study the infection process not only in lung tissue from laboratory animals but also in human lung tissue, as reported by Ganbat et al. [19], who used a similar ex vivo tissue culture model for mycobacterial infections. These authors infected human lung tissue samples with two strains each of three different mycobacterial species, including M. abscessus, and focused on the very early onset of TB infection and on the specific interactions between mycobacteria and the cells of the lung. The morphologic comparison between freshly obtained and cultured ex vivo lung specimens showed no noticeable differences, consistent with our results. The authors found, as we also did, that mycobacteria can infect different cell types, including macrophages, neutrophils, monocytes, and type II pneumocytes. The presence of foamy macrophages in PCLS infected with M. abscessus was clearly demonstrated (Fig. 4). Foamy macrophages are a distinctive characteristic of granulomas associated with virulent mycobacteria; these cells have been observed in different inflammatory conditions, both infectious and noninfectious, and in natural and experimental TB in particular [58]. The formation of foamy macrophages has been described in models of M. abscessus infection both in vitro [59,60] and in vivo [61,62]. It has been suggested that the induction of foamy macrophages creates an intracellular microenvironment that allows the persistence of mycobacteria, and that the fatty acids accumulated in their cytoplasmic vacuoles represent a source of nutrients for the bacilli [63].
Our findings on the ability of M. abscessus to infect lung cells in PCLS allowed us to evaluate the effects of two well-known antibacterial drugs. Imipenem, an antibiotic that is part of the multidrug therapy recommended for M. abscessus infections, showed little effect against the bacteria at the sub-MIC concentration, but a bacteriostatic effect at the medium and high concentrations. This was an expected result, as other studies have shown the same behavior when intracellular activity was investigated [32,64]. We also evaluated tigecycline, which has previously demonstrated good in vitro MICs against M. abscessus isolates and has been used as a rescue treatment for complicated M. abscessus and Mycobacterium chelonae infections [8]. In this study, tigecycline was bactericidal at 24 and 48 h post-infection at 1× and 4× the MIC value (P < 0.05). Previously, this compound had already demonstrated an observable bactericidal effect against resistant M. abscessus in a hollow-fiber model system for pulmonary disease [65], as well as intracellular activity in a THP-1 macrophage model [32].
Conclusions
In conclusion, PCLS represent a useful 3D model for the ex vivo study of M. abscessus infection and the activity of new antimycobacterial drugs. This model has the simplicity and reproducibility of in vitro models, but its major advantage is the presence of more than forty differentiated cell types with metabolic capability, polarization, and the extracellular elements found in in vivo models. Furthermore, this model complies with the 3Rs principle [66,67] and provides the opportunity for testing new drugs against M. abscessus, decreasing the use of costly and tedious animal models.
Clinicopathological characteristics and prognosis of adult ovarian granulosa cell tumor
Abstract
Objectives We aimed to demonstrate the clinical characteristics and risk factors associated with recurrence of adult granulosa cell tumor (AGCT), as well as the pregnancy and long-term outcomes among patients in a single institution in China. Patients and methods We reviewed 141 patients with AGCT in Peking Union Medical College Hospital between January 1983 and September 2015. Results The mean patient age was 45.1 years (16–78 years), and the mean tumor size was 8.8 cm (1–40 cm). The most common symptom was irregular menstruation (31.9%, n=45). The disease distribution was stage I in 136 patients, stage II in three patients, and stage III in two patients. Eighty-seven patients (61.7%) underwent radical surgery, while 54 (38.3%) underwent fertility-sparing surgery, of whom five subsequently had a total of five pregnancies. Fifty-two patients underwent pelvic and/or paraaortic lymphadenectomy, and none of them showed lymph node metastasis. The median follow-up period was 72.7 months (8.9–344 months). Twenty-six patients (18.4%) developed recurrence during the study period, with a median time to recurrence of 68 months (7–312 months). Initial stage (stage IC vs IA) and nonstaging surgery were independent risk factors for recurrence in both univariate and multivariate analyses for stage I AGCT patients. Conclusion Tumor stage is an independent risk factor for recurrence in patients with AGCT. Staging surgery is recommended for patients with AGCT, though lymphadenectomy may be omitted. Complete tumor resection is important for patient survival in patients with AGCT recurrence. Long-term follow-up is required, even in early-stage AGCT patients.
Background
Ovarian granulosa cell tumor (GCT) is a rare ovarian neoplasm derived from sex-cord stromal cells in the ovaries. GCT comprises two histologic subtypes: adult granulosa cell tumor (AGCT) and juvenile granulosa cell tumor, of which AGCT accounts for >95% of GCTs and 2%-5% of all ovarian malignancies. Its clinical characteristics include an indolent clinical course and late recurrence, with a better prognosis compared with ovarian epithelial cancers.
Complete tumor resection consisting of bilateral adnexectomy and hysterectomy is the standard treatment for AGCT, with adjuvant chemotherapy recommended in patients with advanced stage or stage I disease with high-risk factors (tumor rupture, high mitotic index). Fertility-sparing surgery with complete staging is recommended for young patients wishing to maintain fertility. However, experience and evidence for the optimal treatment of AGCT are limited. The rarity of AGCT means that the incidence of lymph node metastasis is not well known and the need for lymphadenectomy is controversial. A few previous studies have considered factors associated with AGCT recurrence, such as International Federation of Gynecology and Obstetrics (FIGO) stage, tumor rupture, tumor diameter, age, menopause, staging surgery, adjuvant chemotherapy, nuclear atypia, and mitotic rate. [1][2][3] The purpose of the present study was to analyze the clinical characteristics and risk factors for recurrence of AGCT based on the long-term outcomes in a large series of patients treated at a single institution in People's Republic of China. In addition, we discuss the need for lymphadenectomy and the role of fertility-sparing surgery in AGCT in light of the previous studies.
Patients and methods
The study has been approved by the ethics committee of Peking Union Medical College Hospital and is in accordance with the Helsinki Declaration of 1975. Informed written consent from each patient was obtained. Medical records of all patients diagnosed with AGCT of the ovary in Peking Union Medical College Hospital from January 1983 to September 2015 were reviewed. Patients with juvenile GCT were excluded. The patients' medical records were reviewed and the following information was collected: age, menopause status, tumor diameter, serum CA125 before surgery, chief complaint, FIGO stage, type of surgery, adjuvant therapy, relapse characteristics and relapse treatment, and follow-up information. Follow-up information was obtained from outpatient files or by telephone interviews with the patients or their relatives.
Fertility-sparing surgery was defined as preservation of the uterus and at least one ovary. Total abdominal hysterectomy and bilateral salpingo-oophorectomy were classified as radical surgery. Staging surgery included peritoneal washing, peritoneal biopsy, omentectomy, pelvic and/or para-aortic lymphadenectomy, and appendectomy as optional procedures according to the surgeon's experience and the intraoperative findings.
Patients were staged according to the FIGO staging system in 2009. Patients with stage II-IV or presence of high-risk factors were given chemotherapy after surgery.
Patients were classified into a recurrence group and a nonrecurrence group. Disease-free survival was defined as the time from initial surgery to the first recurrence or censor date.
Statistical analysis
Statistical analysis was performed using SPSS version 15 (SPSS, Inc., Chicago, IL, USA). Recurrence curves were calculated using the Kaplan-Meier method and compared with log-rank tests. Two-sided p-values were considered statistically significant at p<0.05. Multivariate analysis was conducted using a Cox regression model to identify independent factors associated with recurrence. Variables with p<0.05 in univariate analysis were selected for multivariate analysis.
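Disease-free survival curves of the kind compared here follow the Kaplan-Meier product-limit estimator, which is easy to sketch from first principles. The toy data below are illustrative; the study itself used SPSS.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  : follow-up time per patient (e.g. months to recurrence or censor)
    events : 1 if recurrence was observed, 0 if the patient was censored
    Returns a list of (t, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    surv, curve, seen = 1.0, [], set()
    for t, _ in data:
        if t in seen:
            continue
        seen.add(t)
        d = sum(e for tt, e in data if tt == t)   # events at time t
        n = sum(1 for tt, _ in data if tt >= t)   # at risk just before t
        if d:
            surv *= 1 - d / n                     # product-limit step
            curve.append((t, surv))
    return curve
```

For example, with four patients recurring at 1, 2 and 4 months and one censored at 3 months, the estimate steps down to 0.75, 0.5 and finally 0.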
Results
A total of 141 patients underwent surgical treatment for AGCT in Peking Union Medical College Hospital during the study period. The mean age of the patients was 45.1 years (16-78 years), and 46 patients (32.6%) were postmenopausal. The mean tumor size was 8.8 cm (1-40 cm). The most common symptoms included irregular menstruation (31.9%, n=45), postmenstrual bleeding (17.7%, n=25), and abdominal pain (16.3%, n=23). Preoperative serum CA125 levels were available for 98 patients and were elevated in 17 patients (17.3%). The patient characteristics are listed in Table 1.
Most patients had stage I disease (96.4%, n=136), three stage II, and two stage III. All patients underwent surgery, including fertility-sparing surgery in 54 (38.3%) and radical surgery in 87 (61.7%). Fifty-two patients underwent pelvic and/or para-aortic lymphadenectomy, but none showed lymph node metastasis. The surgical pathological features are given in Table 2.
Fifty-six patients (39.7%) received adjuvant chemotherapy after surgery, including bleomycin, etoposide, and cisplatin in 21 patients; cisplatin, vincristine, and bleomycin in nine; cisplatin and cyclophosphamide in nine; paclitaxel and carboplatin in five; cisplatin, adriamycin, and cyclophosphamide in four; and other regimens in eight patients. Of the 54 patients who received fertility-sparing surgery, five underwent unilateral cystectomy, 40 underwent unilateral salpingo-oophorectomy, and nine underwent staging operations. Five of the 54 patients subsequently had a total of five pregnancies. All five had stage I disease and delivered healthy babies at term.
The mean age of the patients with recurrence was 40.2 years (27-58 years). The mean time from initial surgery to relapse was 68 months (7-312 months), including 12 patients (46.2%) who suffered recurrences >5 years after their initial diagnosis.
The most common location for recurrence was the pelvic cavity (69.2%, n=18). Fourteen patients suffered one recurrence and 12 patients suffered more than one recurrence, with a maximum of seven recurrences. The longest period from initial diagnosis to recurrence was 26 years.
Treatments for recurrence included surgery alone in five patients, surgery and chemotherapy in 18, surgery and chemoradiation in two, and surgery and radiofrequency ablation in one patient.
Sixteen patients were alive without evidence of disease at the last follow-up, four were alive with disease, two had died as a result of recurrence, three were lost to follow-up, and one patient was receiving therapy.
More than 95% of the patients had stage I disease, and we therefore analyzed relapse factors for stage I disease. The clinicopathological factors associated with disease-free survival in 136 patients with stage I AGCT are shown in Table 3. In univariate analysis, recurrence was associated with stage IC (p=0.01) and nonstaging surgery (p=0.041) (Figure 1A and B), while multivariate analysis also identified stage IC (hazard ratio=3.839, 95% confidence interval=1.430-10.309) and nonstaging surgery (hazard ratio=2.673, 95% confidence interval=1.092-6.543) as independent risk factors for recurrence.
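As a sanity check on hazard ratios like these, the Wald statistic can be recovered from a reported 95% CI: on the log scale, the half-width of the interval is 1.96 standard errors. The sketch below is an approximation for illustration, not a re-analysis of the study data.

```python
import math

def wald_from_hr_ci(hr, ci_low, ci_high):
    """Approximate Wald z and SE of log(HR) from a 95% confidence interval,
    assuming the interval is symmetric on the log scale."""
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    z = math.log(hr) / se
    return z, se

# Stage IC hazard ratio from the multivariate analysis:
# HR 3.839 (95% CI 1.430-10.309)
z, se = wald_from_hr_ci(3.839, 1.430, 10.309)
```

This gives z ≈ 2.67, comfortably past the 1.96 threshold, consistent with stage IC being reported as a significant independent risk factor.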
Discussion
We investigated the prognostic significance of risk factors including age, menopausal status, tumor size, surgical method, FIGO stage, and adjuvant chemotherapy for recurrence in patients with AGCT. Both univariate and multivariate analyses identified FIGO stage and surgical method as significant prognostic factors.
FIGO stage is the most widely accepted risk factor for recurrence in patients with AGCT, and several studies have shown correlations between higher disease stage and increased recurrence. 4,5 The 5-year survival was reported to be 75%-95% in patients with early-stage disease, falling to 25%-50% in patients with advanced-stage disease. In the present study, >95% of patients presented with stage I disease, and we therefore analyzed the risk factors for recurrence in stage I disease. Our results also identified initial FIGO stage at diagnosis as an independent risk factor for recurrence. Complete staging surgery is recommended for patients with early-stage AGCT, 5 and our results also suggested that staging surgery was beneficial in patients with early-stage AGCT. The recurrence rate was lower in patients with staging surgery compared with those without (12.5% vs 22.5%). Upstaging due to microscopic extraovarian disease has also been reported, 6 and two patients in our series with disease apparently confined to one ovary were upstaged after complete staging surgery.
No patients with complete staging in the current study experienced recurrence or death during the follow-up period, compared with recurrences in 9 of 63 (14.3%) patients who did not undergo complete staging. These results suggest that staging surgery is important in patients with presumed early-stage AGCT. Surgeons should also aim to identify and excise extraovarian disease in patients with presumed early-stage disease. 6 Surgery is the primary treatment for AGCT. GCT often affects younger patients, and fertility preservation is thus an important issue. However, the role of fertility-sparing surgery remains unclear. Some studies found that fertility-sparing management was associated with high recurrence and low survival rates, 7,8 while others found no difference in recurrence rates between conservative and radical surgery in patients with stage I disease. 2,9 In the current analysis, fertility-sparing surgery seemed to be a risk factor for recurrence in AGCT (32.6% vs 13.3%); however, further analysis of the 54 patients who received fertility-sparing surgery revealed that most recurrences occurred in patients who did not undergo staging surgery (28.9%, 13/45), while the recurrence rate in those who did undergo staging was only 11.1% (1/9). Eighty-three patients underwent radical surgery, with a recurrence rate of 13.3%. The recurrence rate of 11.1% in younger patients with fertility-sparing and staging surgery was thus acceptable compared with that of 13.3% in patients receiving radical surgery. These findings reinforce the importance of staging surgery in patients who want to retain fertility.
We also performed a literature search for information on pregnancy outcomes in patients received fertility-sparing surgery using PubMed, with the keywords: fertility sparing and pregnancy and granulosa cell tumor and ovary. We identified four articles that met the inclusion criteria. 5,10-12 The cumulative data regarding pregnancy outcomes and recurrence rates in patients with AGCT after fertility-sparing surgery are summarized in Table 4. A total of 139 of 515 patients underwent fertility-sparing procedures, with recurrence rates of 0%-33.3%. Only 14.4% (20/139) of patients had pregnancies after fertility-sparing surgery. Both our results and those of the previous studies suggested that recurrence was common in patients with fertility-sparing surgery, and close monitoring is therefore needed in these patients. Furthermore, hysterectomy and salpingo-oophorectomy are strongly recommended after completion of family planning.
Previous studies reported incidences of retroperitoneal lymph node involvement during initial surgery of 0%-12.5%. 6,13 A total of eight papers each reported more than 50 cases of AGCT, 1,2,4,6,11,13-15 and the summarized results indicate that the incidence of lymph node metastasis was only 3.9% (Table 5). In accord with previous reports, we found no cases of lymph node metastasis among 52 patients with nodal tissue evaluation. Karalok et al 4 reported the highest rate of lymph node dissection in patients with AGCT to date, and showed that among 121 of 158 (76.6%) patients with systematic lymph node dissection, only three had lymph node metastasis. In addition, lymphadenectomy was not associated with recurrence. These findings suggest that the incidence of lymph node metastasis at primary surgery is extremely low in AGCT, and lymphadenectomy may thus be omitted during staging surgery.
Patients may develop endometrial hyperplasia or endometrial cancer as a result of prolonged exposure to high levels of estradiol secreted by the GCT, with reported incidences of endometrial hyperplasia and endometrial cancer of 21.5%-71% and 1.3%-13.2%, respectively. 16 Sixteen patients (11.3%) in the current study had endometrial hyperplasia and two had endometrial cancer (1.4%) at diagnosis of GCT, similar to a previous report. 16 Current guidelines for the treatment of GCT recommend comprehensive staging surgery including total hysterectomy and bilateral salpingo-oophorectomy in postmenopausal/postmenstrual women with AGCT. However, conservative surgery is always recommended in younger patients with a wish to maintain fertility if the tumor is confined to one ovary and the endometrium is normal. van Meurs et al 17 studied endometrial abnormalities in 1,031 patients with GCT during long-term follow-up in a population-based cohort study. Among 490 patients who did not undergo hysterectomy at the time of GCT diagnosis, eight patients (1.6%) developed hyperplasia and two (0.4%) developed endometrial cancer. They concluded that the development of endometrial abnormalities after surgical removal of GCT was extremely rare, and lower than the risk of endometrial cancer in the normal population. Furthermore, the endometrial abnormalities were accompanied by recurrence of GCT in 8 of the 10 patients. In our study, 54 patients underwent fertility-sparing surgery and no endometrial abnormalities were observed during follow-up. Fourteen of these patients suffered recurrence, but none developed endometrial lesions. This suggests that spontaneous regression may occur following the discontinuation of estrogen exposure by removal of the GCT. In addition, most patients (90%) with endometrial hyperplasia or cancer were above 40 years old, consistent with a previous study 16 which reported that endometrial pathology was rarely observed in GCT patients under the age of 40 years. Overall, these findings support the safety of conservative surgery in young patients wishing to retain fertility.
However, given the common coexistence of endometrial abnormalities, it is important to evaluate the endometrium using ultrasound or curettage when considering conservative surgery in young patients. In our study, 18.4% of patients with stage I disease had recurrence, with a mean time from initial surgery to relapse of 68 months (range 7-312 months). AGCT is characterized by slow, indolent growth with late recurrence. The longest disease-free interval in the current study was 26 years, while the longest reported interval between initial diagnosis and recurrence was 37 years. 18 Moreover, almost half of our patients (46.2%, 12/26) had recurrence after more than 5 years. These findings highlight the importance of long-term follow-up of patients with AGCT, even those with early-stage disease.
The pelvis has been reported as the most common site of recurrence. 19 The same was true in our study, with the pelvis being the most common site (69.2%, 18/26), followed by the abdomen (38.5%, 10/26), lung (7.7%, 2/26), and retroperitoneum (7.7%, 2/26). There is currently no standard management for relapsed GCT, and multiple approaches such as surgery, chemotherapy, radiotherapy, and hormone therapies have been reported. 20-22 All patients with recurrence in the current study received surgery, with or without other modalities: five patients underwent surgery alone, 18 underwent debulking surgery and chemotherapy, two received surgery and radiotherapy, and one received surgery and radiofrequency ablation. Mangili et al 19 suggested that optimal debulking surgery was the cornerstone treatment for relapse of GCT. However, the absence of residual disease remained a prognostic factor, even at recurrence, and the 5-year overall survival rates from first recurrence were 55.6% and 87.4% for patients with or without residual tumor at subsequent debulking surgery, respectively. Karalok et al 23 reported on 16 patients with relapsed AGCT and showed that maximal debulking could be achieved in all patients with unifocal recurrence, compared with only three patients (37%) with multifocal recurrence. They also found that multifocal recurrence and the presence of residual disease were associated with poorer progression-free survival and overall survival, and concluded that maximum surgical effort is warranted for recurrent AGCT of the ovary. Chua et al 20 demonstrated the feasibility and safety of peritonectomy to achieve maximal cytoreduction in patients with recurrent AGCT. In the present study, all patients underwent surgical debulking using multiple surgical approaches, such as extensive peritonectomy, diaphragmatic resection, and partial hepatectomy, to achieve optimal cytoreduction.
Sixteen patients remained alive without disease at the end of the follow-up period, four were alive with disease, two had died of disease, three were lost to follow-up, and one was undergoing treatment. The outcome of patients with relapsed AGCT thus seems acceptable, compared to that of ovarian epithelial cancer. Complete tumor resection appears to provide the best chance of patient survival, and surgeons should aim to excise recurrent foci.
The present study had some limitations. First, the rarity of GCT makes it hard to carry out well-designed studies. Second, this was a retrospective study conducted over a long period, and some information was missing as a result of loss to follow-up, while changes in practice patterns over the course of the study may also have affected the outcomes. However, the present study also had several strengths. Notably, it was the first single-center study conducted in China, with a long follow-up period (median follow-up: 72.7 months). Furthermore, the number of cases (n=141) represents one of the largest reported studies of patients with GCT, and all patients were handled by experienced gynecologic oncologists. Finally, in addition to AGCT outcomes, we also analyzed pregnancy results, which have rarely been reported in previous studies.
Conclusion
Most cases of AGCT are diagnosed at an early stage, but complete staging surgery is recommended for all patients with AGCT. Lymph node metastasis is rare in AGCT, and lymphadenectomy can thus be omitted from staging surgery. Unilateral salpingo-oophorectomy including staging surgery is the optimal treatment in younger patients who wish to retain fertility, with no compromise in terms of survival. In addition, it is important to carry out ultrasound or curettage to evaluate endometrial abnormalities in young patients considering conservative surgery. Maximal surgical resection is important for survival in patients with AGCT relapse, and lifelong follow-up is required, even in patients with early-stage disease, because of the risk of late recurrence.
Comparing the operations and challenges of pig butchers in rural and peri-urban settings of western Kenya
The purpose of this cross-sectional, observational study was to describe pig butcher enterprises in western Kenya, highlighting differences in operational processes and challenges between rural and peri-urban settings. Fifty pig butchers were interviewed using questionnaires in two districts, Kakamega (peri-urban) and Busia (rural). Results showed that pig butchers were central to the coordination of activities required to connect pig farmers to pork consumers in their communities. Differences between rural and peri-urban enterprises included the use of agents to find pigs, the average market weight of pigs, pig prices per kilogram, transport and marketing. Butchers were challenged by credit and capital constraints, seasonality, high pig prices and high search costs. Butchers should be encouraged to have pork inspected and should be included in outreach programs intended to prevent the spread of zoonotic pathogens, since they are the last intervention point before pork is consumed. Use of a tape measure for estimating pig weight could help remove inequalities between farmers' and butchers' abilities to estimate pig weights and could reduce search costs for the butcher, thus increasing the equity and efficiency of trade between farmers and pig butchers in western Kenya.
INTRODUCTION
In rural economies of many tropical countries, pigs are an important livelihood activity (Mutua et al., 2011; Lekule and Kyvsgaard, 2003). In western Kenya, almost 90% of pigs are sold to local pig butchers who sell pork in their butcheries (butcher shops) (Kagira et al., 2010; FAO, 2012). The appreciation for pork as an animal food source in the Western Province has been recognized (Kagira et al., 2010b; Mutua et al., 2011) and the number of pigs slaughtered in Kenya has been steadily rising (FAOSTAT, 2009). Approximately 280,000 pigs were slaughtered in Kenya in 2009, compared to 163,908 in 2000, representing an annual growth rate of 8% (FAOSTAT, 2009). As pig slaughter numbers increase there is value in furthering our understanding of pig marketing, particularly in rural areas where farmers often face challenging marketing conditions (Chamberlin and Jayne, 2013). The financial benefit to farmers for rearing pigs depends on remunerative marketing opportunities.
*Corresponding author. E-mail: mlevy@uoguelph.ca. Fax: 519-763-8621.
Improvements to marketing systems not only increase the economic benefits of livestock to the individual producer but also reduce food costs to consumers and stabilize food supply for the communities which these markets serve (World Bank, 2008; Randolph et al., 2007).
The Western Province has a very high prevalence of poverty (Krishna et al., 2004) and the second highest population of pigs in Kenya (FAO, 2012), so studying the marketing opportunity for pig farmers in this location is important. In contrast, pig rearing in the Central Province is more intensive and farmers can market their pigs to butcheries in urban centers and to pork processing factories within their proximity (FAO, 2012). Smallholder farms in the Western Province range from 0.2 to 2.5 acres and average seven people per household (Rarieya and Fortun, 2010). Mixed crop and livestock farms are the most common, and on farms with low acreage, chickens and pigs tend to be the most commonly chosen livestock (Kagira et al., 2010). Traditional pig management is the dominant pig rearing system in western Kenya, with 95% of the nearly 90,000 pigs raised in this manner (FAO, 2012). The pigs are native or crossbred species and are allowed to scavenge for food during non-harvest seasons to keep input costs low (Mutua et al., 2010; Lekule and Kyvsgaard, 2003). Farmers have been encouraged by researchers and local government staff to keep their pigs tethered during education workshops intended to reduce the transmission of the Taenia solium parasite (Wohlgemut et al., 2010). Farmers keep between 1 and 3 growing pigs on their farms and women are predominantly responsible for their care (Kagira et al., 2010; Mutua et al., 2010). The challenges commonly identified include feeding, breeding, diseases and low selling prices (Mutua et al., 2011, 2010; Kagira et al., 2010). Strengthening extension services has been recommended to promote healthy pig production, improve breeding and increase farmers' knowledge of pig rearing (Mutua et al., 2011). The pig industry is monitored by the Kenyan government. The Pig Industry Act outlines the regulations for selling live pigs, the licenses required to slaughter pigs and the conditions upon which a pig butcher can sell pork (Anonymous, 2006). Although the pig industry in Kenya is relatively small (0.3 million pigs) compared to other livestock, the consumption of pig meat is anticipated to increase with urbanization and social views resulting from education (Wabacha et al., 2004; FAO, 2012). Pig marketing has been studied in Busia, western Kenya (Kagira et al., 2010b), where challenges and characteristics were highlighted. The challenges presented by Kagira et al. (2010b) included, inter alia, 'conflict with regulatory authorities', erratic pig supply particularly after an African swine fever (ASF) outbreak, excessive travelling distances to purchase pigs, seasonal fluctuations in the market, transport and competition. At the time our study began, there was a paucity of literature available on the subject of pig marketing in western Kenya. Our study provides a detailed description of the processes involved in getting a pig from the farm gate to the consumer, which has not been previously documented for these locations. Our study is also an extension of the work by Kagira et al. (2010b) as it includes butchers from two districts: Busia, which is rural, and Kakamega, which is peri-urban, allowing us to compare the characteristics and challenges of butcher enterprises between the two districts.
The primary purposes of this research are to: 1) describe the pig butcher and his role in the process of marketing pork while assessing differences between rural and peri-urban settings; 2) assess the butchers' perspectives on the challenges facing their operations. Understanding the key differences will aid policy makers in addressing disadvantaged settings, or aid in prioritizing extension material and services for rural or peri-urban settings. A record of current pig marketing and the processes of pig butchers will allow for future monitoring of how the industry evolves.
Study area
This cross-sectional, observational study was conducted in the Busia and Kakamega Districts of western Kenya. Busia is a rural district bordering on Uganda, with a population of 488,075 (Anonymous, 2009). Kakamega, the capital of the Western Province, is surrounded by peri-urban farms and is situated in the Kakamega District, with a population of 1,660,651 (Anonymous, 2009). Two sub-locations in each district, Butula and Funyula in rural Busia, and Shinyalu and Ikolomani in peri-urban Kakamega, were chosen out of convenience because of their large population of pigs, history of pig keeping, high prevalence of poverty, and because smallholder farmers in these locations had been previously studied (Mutua et al., 2011; Kagira et al., 2010b; Thornton et al., 2002).
Butcher selection
All butchers known to source pigs from the villages within the four sub-locations were enumerated in 2008 and 2009 by local government meat inspectors, pig farmers and village elders based on their personal recollection. The enumeration process was repeated in 2009 to ensure that new butchers, or those not enumerated in 2008, were invited to participate. Each enumerated butcher was invited to participate in the study either in June of 2008 or June of 2009. To fit the inclusion criteria for the study, butchers had to purchase pigs at least once every month for the purpose of butchering and selling the pork; middlemen who purchased pigs for the purpose of reselling to butchers were excluded. Unlicensed butchers were allowed to participate in the study.
Survey design, questions and beta test
A structured questionnaire was designed to capture information about butchers, their processes and their opinions on the challenges of pig butcher operations in the areas of procurement, transport, slaughter, marketing and government regulation. Questions about the butcher included age, education level, how long the butcher had been in the business, and how the butcher got into the business. Questions about the procurement of pigs included who the butchers purchased pigs from, how many pigs were purchased weekly, whether or not they resold pigs they purchased, all of the methods they used to find pigs, and whether the butchers had contracts with farmers. Transport questions included methods of getting to the farm to see pigs, methods of transporting the pigs, how far the butcher typically travelled in a day searching for pigs, and how much time the butcher spent in a day searching for pigs. Slaughter questions included how often the slaughter slab was used, what proportion of pigs were inspected by government inspectors, and the labour required for slaughter slab help. Questions about the marketing of pork included whether the butcher sold raw pork or both raw and cooked pork, the number of pigs purchased and sold for the shop each month of the year, the number of employees in the shop, and whether ugali (a staple food made with ground maize) was sold with cooked pork. Questions about government regulation included the costs of their license renewals, the nature of the licenses, and how often they were required to renew their licenses.
The survey also included 5-point Likert scale rankings for a list of potential challenges in the areas of procurement, slaughter, inspection, transport, capital, marketing and regulation. In 2008, the pre-designed questionnaire was beta tested on one butcher in the field and then modified before other interviews were conducted. The questionnaires may be obtained by request to the authors.
Interview process
Pig butchers were initially contacted by telephone or in person by a village elder or a government inspector who described the research study. The butchers who were willing to participate provided a convenient time and location for an interview. An individual, face-to-face interview was conducted with each butcher in either 2008 or 2009 by one of the researchers and a local villager who spoke both English and Swahili. The survey questions were asked in Swahili unless the butcher was comfortable responding in English. All answers were translated into English and transcribed by the researcher onto the data collection form. Neither the government inspectors nor the village elders were present for the interview. The butchers were assured that the information they provided was confidential and that only aggregated data would be used for the study. The butchers were interviewed at their shop or home, or while they were in transit searching for pigs. All butchers volunteered to be part of the survey and gave approximately 1.5 h of their time at each visit to complete a questionnaire. As a gesture of appreciation, butchers were given a package of 100 small bags, which are commonly provided to customers to carry purchased pork. Research ethics approval was granted by the University of Guelph in Ontario, Canada and by the Veterinary Director General in Nairobi, Kenya before the interviews were conducted.
Data management and analysis
The data were entered into Microsoft Excel 2007 (Microsoft, Redmond, WA, USA) by one researcher and then validated independently by a second researcher. All analyses were conducted in SAS 9.1 (SAS Institute Inc., Cary, NC).
Describing butchers and assessing differences across districts
Descriptive tables were created using means and standard deviations (SD) for continuous variables, and proportions for categorical variables. To assess differences experienced by butchers across districts (rural Busia or peri-urban Kakamega), Student's t-tests were used on continuous variables. For categorical variables, chi-squared analysis was used to assess differences across districts and odds ratios were calculated. A Fisher's exact test was used rather than the chi-square test if an expected cell value for any categorical outcome was less than 5 (Davis, 2007). Where variables differed between districts, they were presented separately in the results section; otherwise the overall result was presented.
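The district comparisons for categorical outcomes rely on odds ratios and, for sparse tables, Fisher's exact test. As a minimal illustration of both calculations (the 2×2 cell counts below are hypothetical, not the study's data), they can be computed from a contingency table with only the Python standard library:

```python
from math import comb

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p(x):  # P(X = x) under the hypergeometric null
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p(a)
    lo = max(0, col1 - (n - row1))  # smallest feasible cell count
    hi = min(row1, col1)            # largest feasible cell count
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

# Hypothetical table: rows = district, columns = practice yes/no
print(round(odds_ratio(3, 1, 1, 3), 2))              # 9.0
print(round(fisher_exact_two_sided(3, 1, 1, 3), 4))  # 0.4857
```

In practice a statistics package (such as the SAS procedures the authors used, or `scipy.stats.fisher_exact` in Python) would be preferred; the sketch only makes the underlying hypergeometric computation explicit.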
Assessing butcher challenges and seasonal variation
To assess the differences between the scores butchers gave to the challenges in each district, a Kruskal-Wallis test was performed. Each challenge was then individually assessed using a Wilcoxon-Mann-Whitney test. Bonferroni and Sidak adjustments were performed on p-values to control for experiment-wise error rates. To assess the differences in monthly pig purchases between districts, the Kruskal-Wallis and Wilcoxon-Mann-Whitney tests were performed as described earlier for the butchers' challenges.
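The Bonferroni and Sidak adjustments mentioned above are simple transformations of each raw p-value given the number of comparisons. A minimal sketch (the p-value and number of tests below are illustrative, not taken from the study):

```python
def bonferroni(p, m):
    """Bonferroni-adjusted p-value: multiply by the number of tests m, cap at 1."""
    return min(1.0, m * p)

def sidak(p, m):
    """Sidak-adjusted p-value: 1 - (1 - p)^m for m independent tests."""
    return 1.0 - (1.0 - p) ** m

# Illustrative raw p-value from one of m = 5 pairwise comparisons
p_raw, m = 0.01, 5
print(round(bonferroni(p_raw, m), 4))  # 0.05
print(round(sidak(p_raw, m), 4))       # 0.049
```

The Sidak adjustment is slightly less conservative than Bonferroni, which is why both are sometimes reported side by side when controlling the experiment-wise error rate.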
The butchers
Table 1 provides the number of butchers who were enumerated and the number of butchers who participated in the study. In total, 51 pig butchers were enumerated, and 50 were studied; 25 from rural Busia and 25 from peri-urban Kakamega. The majority of butchers were interviewed in 2008; however, additional butchers were added in 2009 because they were either missed in the 2008 enumeration or were new to the business in 2009 (Table 1). One butcher who was enumerated in 2008 could not be reached in either year, and was not interviewed (Table 1). One farmer who purchased pigs only in the busy season and then slaughtered and sold the pork from his farm, and one middleman, were interviewed but excluded from the study. All butchers were male except one. The butchers were between the ages of 20 and 60, with a median age of 33 years [mean age of 36.5 years (sd = 10.71)]. On average, the butchers had been in the business for 8.5 years (sd = 7.41). There were several new butchers in the business, with 19% having less than 1 year, 12% between 1 and 2 years, 16% between 3 and 5 years, and 53% more than 5 years of experience. Twenty-six percent (26%) of the butchers also identified farming as another livelihood activity, but none of the pig butchers butchered other livestock. Education levels varied: 10% had no education, 20% attended some primary school, 37% completed primary, 6% attended some secondary school, 25% completed secondary school and 2% completed college. Many butchers learned the butchering business from a family member (44%). Others learned on their own (19%), from working for another butcher (17%), from a friend (14%), from a farmer group or co-operative (3%), or in school (3%). The butcher business was sometimes generational, as 30% of the butchers had fathers who butchered either cattle or pigs.
Figure 1. The pig marketing system: communication paths between stakeholders (smallholder farmers, agents, butchers, transporters, cookers, cutters and servers) and the flow of the pig moving chronologically through the marketing system.
An overview of the pig marketing system
In the indigenous pig-marketing system being described, the pig butcher was responsible for the coordination of activities and people necessary to transform pigs into marketable pork. Figure 1 depicts the interactions, activities and stakeholders linked to the pig-butcher enterprise. The pigs were not purchased in a central market; instead, butchers purchased pigs directly from the smallholder farmer at the farm gate, sometimes using an agent to aid in finding pigs. A purchased pig was transported to the butcher's shop, the butcher's home, or directly to the slaughter slab (abattoir), depending on the time of day that the pig was purchased. From the butcher's shop or home, the pig was transported to the slaughter slab. Pigs were usually slaughtered in the morning. The pork was inspected at the slaughter slab before being transported back to the butcher shop to be sold to consumers either as raw or cooked pork. The butcher enterprise, slaughter slab and meat inspection were regulated by the government.
Procurement
Most market-weight pigs changed ownership only once between the farmer and the butcher before being sold for pork. Half of the butchers (53%) purchased live pigs and resold an average of 4.8 pigs per month (20% of the pigs they purchased) to other butchers. Butchers found pigs by having farmers come to their shops to notify them (97% of respondents), using agents to find pigs (75%), going to farms to look (69%), calling a farmer on a cell phone (67%), or getting a call from a farmer on a cell phone (54%). Few butchers reported farmers bringing pigs to the shop (11%). Butchers discouraged people from bringing pigs to the shop to protect themselves from inadvertently purchasing a stolen pig. No butchers from Busia reported purchasing pigs from a supplier on a truck, whereas a small percentage of butchers from Kakamega (20%) did report that as a method for finding pigs. Table 2 presents the operational practices of pig butchers that differed significantly between districts. Busia butchers were 6.1 times (p ≤ 0.05) more likely to do repeat business with farmers than butchers in Kakamega (Table 2). Few (11%) of the butchers said they had an agreement with farmers for purchasing pigs; however, none of the agreements were financial in nature. The agreements were only verbal arrangements to do business in the future. Prices were never discussed until the time of the transaction. All exchanges were completed using cash. Although most butchers (75%) reported using agents to find pigs, the proportion of pigs purchased through an agent in rural Busia was significantly lower than that of peri-urban Kakamega (Table 2). Agents were more like informants in that they put the butcher and farmer into contact with one another for a flat fee. Whether informed by an agent or contacted by a farmer, the butcher always travelled to the farm to see the pig. Butchers reported travelling for 5.4 h (sd = 3.39) or 24.2 km (sd = 28.79) in a day to source pigs. Travel time and distances did not significantly differ
between districts. None of the butchers had access to credit for the procurement of pigs. A few butchers explained that they often could not purchase their next pig until they had sold enough pork from the pig currently in their shop. Sometimes a butcher had an opportunity to purchase a pig, but by the time the capital was raised, the pig had been sold to another butcher. Butchers mentioned, informally, a desire to expand their inventory of pigs, seeing opportunity in buying young pigs to feed to market weight, or in keeping pigs as a safety net for when they lacked capital or could not find a pig to purchase. Busia butchers were 11.1 times (p ≤ 0.10) more likely to keep pigs on their farm than Kakamega butchers (Table 2). Busia butchers also reported significantly lower average pig weights and pig purchase prices (per kg) than Kakamega butchers (Table 2). Butchers scored the factors affecting the price they were willing to pay for pigs on a 5-point Likert scale from most important (5) to least important (1) as follows: size of pig (4.91), health of pig (4.86), time of year (3.86), sex (3.29), breed (3.27) and age (2.73). The size and health of the pig were scored significantly higher than the time of year (p ≤ 0.09), sex, breed and age (p ≤ 0.0002). All butchers estimated the weight of the pigs without the use of a weight scale.
Butchers reported that 29% of the farmers knew the weight of their pig. Butchers negotiated the price of the pig directly with the farmer. Negotiations began with the farmer stating a price. The gender of the farmer who bartered with the butcher was approximately evenly distributed between males (54%) and females (46%). Purchases were cash sales, so once a butcher purchased the pig, he bore the cost of the pig if the carcass was condemned at the time of inspection, if the pig was stolen, or if the pig died during transport.
Transportation
Butchers required transportation in several situations: to get to farms to look at pigs, to move the pig back to a temporary holding space such as the butcher shop or the butcher's home, to move the pig to the slaughter slab, and to transport the pork carcass in a transport box from the slaughter slab to the butcher shop. Butchers purchased 62% of their pigs from outside of their own village. The main reason, cited by 97% of the butchers, was that the farmers wanted too much money in their own village. One butcher commented that neighbors could see him making progress and he therefore thought they expected a higher price out of jealousy. Other reasons included: not enough pigs in their own village (83%), pigs too small in their own village (81%), and pigs not healthy in their own village (81%). Table 3 lists the methods of transport that butchers used for getting to farms to see pigs, transporting pigs from the farm, and transporting pork from the slaughter slab to the shop. A butcher walked to a farm to see a pig if the farm was close enough. However, pig butchers in Busia were more likely to use bicycles, whereas Kakamega butchers were more likely to use rented motorcycles, for farms that were not within walking distance (Table 3). Purchased pigs were most often walked from the farm, but might be tied to a bicycle in Busia or to a motorbike in Kakamega (Table 3). Some butchers mentioned that transporting a pig on a bicycle was illegal. If butchers found pigs close enough to their business, they did not incur transportation costs because they could walk the pigs. Butchers paid for transport less frequently in Busia than in Kakamega (Table 2). Some problems associated with transport that were mentioned informally by the butchers included the following: during transport, authorities asked the butcher for a letter from the person who sold him the pig; pigs were too far away; cycling through rough terrain was very difficult; bicycles got damaged while looking for pigs; and butchers were fined for
allegedly purchasing a stolen pig. Some problems with travel that were mentioned informally by the butchers included: farmers not being at the farm when the butcher arrived to purchase the pig (requiring another visit); the farmer having sold the pig by the time the butcher got to the farm; or the butcher getting to the farm only to find the pig was not large enough to be slaughtered.
Slaughter
Slaughter slabs were privately run enterprises, and butchers were charged a fee for each pig slaughtered. A government inspector examined the carcass at the slaughter slab and condemned unsafe meat. After inspection, the butcher was provided with a ticket to display alongside the pork in the shop to show the inspection date. All but one butcher said that 100% of their pigs were slaughtered at the slab. However, some butchers (14%) admitted that not all pigs were inspected, especially in very busy seasons such as Christmas. Government inspectors were to be available at each slaughter slab for a short time each day, usually in the mornings. Two butchers mentioned informally that government inspectors did not come every day, or came later than expected on some days, resulting in some missed pork inspections. Overall, butchers reported that 93% of the pigs they slaughtered were inspected. This did not differ by district.
Marketing
Pork was sold in local shops either as raw pork or as a plate of cooked pork, served optionally with ugali, which was sold separately. Ugali, the staple food in the area, is maize flour cooked with water into a dough-like consistency. Butchers in rural Busia were 20 times (p ≤ 0.05) more likely to sell cooked pork than butchers in peri-urban Kakamega (Table 2). For butchers that sold cooked pork, less than half (41%) of the pork they sold was cooked, while 59% was sold raw. Butchers hired a mean of 2.8 employees (sd = 1.38) to help run their operations. Employees served many functions, including cutting pork, serving customers, cleaning, cooking, and helping with slaughter (Figure 1). Some butchers relied on employees to look for and transport pigs.
Government regulation
Butchers were required to have health certificates for each employee who handled pork in their shops. A local business license was also a prerequisite for keeping a shop. One butcher admitted he was operating part-time without a license. Business licenses, health certificates, and weigh-scale inspections were charged to butchers on an annual or semi-annual basis.
Butchers' perspectives on their challenges
Table 4 lists the challenges scored by butchers (using a Likert scale) in each district from highest (5) to lowest (1).
Busia butchers scored seasonal variation and access to capital as their highest challenges (Table 4). The Busia butchers scored seasonal variation higher than butchers in Kakamega (p ≤ 0.05). Seasonality reduced sales and forced butchers to lower prices. The reasons cited for lower sales were that people needed money for school fees, farm inputs, and planting, and therefore did not have extra money to buy pork. Kakamega butchers scored pig prices and finding pigs as their highest challenges. Kakamega butchers scored finding pigs as a higher challenge than butchers in Busia (p ≤ 0.05).
Selling the pork was the lowest scored challenge for butchers in Busia and Kakamega (Table 4). Figure 2 illustrates the seasonal variation of pig purchases by month.
From August through September, pig purchases increased, after the biggest harvest and the sale of crops, which gave farmers disposable income to purchase pork. November and December were very busy months, attributed to people having money available from the second harvest. Also, family members came back to the villages for the December holidays, bringing money from their city jobs, and families were more likely to eat pork during the holidays. No significant differences in pig purchase counts were found between Busia and Kakamega butchers for any given month.
The butcher and the role of butcher enterprise in pork marketing
The butchers in the local pig-marketing system are central to the coordination of activities required to connect pig farmers to pork consumers in their communities. The butchers provide smallholder farmers the only legal marketing outlet for inspected pork, as they are required to have health and business licenses to handle and sell pork. Butchers invest their own capital and assume the risks associated with purchasing pigs, transporting them, having them inspected, and selling the pork. They also create employment opportunities in their communities. Employment creation is an important benefit of local markets (Puskur et al., 2011). Many butchers, particularly in rural locations, also cook the pork. Consumers rely on the butchers to have the pork inspected and to safely handle and cook the pork. It is important that butchers appreciate the need to have pigs inspected and to ensure that pork is properly cooked. Infection from zoonotic pathogens such as porcine cysticercosis, trichinellosis, and toxoplasmosis can occur from the consumption of infected and undercooked pork (Thomas et al., 2012). The prevalence of porcine cysticercosis in the study locations has been reported to be between 4 and 4.5% at the pig level, and between 9 and 15% at the farm level, depending on the testing method and the study (Kagira et al., 2010c; Mutua et al., 2011). Outreach and training on the prevention of zoonotic pathogens have tended to focus on farmers (Flisser et al., 2003; Ngowi et al., 2009; Wohlgemut et al., 2010). Educating butchers is also important, as results from this research show that butchers do not have all their pork inspected, and the butchers that sell cooked pork are the last prevention point before pork is consumed. Kagira et al.
(2010b) reported that the District Veterinary Officer felt there was insufficient staffing and transport capacity to support the number of slaughter slabs in Busia District. The government should equip inspectors with the resources to travel to each slaughter slab more frequently than once a day during high seasons and ensure visits are consistent during lower seasons. In our study, the one unlicensed butcher that operated part-time did not use slaughter slab facilities, have pork inspected, or have a license to handle and sell pork. Un-inspected pork increases the health risk to the community, as discussed earlier, and may compromise the reputation of the industry if people fall ill from un-inspected pork. Butchers that do not pay for slaughter, inspection, or licenses have fewer expenses, which makes for illegal and unmerited competition. Further research should be completed to understand community perceptions of illegal pig slaughtering, and the impact unlicensed butchers have on the pork industry.
Procurement
Butchers spent considerable time searching for pigs. The travel time and expense of travelling to farms, only to find the pigs were not market weight, the farmer was not present at the farm to negotiate a price, or the pig had already been sold, was challenging for butchers. It is also costly to have to visit each farm and negotiate each pig purchase given the distances that butchers must cover. The cash-only exchange of goods without any prearranged agreements or warranties has been characterized as a "flea market economy" by Fafchamps (2004). In the absence of weigh scales, grading systems, and contracts, butchers will not risk a pig acquisition without seeing the pig, despite doing repeat business with farmers, and farmers will not allow the butcher to take the pig without cash payment. The lack of contractual enforcement (and therefore use of contracts) and grading systems has been recognized as costly to marketing systems in SSA (Poulton et al., 2010; Coulter et al., 2002; Kyeyamwa et al., 2008). Pig weight was the most important criterion for butchers in evaluating pig prices. Without weigh scales, butchers and farmers had to estimate the weight of the pig to negotiate the price. Since butchers reported that only 29% of farmers were able to estimate the weight of pigs, there was likely an inequality of abilities to estimate pig weights during price negotiations. Farmers who under-estimate their pigs' weight may under-value the pig and consequently receive a poor pig price. Smallholder farmers generally only sell one pig per year (Kagira et al., 2010b), so low revenue from a poorly negotiated pig sale could have a substantial impact on annual income, and lower farmers' incentive to raise pigs. Kyeyamwa et al.
(2008) identified a similar scenario in Ugandan cattle markets, where traders had the experience of negotiation, could better estimate cattle weights, and knew prices in the various markets available to them. Busia farmers reported receiving low prices in the study by Kagira et al. (2010). To reduce a butcher's advantage of being able to better estimate pig weights, the use of tape measures should be encouraged. Weight charts have been produced for pigs in the area of study (Mutua et al., 2011). Tape measures have been used in the absence of weigh scales for sheep and other ruminants (Kunene et al., 2009). Better estimated weights could also improve communication with the butcher, to further reduce search costs and increase information flow. Producer groups could also help increase the efficiency of pig exchanges between butchers and farmers by producing a set of standards for pricing pigs, and tracking the prices of pigs sold based on the standards applied. These pig sales could be shared on local marketing boards (Kyeyamwa et al., 2008; Shiferaw et al., 2011). A lack of market information was a reported challenge by pig farmers in Busia (Kagira et al., 2010). Shiferaw et al. (2011) suggested that collective action is not as important for local markets as it would be for upstream markets. However, farmer groups could help to reduce the transaction costs associated with the local marketing system described, and could benefit and promote cooperation between all stakeholders in the market.
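As a rough illustration of the tape-measure approach, the classic girth-squared rule of thumb for estimating live pig weight can be coded as below. The formula and the example measurements are illustrative assumptions only; they are not the weight charts published by Mutua et al. (2011) for pigs in the study area, which should be preferred locally.

```python
def estimate_pig_weight_kg(heart_girth_cm: float, body_length_cm: float) -> float:
    """Estimate live pig weight from two tape-measure readings.

    Uses the widely cited girth-squared rule of thumb
    (weight_lb ~= heart_girth_in^2 * body_length_in / 400).
    Illustrative only -- not the regression from Mutua et al. (2011).
    """
    CM_PER_IN = 2.54
    LB_PER_KG = 2.20462
    girth_in = heart_girth_cm / CM_PER_IN
    length_in = body_length_cm / CM_PER_IN
    weight_lb = girth_in ** 2 * length_in / 400
    return weight_lb / LB_PER_KG

# Hypothetical example: a pig with a 90 cm heart girth and 100 cm body length.
print(round(estimate_pig_weight_kg(90, 100), 1))
```

A farmer with only a tape measure and a printed chart of such estimates could enter a price negotiation on a more even footing with the butcher.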
Leveraging technology such as SMS messaging for cell phones or other cellular communication protocols could also improve the information flow for pig exchange. Not all farmers have cell phones, but the results from this study have shown that farmers and butchers do communicate about potential pig exchanges using cell phones. Use of electronic media has been acknowledged as a potential solution for increasing information access (Kyeyamwa et al., 2008; Poulton et al., 2010). Electronic solutions have already seen traction in larger markets. Kenya does have an electronic commodity exchange board, the Kenya Agricultural Commodity Exchange (KACE), which is privately operated and facilitates commodity exchanges. A simplified, localized messaging system (either electronic or a simple bulletin board) could effectively service rural markets as well, if it could be made cost effective and sustainable. A lack of sustainable financing has hindered the potential benefit that could be achieved with market information systems in SSA (Tollens, 2006).
Slaughter and transport
The challenges associated with slaughter and transport included condemned carcasses and death or loss of the pig in transit. Butchers assumed the risks of loss of the pig from the moment the cash was exchanged with the farmer. Loss of one pig could result in a complete depletion of capital, which could force the butcher out of business. The risk of purchasing a pig and not being able to receive revenue from it due to loss is amplified in a marketplace where butchers are constrained by capital.
Insurance programs to protect butchers from losses due to condemned carcasses or transport should be researched for feasibility and discussed with butchers to evaluate uptake of such programs. Currently, if a butcher has a carcass condemned, he may not have enough capital to purchase another pig for his shop, so we feel that butchers would take an interest in insurance products to back their pig purchases. Targeting butchers for insurance programs could reduce the costs associated with monitoring, moral hazard, and adverse weather conditions that have disrupted farmer insurance programs in the past (Poulton et al., 2010).
Marketing
Selling the pork was not a highly scored challenge for the butchers. An evaluation of cost structures and marketing margins was beyond the scope of this paper; however, detailed net income statements should be assessed to understand the efficiency of butchers in rural and peri-urban settings, and the potential profitability of pigs for farmers and pig butchers in these markets.
Challenges and rural and peri-urban differences
Butchers faced a myriad of challenges in the day-to-day functioning of their enterprises. The rural Busia butchers scored seasonal variation and capital as their highest challenges (Table 4). The two challenges are likely related. In Busia, farmers are more dependent on farm income, so their disposable income fluctuates with harvest or wet and dry seasons. The dry seasons are difficult for marketing pigs, so butchers buy and market fewer pigs and lower their pork price, which negatively impacts their income and working capital. The effect of seasonal market fluctuations is not unique to pork demand and reflects the seasonal pricing challenges of many commodities in SSA (Williams et al., 2006; Michelson, 2012). Suggested approaches to consumption smoothing include the use of warehouse receipt systems (Coulter and Onumah, 2002), and better infrastructure to promote distance trading (Poulton, 2010). However, warehousing pork requires electricity and freezers, neither of which is available to these butchers. Kakamega butchers' greatest challenges were high pig prices and finding pigs. In turn, they relied more on agents to find pigs. Butchers used agents to find pigs, rather than middlemen, likely because it was less expensive to pay a search fee to an agent than a mark-up fee to a middleman. Generally, as the number of exchanges increases, the farmer's share of the retail price tends to decrease, which deters participation (Kyeyamwa et al., 2008). Researchers did not get the sense from butchers that there were many pig middlemen in the market, and only encountered and interviewed one middleman (excluded from the study), who claimed to purchase most of his pigs in Uganda (Busia borders Uganda). However, Kagira et al.
(2010) suggested that in Busia, "amorphous" middlemen did purchase pigs to resell them to butchers, but these researchers could not quantify margins or numbers of pigs. The Kenyan Pig Industry Act discourages the activities of middlemen, as it is legal for a farmer to sell pigs only to other farmers, a licensed pig butcher, or a licensee of a bacon factory (Anonymous, 2006). Butchers were capital constrained, with no access to credit, as is seen with many small enterprises in marketing chains in SSA (Atieno, 2001; Kyeyamwa et al., 2008; Ajala and Adesehinwa, 2007; Jabbar et al., 2008). Some butchers could not purchase their next pig until they had enough revenue from the pork currently being sold in their shop. During low seasons, when butchers are charging less per kg of pork, raising the capital for the next pig becomes even more difficult, and butchers in turn have to lower the price they offer farmers. Busia butchers were more likely to keep pigs as part of their own farm asset mix, likely to ensure a steady supply of pork for their shops, or a buffer against a short-fall of capital to purchase another pig. It may also indicate that the transaction costs associated with purchasing pigs are higher in Busia, and that butchers therefore have more incentive to integrate their operations vertically (Klein et al., 1978; Coase, 1937). The Busia butchers also purchased smaller pigs, which meant they had less marketable pork per pig, requiring them to purchase pigs more often. The challenges described in this paper agree with those presented by Kagira et al. (2010), who identified travel, pig transport, and seasonality as challenges in Busia. Their research also mentioned police and authoritative conflicts and outbreaks of African swine fever causing pig shortages. Butchers in the current study were given the opportunity to add to the list of challenges provided by the researchers. However, they did not add the challenges mentioned by Kagira et al.
(2010). Other researchers have more broadly attributed credit, transportation, communication, and corruption as limitations to the effectiveness of agricultural markets in emerging economies (Barrett and Mutambatsere, 2005; Kydd and Dorward, 2004; Kyeyamwa et al., 2008).
Rural butchers rode bicycles and paid for transport less often, whereas peri-urban butchers relied more on motor transport. Most butchers in rural Busia, but only a few butchers in Kakamega, sold cooked pork. People in Busia may find it more challenging to cook meat because of firewood scarcity or the additional costs incurred. The demand for cooked pork in Kakamega may have been too low to make cooking pork worthwhile in most marketplaces. Another possible explanation is that in Busia, there were two market days each week. Local farmers converged on the market to sell their wares on these two days. In contrast, Kakamega markets were established marketplaces that were always open. As there are differences between the rural and peri-urban markets, approaches to intervention, educational programming, or regulatory policy should consider these differences.
Study limitations and challenges
This study used a convenience sample from four sub-locations: Butula and Funyula in Busia, and Shinyalu and Ikolomani in Kakamega. The differences which contrasted Busia and Kakamega butchers were extensive. Our samples are therefore not likely representative of many markets in Kenya, which differ in population density, pig rearing systems, infrastructure, and consumer demand. Central Kenya and the markets around Nairobi, for example, are much higher density areas: pig-rearing is more intensive, transport conditions are different, and commercial pork processors such as "Farmer's Choice" operate large facilities that likely offer a greater marketing opportunity to farmers in those areas (FAO, 2012; Wabacha et al., 2004). Having livestock officers and village elders enumerate and enroll pig butchers likely made butchers feel compelled to participate in the study, and butchers may have been reluctant to discuss some aspects of their business as a result. Unlicensed butchers may have been under-represented, as livestock officers were likely unaware of their operations and so could not enumerate them. The proportion of pigs that were reported to be slaughtered may have been over-reported as a result. Surveying pig butchers was challenging, as they are often in transit searching for pigs.
Conclusions
Understanding the pig-butcher enterprise and the pork marketing system may lead to innovations, interventions, or education opportunities to increase marketing efficiencies and improve product quality, which ultimately should increase profitability for farmers and butchers and make safe protein sources more accessible to resource-poor people. Several differences between rural and peri-urban market settings were identified in this study, including pig sizes, pig prices, agent use, farmer-butcher relationships, methods of travel and transport, and the marketing of pork (cooked or raw), and these should be given consideration when addressing policy issues or extension services. Butchers service a large number of smallholder farmers and are key to the marketing of pork. They also add employment opportunities for people in their communities. Further research is required in the areas of public health, innovation, profit margins, value-chain improvements, and marketing approaches to ensure a sustainable indigenous pork market. For example, public health can be enhanced by ensuring inspectors are available more frequently during high consumption seasons. Market information can be improved by innovations such as a marketing board. Marketing efficiencies and profitability for farmers and butchers can be improved by promoting the tape measure as a tool to estimate the weight of pigs. If farmers become more knowledgeable about the weight of the pigs they are selling, the communication between butchers and farmers will be better and the trade of pigs will be more equitable. Farmer groups could aid in reducing transaction costs incurred by both the farmer and the butcher, and should be further explored.
Figure 1.
Figure 1. The communication of people, the movement of the pig, and the activities coordinated by pig butchers in getting pigs to local markets in the Busia District (rural) and Kakamega District (peri-urban) of western Kenya.
Figure 2.
Figure 2. Mean number of pigs purchased per month by butchers in Busia and Kakamega Districts, western Kenya, 2008 to 2009. Source: Field data from survey of pig butchers taken in 2008 or 2009 (Table 1). No significant differences in pig purchase counts were found between Busia and Kakamega butchers for any given month.
Table 1.
Count of pig butchers who were enumerated and voluntarily participated in a cross-sectional observational study in Busia and Kakamega Districts of western Kenya, 2008 to 2009.
Table 2.
The operating practices of pig butchers found to be significantly different between rural (Busia) and peri-urban (Kakamega) Districts in western Kenya, 2008 to 2009. Source: Field data from survey of pig butchers taken in 2008 or 2009 (Table 1). Differences of proportions (%) across districts were assessed with a chi-squared analysis. a Differences in means across districts were assessed with Student's t-tests.
Table 3.
Proportion of butchers using various modes of transportation to locate and transport pigs to the butcher shop and slaughter slab in Busia (Bus) and Kakamega (Kak) Districts of western Kenya, 2008 to 2009.
Source: Field data from survey of pig butchers taken in 2008 or 2009 (Table 1).
Table 4.
Challenges in operating a pig butcher enterprise as scored by the relative importance by butchers, illustrated by the mean value of a score from 1 (low) to 5 (high) challenges in Busia and Kakamega Districts, western Kenya, 2008 to 2009.
A Survey of the Accounting Industry on Holdings of Cryptocurrencies in Xiamen City, China
This is the first survey conducted in China on the holding of cryptocurrencies. Although cryptocurrencies have existed in the world for more than a decade, because the exchange of cryptocurrencies is banned in China, there is no guidance on the holding of cryptocurrencies in China's accounting standards. Moreover, although the exchange of cryptocurrencies is prohibited by the Chinese government, holdings of cryptocurrencies by Chinese entities and individuals cannot be prevented. Thus, we conducted a survey of investors' attitudes towards cryptocurrencies in Xiamen City, a special economic zone (SEZ) and a pilot free trade zone (FTZ) in China. The survey respondents commonly defined cryptocurrencies as investments (45%), inventories (19%), and intangible assets (36%). A total of 84% of respondents stated that the value of a cryptocurrency should be represented by its fair value. These results are similar to those obtained in a survey by The Digital Assets Accounting Consortium (DAAC), but different from the tentative agenda decision of the International Financial Reporting Standards Interpretations Committee (IFRSIC). Additionally, 65% of respondents stated that they prefer to accept cryptocurrencies as cash-equivalent currencies, and these cash-equivalent currencies were considered to have two main functions: a medium of exchange (56%) and a monetary unit for pricing goods and services (52%).
Introduction
Since Bitcoin was invented in 2008 (Nakamoto 2008), more than 10,707 new cryptocurrencies (Investing 2022) have been created (as of March 2022). Generally, a cryptocurrency is a digital currency that is designed to work as a medium of exchange. Cryptocurrencies are decentralized digital currencies that are not issued by any jurisdictional authority. All transactions of cryptocurrencies are written in a big, distributed ledger that can be copied by every participant assigned to a node of the blockchain network. Cryptocurrencies are secured by encryption hash algorithms, signed with a digital signature, timestamped, and verified by participants to ensure security and prevent fraud (Silvia 2019).
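The hash-linked, timestamped ledger structure described above can be sketched minimally in Python. This is a toy illustration under stated assumptions (the block fields, the `make_block` helper, and the use of SHA-256 over a JSON payload are choices for the sketch); real cryptocurrencies additionally use digital signatures, consensus rules, and a peer-to-peer network.

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Create a timestamped block whose hash commits to its own
    contents and to the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A two-block toy chain: altering block 1 would change its hash
# and break the prev_hash link stored in block 2.
genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
block2 = make_block(["alice -> bob: 10"], prev_hash=genesis["hash"])
assert block2["prev_hash"] == genesis["hash"]
```

Because each block's hash covers the previous block's hash, tampering with any earlier record invalidates every later block, which is what makes the distributed ledger auditable by all participants.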
Blockchain technology is the supporting technology that underlies cryptocurrencies. Cryptocurrencies are crypto assets, which are digitized by blockchain technology. Crypto assets are among the most important financial assets in modern financial markets, with an impact on modern finance comparable to that of infrastructure such as highways, mobile phones, and the Internet. Blockchain technology has the potential to revolutionize many service industries, including finance and banking, and has become an innovative medium and transaction system with high value (Boring 2019).
On 5-6 March 2019, in London, the International Financial Reporting Standards (IFRS) Interpretations Committee (IFRSIC) held a meeting and discussed how the IFRS standards could apply to the holding of cryptocurrencies. A tentative agenda decision on holdings of cryptocurrencies was published. According to the tentative agenda decision, the IFRSIC (2019a) noted that cryptocurrencies are crypto assets.
A cryptocurrency is a digital currency that is recorded on a distributed ledger. For security, it is encrypted by a mathematical cryptography algorithm. A cryptocurrency does not have the legal status of a fiat currency issued by a jurisdictional authority or a central bank. Most cryptocurrencies are issued by private companies, which do not have any issuing permission from the central government. The holders of cryptocurrencies do not have the kind of legal contracts they usually would in traditional financial markets (IFRSIC 2019a).
Based on the tentative agenda decision, the IFRSIC (2019a) proposed that the IAS 2 Inventories accounting standard best fits holdings of cryptocurrencies when the holder intends to sell the crypto assets in the ordinary course of business. The IFRSIC (2019a) also proposed that if IAS 2 Inventories is not appropriate for a holding of cryptocurrencies, the holding falls under the IAS 38 Intangible Assets accounting standard; in most cases, IAS 38 will be the applicable accounting standard for holdings of cryptocurrencies.
After the tentative agenda decision was published by the IFRSIC (2019a), many accounting bodies worldwide, including accounting firms and accountants, made comments to the IFRSIC. As of 15 May 2019, at least 20 comment letters had been received by the IFRSIC (2019b, Comment letters). These comments represent the major opinions of the accounting industry on holdings of cryptocurrencies. This was the first time that the issue of cryptocurrency holding had been a focus of accounting bodies.
However, there have not been any comments from Mainland China. Do the Chinese accounting bodies not care about the issues of cryptocurrency holding?
Actually, since Bitcoin was created in 2008, cryptocurrency holding by private companies and individuals has been rapidly increasing in China. However, in contrast to the enthusiasm of private companies and individuals for cryptocurrency holding, the Chinese government has set up a number of regulations to prevent the widespread purchase of cryptocurrencies in China. The conflict between the private holding of cryptocurrencies and the government regulations means that no Chinese accounting body has made a statement on the suggestions of the IFRSIC (2019a). Although the exchange of cryptocurrencies is prohibited by the Chinese government, holdings of cryptocurrencies by Chinese entities and individuals cannot be prevented. Although there are a number of governmental regulations preventing the exchange of cryptocurrencies in China, as a technical problem it is still necessary to consider how holdings of cryptocurrencies should be recorded in financial statements. As long as some Chinese entities and individuals hold cryptocurrencies, this will remain a question for accounting standards. On the other hand, even if Chinese accounting bodies do not care about this issue, many international accounting bodies do. If the Chinese accounting bodies do not consider the implications of this issue in China now, it will be a continuing issue in the future. Thus, the motivation of this study was to investigate this issue now in relation to the situation in China.
On 3 December 2013, five Chinese central government regulators, including the People's Bank of China (PBOC), jointly issued a governmental document named A Circular on Preventing Risks Related to Bitcoin (CSRC 2013), in which they warned that no Chinese financial institution may offer its services for Bitcoin. The regulators noted that Bitcoin may be defined as a special virtual commodity in nature, but because it does not have the legal status that fiat currencies do, it must not be used or circulated as a currency in China. The regulators stated that Bitcoin has no central issuer, a limited total volume, no territorial restrictions on its use, and anonymous users. In China, Bitcoin is not seen as a real currency, although it is called a currency; because it is not issued by the monetary authorities, it does not have the characteristics of currency in terms of legal tender status and mandatory payment. For these reasons, the regulators have asked that national financial institutions not provide any Bitcoin-related businesses or services. The regulators also require online Bitcoin exchanges to keep trading records on file, and measures must be taken to prevent speculative trading and the money-laundering risks associated with Bitcoin. The regulators have warned that individuals who use Bitcoin bear any Bitcoin-related risks themselves (Zhu 2013).
Although Bitcoin-related business was prohibited by the Chinese regulators, many cryptocurrencies were still used in China. Along with the development of cryptocurrencies in China, further regulations were published by the Chinese government.
On 4 September 2017, seven Chinese central government regulators, including the PBOC (CSRC 2017), again jointly issued a governmental document, named The Announcement on Preventing Financial Risks from Initial Coin Offerings (ICO Rules). Its purpose is to protect investors from financial risks. Under the ICO Rules, ICOs that raise cryptocurrencies are illegal in China. Cryptocurrencies such as Bitcoin, Ethereum, and others are considered illegal cryptocurrencies, and their issue and trading are prohibited in China by the Chinese government. If an investor tries to raise money by issuing cryptocurrencies on the black market or selling cryptocurrencies through an irregular trading channel, this behavior is illegal and prohibited by the Chinese government. Because the cryptocurrencies involved in ICOs are not issued by the Chinese official authorities, they are not legally accepted as a fiat currency in China (LLC 2018).
On 3 September 2021, eleven top Chinese economic regulators, including the National Development and Reform Commission (NDRC 2021), jointly issued a tightened regulation named the Notice on Regulating the Activities of Virtual Currency Mining. According to the notice, because virtual currency mining has many negative impacts on the economy, such as energy wastage and the creation of carbon emissions, which do not promote industrial development or technological improvements, industrial entities will be strictly punished if they engage in mining Bitcoin or other virtual currencies. The main punitive measures are the imposition of high electricity prices on state-owned and private companies undertaking virtual currency mining activities that would otherwise pay household electricity prices.
On 24 September 2021, ten Chinese central government regulators, including the PBOC (2021), jointly issued an even tighter regulation on virtual-currency trading and speculation, named The Notice on Further Preventing and Disposing of the Risk of Hype in Virtual Currency Trading. Under the new notice, cryptocurrency trading in China was banned outright. Because of concerns that financial transactions in cryptocurrencies would have serious negative impacts on the Chinese economic and financial order, these kinds of activities were made illegal and were banned in the country. According to the notice of the PBOC (2021), the main negative activities that may result from financial transactions in cryptocurrencies include financial gambling, illegal fund-raising, commercial fraud, pyramid-scheme investments, money laundering, and serious threats to the safety of people's property. Immediately, all cryptocurrency-related business activities were defined as illegal and strictly banned in China.
Although the mining of virtual currency and the trading of cryptocurrencies are banned under the regulations announced by the Chinese government, in practice some individuals and companies do hold cryptocurrencies in China. It is thus necessary to conduct a survey on holdings of cryptocurrencies in China. Beyond the regulations, it is also necessary to discuss, technically, how transactions in cryptocurrencies should be recorded in financial statements. As China's central bank is developing a digital currency electronic payment (DCEP) system (Xinhua 2020), a survey of holdings of cryptocurrencies will provide some useful suggestions for policy makers.
Generally, under the reform and opening up policy, the Chinese government usually prefers to operate and test new business policies in a special economic zone (SEZ) or a pilot free trade zone (FTZ). For example, according to its annual reports (Meitu 2021, 2022), during the year ended 31 December 2021, the company Meitu 1 had invested in 940.88523 units of Bitcoin and 31,000 units of Ethereum, accounted for as intangible assets of approximately US$45.1 million and US$117.3 million, respectively, when revalued at fair value using prevailing market prices. Moreover, on 12 October 2020, the PBOC issued CNY 10 million (about USD 1.47 million) of its first digital currency, known as the Digital Renminbi, in Shenzhen (Xinhua 2020), one of the first SEZs in China. Because the Digital Renminbi is not a cryptocurrency like Bitcoin, we do not discuss it in this paper. However, if a cryptocurrency is used as a reference point for the Digital Renminbi, the survey on cryptocurrencies can provide some reference suggestions for the government.
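As a rough arithmetic check on the figures above, fair-value measurement of such holdings is simply units held multiplied by the prevailing market price. A minimal sketch (the per-unit prices below are implied from the reported totals, not disclosed figures):

```python
# A rough arithmetic illustration: fair value of a cryptocurrency holding is
# units held multiplied by the prevailing market price. The per-unit prices
# below are implied from the reported totals, for illustration only.

def fair_value(units_held, market_price):
    """Fair value = units held x prevailing market price."""
    return units_held * market_price

btc_units = 940.88523            # reported Bitcoin units held
btc_reported_value = 45.1e6      # reported fair value, USD
implied_btc_price = btc_reported_value / btc_units

eth_units = 31_000               # reported Ethereum units held
eth_reported_value = 117.3e6     # reported fair value, USD
implied_eth_price = eth_reported_value / eth_units

print(f"Implied BTC price: ${implied_btc_price:,.0f}")   # roughly $48k per BTC
print(f"Implied ETH price: ${implied_eth_price:,.0f}")   # roughly $3.8k per ETH
```

The implied prices are consistent with market levels around the reporting date, which illustrates why fair value, rather than acquisition cost, conveys the current economic exposure of such holdings.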
Since the first pilot free trade zone was set up in 2013, China has established 18 pilot FTZs (CGTN 2019). Setting up new pilot FTZs is a strategic policy to deepen China's reform and opening up in the new era. In China, the pilot FTZs are meant to serve as pioneers of the country's reform and opening up. To better integrate the domestic economy with international practices, the Chinese government has granted special policies to the pilot FTZs. Before a new opening up policy is implemented nationwide, it can first be implemented and tested in the pilot FTZs.
Xiamen City became a special economic zone (SEZ) in 1980 and a pilot free trade zone (FTZ) in 2015. With support from the Xiamen City Federation of Social Science Associations (XMSK 2020), we conducted a survey in Xiamen City focusing on whether it would be possible for the Chinese government to allow people within the SEZ and FTZ of Xiamen City to trade cryptocurrencies and, if so, how holdings of cryptocurrencies should be dealt with in accounting. In what follows, we discuss the majority opinions from the global accounting industry and present the results of the survey conducted in Xiamen City.
Are Cryptocurrencies Intangible Assets, as Defined in IAS 38?
The IFRSIC (2019a) stated that cryptocurrencies are intangible assets, as defined in IAS 38. For a cryptocurrency to be an intangible asset, the basic assessment is that it must meet the requirements of an intangible asset as defined in IAS 38: an asset is intangible if it can be separated from the holder and sold or transferred individually, and it is not a monetary item that gives the holder a contractual right to obtain a fixed number of units of currency.
Some accounting bodies have noted that the application of IAS 38 may not produce relevant information for investors. Because cryptocurrencies are mostly held by investors for investment purposes, these bodies have suggested that it is inappropriate to apply IAS 38 to holdings of cryptocurrencies.
Although a cryptocurrency meets the definition in paragraph 8 of IAS 38 for classification as an intangible asset, the purposes of holding a traditional intangible asset and holding a crypto asset for investment are totally different (Kim 2019).
Regarding the tentative agenda decision made by the IFRSIC (2019a) on the application of IAS 38 to holdings of cryptocurrencies, only a few accounting bodies agreed with this conclusion. Many other accounting bodies did not agree with the conclusion of the IFRSIC (2019a). For example, The Digital Assets Accounting Consortium (DAAC) conducted an industry survey for the period from February to April 2019 and found that only 19% of respondents carrying cryptocurrencies answered that holdings of cryptocurrencies are considered intangible assets, whereas as many as 64% of respondents answered no to this question (Boring 2019).
The Taiwan Accounting Research and Development Foundation (ARDF Taiwan) stated that the application of IAS 38 to holdings of cryptocurrencies may not generate relevant information for investors. When the characteristics and nature of cryptocurrencies are compared with those of intangible assets, they are not exactly the same as defined in IAS 38, because cryptocurrencies produce economic benefits through being sold or invested, while general intangible assets produce economic benefits through business operation (Liu 2019).
The Securities and Exchange Commission of Brazil (CVM) stated that if a cryptocurrency is bought for the purposes of investment, trading, or use as a medium of exchange, it can clearly not be considered within the scope of IAS 38 Intangible assets, because the nature of an intangible asset is related to the maintenance of operational activities (Ferreira and Silva 2019).
Generally, intangible assets are usually defined as goodwill or non-liquid assets. When a cryptocurrency is treated as an intangible asset, its true nature may not be easily separated from other intangible assets in financial statements (Boring 2019).
The information produced under IAS 38 may not be the most useful for investors, because under IAS 38 the value of holdings of cryptocurrencies might be estimated using a cost-based valuation method. If the cost-based method is implemented, holdings of cryptocurrencies will be recorded in financial statements at historical cost, which provides investors with no relevant information about their current market value. When an active market exists, the cost-based method will not reflect the real profit or loss from holding the cryptocurrency, and the revaluation method will need to change (Hait et al. 2019).
The Mexican Financial Reporting Standards Board (CINIF) stated that under IAS 38 a cryptocurrency might be revalued at either its purchase cost or its fair value, but the purchase cost does not reflect the economic value of a cryptocurrency. If the revaluation method is applied to cryptocurrencies, they should be revalued at fair value, and the result of the revaluation should be recognized as integrated income. Because the purpose of holding cryptocurrencies is speculative in nature and the revaluation horizon is short term, it might be inappropriate to revalue cryptocurrencies at historical cost; the best revaluation method may be to measure the profit or loss using fair value (Cervantes 2019).
The Canadian Accounting Standards Board (AcSB) noted that IAS 38 was introduced long before cryptocurrencies were created. When the paragraphs of IAS 38 were written, the nature of cryptocurrencies was never considered. When IAS 38 is applied to assess whether holdings of cryptocurrencies are intangible assets, the measurement result will be inappropriate and a fair value will not be achieved (Mezon 2019).
The Canadian Securities Administrators Chief Accountants Committee (CSACAC) explained that intangible assets, as defined in paragraph 9 of IAS 38, are generally held to help an entity operate its business. Cryptocurrencies, however, are generally held for investment, mostly to produce future profits from sale. Moreover, the prices of cryptocurrencies are volatile in the market, and cryptocurrencies are often held for speculative purposes in the short term when they are used for the exchange of goods or services. Based on this analysis, it is inappropriate to treat holdings of cryptocurrencies as intangible assets (Hait et al. 2019).
The Chamber of Digital Commerce stated that, in general, applying the IAS 38 Intangible Assets accounting standard to holdings of cryptocurrencies is not appropriate, because the purpose of holding should be considered when assessing which IFRS accounting standard applies (Boring 2019). Usually, the purposes of cryptocurrency holding differ greatly depending on whether the holders are broker-traders or cryptocurrency miners. While a cryptocurrency miner may take out a loan to mine crypto assets and repay it by selling crypto assets, another company's treasury department may hold cryptocurrencies with a long-term investment objective and sell the remainder to meet short-term liquidity requirements (Boring 2019). Because the purposes differ, the accounting standard IAS 38 may not reflect them appropriately.
Are Cryptocurrencies Inventories, as Defined in IAS 2?
The IFRSIC (2019a) concluded that if cryptocurrencies are held for sale during the ordinary course of business, it is appropriate to apply the IAS 2 Inventories accounting standard. If an entity holds cryptocurrencies for sale during the ordinary course of business, the held cryptocurrencies will be the same as inventories defined in the IAS 2. Conversely, if an entity holds cryptocurrencies that are not for sale in the ordinary course of business, the held cryptocurrencies can be considered intangible assets as described in IAS 38.
The IFRSIC (2019a) also concluded that when an entity acts as a broker-trader of cryptocurrencies and considers cryptocurrencies to be inventory assets similar to the commodities described for broker-traders, it is appropriate to apply paragraph 3(b) of IAS 2. In these circumstances, it is better to measure such inventories at fair value less costs to sell, with changes recognized in profit or loss.
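A minimal sketch of that measurement basis (all figures are illustrative, not drawn from any filing): the carrying amount of a broker-trader's cryptocurrency inventory is the quoted price less estimated costs to sell, per unit held.

```python
# Hedged sketch of the broker-trader measurement basis under IAS 2
# paragraph 3(b): inventories measured at fair value less costs to sell,
# with changes recognized in profit or loss. All figures are illustrative.

def fair_value_less_costs_to_sell(units, quoted_price, cost_to_sell_per_unit):
    """Carrying amount = units x (quoted price - estimated selling cost)."""
    return units * (quoted_price - cost_to_sell_per_unit)

# e.g. 50 units quoted at $2,000 each, with $20 per unit of exchange fees
carrying_amount = fair_value_less_costs_to_sell(50, 2_000, 20)
print(carrying_amount)  # 99000
```

Because the carrying amount moves with the quoted price each period, the broker-trader's profit or loss captures market movements directly, unlike a historical-cost measurement.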
Many accounting bodies have agreed to accept the tentative agenda decision of the IFRSIC (2019a) by applying IAS 2 to holdings of cryptocurrencies so that they are considered inventories.
The Digital Assets Accounting Consortium (DAAC) conducted an industry survey from February to April 2019 and found that 39% of respondents who had crypto assets considered their cryptocurrencies to be inventories (Boring 2019).
The Accounting Standards Board of Japan (ASBJ) stated that because cryptocurrencies do not have any inherent value and their value usually comes from market exchange, the only way for an entity to generate cash flow from its holdings of cryptocurrencies is to sell them on the cryptocurrency market. In this situation, it is appropriate to apply IAS 2 (Kogasaka 2019).
Hardidge (2019) suggested that cryptocurrencies held by broker-traders can be seen as inventories. Under IAS 2, a broker-trader holds cryptocurrencies for the purpose of generating a benefit by selling cryptocurrencies at high prices and buying them at low prices. The price fluctuations are the first consideration for holdings of cryptocurrencies.
Generally, a broker-trader holds cryptocurrencies in an ordinary business to sell them to customers. In some cases, these cryptocurrencies are bought from customers to satisfy the dealers' sell orders. In other cases, cryptocurrencies are bought from miners to meet customers' buy orders. In both cases, the cryptocurrencies are bought and sold to generate marginal profits. The characteristics of cryptocurrency holding in this case fits the definition of the IAS 2 inventory; accordingly, these digital assets are held as part of the inventory on the entities' distributed ledger account (Boring 2019).
Some accounting bodies have noted that it is inappropriate to accept the tentative agenda decision of the IFRSIC (2019a) by applying IAS 2 to the holdings of cryptocurrencies as part of the inventory.
The Taiwan Accounting Research and Development Foundation (ARDF Taiwan) stated that the application of IAS 2 to holdings of cryptocurrencies may not provide relevant information to investors: when cryptocurrencies are treated as inventory, their value may be estimated at purchase cost, and valuation at historical cost will not exactly reflect their market economic value (Liu 2019). Unlike cryptocurrencies held by broker-traders, cryptocurrencies held by miners cannot be seen as inventories. When miners receive a cryptocurrency reward from mining, that cryptocurrency cannot be treated as inventory under IAS 2 (Hardidge 2019).
Are Cryptocurrencies Cash, as Defined in IAS 32?
The IFRSIC (2019a) concluded that cryptocurrencies are not cash, because the nature of cryptocurrencies is not currently the same as the nature of cash.
Some accounting bodies prefer to accept that cryptocurrencies are not cash, as concluded by IFRSIC.
Although a cryptocurrency can be used as a medium for exchanging particular goods and services, it is not widely accepted as cash, as defined in the accounting standard IAS 32 Financial Instruments: Presentation. Additionally, although a cryptocurrency can be used as a monetary unit to price goods or services, it is also not widely accepted as cash in terms of the measurement and recording of transactions in financial statements, as defined in IAS 32.
Why are cryptocurrencies not accepted as cash, as defined in IAS 32? Many people are concerned that the prices of cryptocurrencies are highly volatile. Because the prices of cryptocurrencies undergo large fluctuations, they cannot be accepted as a medium or a monetary unit to measure the prices of other goods and services on the market. Instead, the prices of cryptocurrencies have to be measured in other fiat currencies (Blockchain 2020). On this basis, cryptocurrencies' functions are not the same as those of fiat currencies, because fiat currencies are usually used to measure the prices of goods and services. Moreover, considering their volatile pricing, cryptocurrencies perform poorly as a store of value (Silvia 2019).
Some other accounting bodies prefer to accept that cryptocurrencies can be considered cash, as defined in IAS 32.
In the first quarter of 2021, Tesla (2021) purchased an aggregate of USD 1.50 billion of Bitcoin. The cryptocurrency was first accepted as a type of payment for sales and was considered non-cash in accordance with the non-cash consideration guidance in the US Accounting Standards Codification (ASC) 606, and an intangible asset as defined in ASC 805. Later, in 2022, Tesla (2022) reassessed the USD 1.50 billion of Bitcoin purchased in 2021 and reclassified it as an investment and a liquid alternative to cash in the long term.
The Securities and Exchange Commission of Brazil (CVM) stated that cryptocurrencies are not currently accepted as currency because cryptocurrencies were not entirely considered when the AG3 of IAS 32 was created; however, in some transactions, cryptocurrencies have to be considered cash, because, in fact, cryptocurrencies have been implemented as a medium of exchange and used as monetary units for transactions in some markets (Ferreira and Silva 2019).
The Mexican Financial Reporting Standards Board (CINIF) stated that, in general terms, it is inappropriate for cryptocurrencies to receive accounting recognition according to IAS 2 Inventories or IAS 38 Intangible. Conversely, it is appropriate to define a cryptocurrency as cash, because a cryptocurrency is a digital record that is based on encrypted algorithms and used as a form of payment, and its transfer can only be carried out via electronic means (Cervantes 2019).
Bitcoin, Ethereum, and other cryptocurrencies, although not widely accepted as electronic cash, are accepted by many commercial entities as a payment tool and used to pay for exchanges worldwide. More and more people are preferring to use cryptocurrencies to exchange goods and services, and this trend will continue to accelerate in the future (Rowland 2019). As the most popular cryptocurrency today, Bitcoin was created in a peer-to-peer network as an electronic form of cash, which permits payments to be directly transferred from one party to another through the blockchain network. It is targeted to act as a medium of exchange and monetary unit for pricing goods and services and is defined as having the basic function of cash according to IAS 32.
The International Air Transport Association's (IATA) Industry Accounting Working Group (IAWG) suggested that cryptocurrencies should be treated as cash. Generally, cash has three basic functions: a medium of exchange, a monetary unit for pricing goods and services, and a store of value. Although the exchange-medium function is an essential element for an asset that acts as cash, it is not essential for such an asset to have the other two functions. Many sovereign currencies are reported as cash even though they cannot be converted into other fiat currencies and are not widely used as a medium of exchange in the international market. Functional currencies and foreign notes and coins held by an entity are generally reported as cash in accounting statements. Thus, if cryptocurrencies are widely used as a medium of exchange in the market by entities, they should be treated as cash (Nevo and Cahalan 2019).
The Taiwan Accounting Research and Development Foundation (ARDF Taiwan) stated that the fundamental function of a cryptocurrency is to act as a medium of exchange, usually for the purpose of exchanging goods, services, or fiat currencies. Although cryptocurrencies do not have any inherent or intrinsic value, entities that hold cryptocurrencies can receive market benefits from their subsequent exchange or sale. This is quite different to the description of intangible assets by IAS 38 or inventories by IAS 2 (Liu 2019).
Different to some accounting bodies that directly agree or disagree that cryptocurrencies are not cash, as concluded by the IFRSIC, some accounting bodies are focused on future trends.
Deloitte agrees that although the conclusion that cryptocurrencies are not cash is accepted now, this will not be the case in the future. While existent accounting standards, such as IAS 38, have been used to assess whether cryptocurrencies act as cash now, this conclusion will be reassessed if the accounting standards catch up with the development of cryptocurrencies in the future. Accordingly, in the future, it will be essential to develop a more robust definition for the accounting standards of cash (Poole 2019).
The Fintech company Brane stated that it is necessary to review and develop the definition of cash in the IFRS standards. There are five ways to assess whether an asset is a financial asset under IAS 32, but only paragraph AG3 of IAS 32 defines the function of cash, as being a medium of exchange. This is an incomplete definition of cash, because it does not sufficiently explain how widespread the exchange-medium function must be when assessing whether a given asset can be considered cash (Rowland 2019).
Are Cryptocurrencies a Financial Instrument, as Defined in IAS 32?
The IFRSIC (2019a) noted that cryptocurrencies are not monetary items and do not give the holder legal rights as monetary items usually do. Generally, a monetary item can give a holder a contractual right to get a fixed number of units of currency. Based on the tentative agenda decision on holdings of cryptocurrencies of the IFRSIC (2019a), a cryptocurrency is not a financial equity instrument because it cannot give the holder a legal contractual right to receive a fixed interest.
Some accounting bodies accept the conclusion of the IFRSIC (2019a) that a cryptocurrency is not a monetary item. Rowland (2019) noted that a smart contract is embedded on the blockchain network for cryptocurrencies. When assessing whether a cryptocurrency is a financial asset, it is very important to consider whether the contractual rights and obligations utilize a consensus protocol coordinated between the holder and the blockchain network of cryptocurrencies.
A financial instrument is defined in IAS 32 as a contract that can give a holder the right to receive a fixed benefit. Silvia (2019) stated that a cryptocurrency is not a financial instrument because the holders of cryptocurrencies generally do not have any legal contractual right to receive cash or another financial asset as occurs with a traditional financial instrument.
Some accounting bodies do not accept the conclusion of the IFRSIC (2019a) that a cryptocurrency is not a financial instrument, as defined in IAS 32, based on the issuance of cryptocurrencies, because cryptocurrencies do involve a contract between the holder and the blockchain network.
Actually, it is not true that the holder of a cryptocurrency does not have any contract. The truth is that the holder of a cryptocurrency has an electronic contract with the blockchain system through a distributed ledger. The difference between a cryptocurrency and a traditional financial instrument, as defined in IAS 32, is that the contract of the cryptocurrency holder does not contain a legal contractual right to receive a fixed unit of money. This means that the electronic contract does not have any guarantee from the jurisdictional authority.
The Fintech company Brane noted that it is necessary for the IFRSIC to consider the technical attributes of consensus protocols such as proof-of-stake (PoS) in the blockchain network when assessing whether a cryptocurrency can be considered a financial asset (Rowland 2019). It is not correct for the IFRSIC to state that the holder of a cryptocurrency does not have any contractual right to receive a number of units of money (Rowland 2019). For example, the Bitcoin blockchain network operates under a proof-of-work (PoW) consensus protocol, and the Ethereum blockchain network operates under a proof-of-stake (PoS) consensus protocol. Under both networks, all participants are required to agree to the consensus protocol and its related rules when they intend to hold Bitcoin or Ethereum and conduct business on the network. If an entity decides to participate in the Bitcoin or Ethereum blockchain network, it accepts the network's consensus protocol and receives financial rewards from its holdings of Bitcoin or Ethereum. Once an entity enters the network, the PoW or PoS consensus protocol becomes an obligation that the entity has to obey. The consensus protocols and their related rules are usually embedded in a smart contract, and as soon as the entity accesses the network, the smart contract between the entity and the network is automatically signed. Consequently, every participant's activities, responsibilities, and obligations on the network are regulated by the smart contract. The PoW and PoS consensus protocols are applied by more than 80% of all cryptocurrencies (Rowland 2019). It is significant that such smart contracts can be accepted as general contracts, as defined in IAS 32 for financial instruments.
Some accounting bodies suggested that different methods should be used to assess whether a cryptocurrency can be seen as an intangible asset, cash, inventory, or financial instrument in different situations.
The Chamber of Digital Commerce stated that it is more appropriate to use different accounting standards to assess the characteristics of cryptocurrencies based on the purpose of their holding. When the intent is to resell the cryptocurrency, it is appropriate to apply the inventory accounting standard, IAS 2; however, when the intent is to use a cryptocurrency as a financial instrument, then it is appropriate to apply both IAS 32 and IAS 39 (Boring 2019).
How to Disclose Holdings of Cryptocurrencies in Accounting?
The IFRSIC (2019a) concluded that an entity may apply the disclosure requirements of the IFRS standards to determine the amounts to be recorded in financial statements in three ways. If an entity holds cryptocurrencies for sale in the ordinary course of business, it is appropriate to apply paragraphs 36-39 of IAS 2 Inventories to determine the amount to be disclosed in the financial statements. Otherwise, it is appropriate to apply paragraphs 118-128 of IAS 38 Intangible Assets. In both cases, because the cost-based method can only measure the historical value of holdings of cryptocurrencies and cannot provide relevant information for making investment decisions, it is essential to apply paragraphs 91-99 of IFRS 13 Fair Value Measurement to disclose the value of holdings of cryptocurrencies.
The IFRSIC (2019a) noted that when an entity applies paragraph 122 of IAS 1 Presentation of Financial Statements to holdings of cryptocurrencies in accounting, it is necessary to disclose judgements that significantly affect the amounts confirmed in the financial statements.
The IFRSIC (2019a) also noted that if an entity applies paragraph 21 of IAS 10 Events after the Reporting Period to holdings of cryptocurrencies, it is necessary to disclose any relevant non-adjusting events, including the nature of the event and an estimate of its financial effect. For example, if an entity holds cryptocurrencies and intends to sell them for liquidity, then because the disclosed events in the financial statements may influence the decisions of investors, under the IFRS 13 Fair Value Measurement requirement it is necessary to disclose significant changes in fair value that have occurred after the reporting period.
Many accounting bodies agree that the fair value is a good way to measure value when an entity holds cryptocurrencies.
Because a cryptocurrency is usually used as a payment tool or stored for sale, the fair value is the best way to reflect the economic value of holdings of cryptocurrencies (Cervantes 2019).
The Accounting Standards Board of Japan (ASBJ) stated that the best way of revaluating the holdings of cryptocurrencies is to use their fair value through profit or loss (FVTPL), because the FVTPL provides the most relevant information to investors in financial statements (Kogasaka 2019).
Grant Thornton International Ltd. noted that if an entity is not a broker-trader and its holdings of cryptocurrencies are not for sale in the short run, then because the accounting standard specified by the IFRSIC cannot sufficiently reveal the performance of its businesses, FVTPL is the best choice for providing relevant information to investors (Haygarth 2019).
The Mexican Financial Reporting Standards Board (CINIF) stated that the historical cost and net realizable value may not reflect the actual value of holdings of cryptocurrencies, because only the FVTPL can reflect the market value of holdings of cryptocurrencies (Cervantes 2019).
When Tesla (2021) revalued the aggregate USD 1.50 billion of Bitcoin purchased in 2021, the sales revenue from contracts with customers was recorded at fair value based on current quoted market prices.
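The FVTPL remeasurement discussed above can be sketched as follows (the function name and figures are illustrative assumptions, not Tesla's actual numbers): at each reporting date the holding is restated to the quoted market price, and the change is recognized in profit or loss.

```python
# Hedged sketch of fair value through profit or loss (FVTPL) remeasurement:
# at each reporting date the holding is restated to the quoted market price
# and the change goes to profit or loss. Figures are illustrative only.

def fvtpl_remeasure(units, carrying_price, quoted_price):
    """Return (new carrying amount, gain or loss recognized in P&L)."""
    new_carrying = units * quoted_price
    gain_or_loss = units * (quoted_price - carrying_price)
    return new_carrying, gain_or_loss

# 100 units carried at $30,000 each, quoted at $36,500 at the reporting date
carrying, pnl = fvtpl_remeasure(100, 30_000, 36_500)
print(carrying, pnl)  # 3650000 650000
```

This is the sense in which many accounting bodies argue that FVTPL provides the most relevant information: both upward and downward price movements flow through profit or loss in the period they occur.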
The Canadian Securities Administrators Chief Accountants Committee (CSACAC) conducted a survey of 41 Canadian entities with cryptocurrency holdings and/or undergoing related activities and summarized that although there are different accounting practices that can be applied to holdings of cryptocurrencies, most respondents (76%) stated that they prefer to disclose the values of cryptocurrencies in financial statements using fair value through profit and loss (Hait et al. 2019).
In most cases, if an entity holding cryptocurrencies is considered to be a commodity broker-trader, as defined in IAS 2, the alternative accounting standard that can be used is paragraph 11 of IAS 8, which may provide a framework for assessing the concepts of assets, liabilities, income, and expenses (Hait et al. 2019).
Similarly, the Digital Assets Accounting Consortium (DAAC) conducted an industry survey from February to April 2019 and found that 75% of respondents holding cryptocurrencies treated changes in events related to cryptocurrencies at fair value when revaluing their earnings or liquidity from holdings of cryptocurrencies (Boring 2019).
What Proposals Were Put Forward by Accounting Bodies?
Some accounting bodies suggested that it is essential to change the definition of cryptocurrency from that proposed by the IFRSIC (2019a).
As a Fintech company, Brane stated that the birth of cryptocurrencies was a shock to the traditional financial market. Cryptocurrencies emerged long after the formation of IAS 38, and while blockchain distributed ledger technology and encryption algorithms are rapidly evolving in the accounting area, IAS 38 provides only a very limited solution for estimating the fair value of cryptocurrency holdings. Given the nature of cryptocurrencies, applying IAS 38 to holdings of cryptocurrencies cannot provide a complete solution and therefore does not appropriately represent the nature of crypto assets when accounting for them in financial statements (Rowland 2019).
The International Air Transport Association's (IATA) Industry Accounting Working Group (IAWG) questioned the definition presented by the IFRSIC. Although cryptocurrencies are considered by the IFRSIC as not being issued by a jurisdictional authority, the IAWG argued that this should not be decisive: even though cryptocurrencies are not issued by a jurisdictional authority at the moment, they can be converted into a fiat currency and used as a medium of payment clearing. This problem is easy to solve if a contract between the holder of cryptocurrencies and the clearing parties is created through the blockchain network. For this reason, the IAWG suggested that the definition of cryptocurrency should be changed (Nevo and Cahalan 2019).
Some accounting bodies suggested that it is essential to revise the current IFRS standards.
The IFRS Technical Committee of Chile (TCC) suggested that although holdings of cryptocurrencies are intangible assets, as defined in IAS 38, because this is an implicit assumption but not an explicit assumption, there is a requirement for the accounting standard of IAS 38 to be updated when explicitly defining holdings of cryptocurrencies as intangible assets (Torres 2019).
The Securities and Exchange Commission of Brazil (CVM) stated that an IFRS standard revision of cryptocurrencies is essential. As a new category of asset, when the majority of IFRS standards were created, no cryptocurrencies had been created. Cryptocurrencies are directly constrained by the scope of the current IFRS standards and explained by a tentative agenda decision by IFRSIC (2019a). If the standards of the IFRS are not revised and updated, some new characteristics of cryptocurrencies will probably be far beyond the scope of the current IFRS standards, meaning that the current IFRS standards will probably not be able to correctly reflect new financial trends related to cryptocurrencies (Ferreira and Silva 2019).
The Accounting Standards Committee of Germany (ASCG) noted that the outcome of the tentative agenda decision of the IFRSIC (2019a) on holdings of cryptocurrencies has led to inappropriate results under all facts and circumstances. Some cryptocurrencies, such as Bitcoin, are accepted as mediums of exchange, which implies that they have the basic function of cash. Others, such as utility tokens, have limited use within a very specific scope of service and may or may not have the basic function of cash. Still others have no cash functions at all. To record the holdings of all categories of cryptocurrencies appropriately in financial statements, it is essential to revise and update the IFRS standards, including IAS 2 and IAS 38, and to consider all possible scenarios, as the standards make more sense in some scenarios than in others (Barckow 2019).
Some accounting bodies suggested that it is essential to add new projects to the current IFRS standards. Rowland (2019) suggested that the IFRSIC should create a new project and add new paragraphs to the current IFRS standards for holdings of cryptocurrency, because the application of IAS 38 already lags behind the application of blockchain technology. Applying IAS 38 on the basis of the tentative agenda decisions of the IFRSIC will only be a temporary measure until the IFRS standards catch up with today's technologies. If the IFRS fails to provide more appropriate guidance on holdings of cryptocurrencies and no new IFRS standard is added to fit them, the IFRS will not be considered just or fair by the profession. Consequently, the presentation of cryptocurrencies in financial statements will fall further behind and suffer from a lack of appropriate IFRS standards.
The Mexican Financial Reporting Standards Board (CINIF) suggested that it is necessary to issue a new standard for cryptocurrencies. Cryptocurrencies are a new kind of asset, completely different from the traditional assets covered by the existing IFRS accounting standards, including IAS 2 and IAS 38. When those standards were issued, cryptocurrencies had not yet been created. Because cryptocurrencies emerged much later, the existing IFRS standards do not reflect the fair value of cryptocurrencies presented in financial statements. Accordingly, it is necessary to develop the IFRS standards and issue new specific paragraphs for cryptocurrencies (Cervantes 2019).
Survey Results from Xiamen City, China
In order to provide meaningful feedback on how to deal with holdings of cryptocurrencies in accounting, we conducted a survey of the industry in Xiamen City, China, from April to September 2020.
First, basic information of the respondents was collected.
During the survey, a total of 1013 valid questionnaires were collected. About 60% of the respondents were male, and the other 40% of the respondents were female (Table A1).
The 1013 respondents were distributed across six different areas (Figure 1): 13% were from government agencies and affiliated institutions, 19% were from state-owned companies, 19% were from foreign-owned companies, 18% were from China and foreign joint companies, 22% were from private companies, and 9% were from other organizations (Table A2).
About 40% of the respondents were from financial-related industries, and the other 60% were from other industries (Figure 2, Table A3). A total of 67% of the respondents held a bachelor's degree or higher, while the other 33% held a lower degree (Figure 3, Table A4).
A total of 53% of the respondents had experience with operating financial derivatives, including stocks and options, while 47% did not (Figure 4, Table A5). Only 31% of the respondents had experience with holdings of cryptocurrencies, while the other 69% did not (Figure 5, Table A6). While 100% of the respondents already knew that Bitcoin is the first-ranking cryptocurrency in the world (Table A7), only 38% knew that Ethereum is the second-ranking cryptocurrency in the world (Table A8).
Second, the survey investigated the characteristics of holdings of cryptocurrencies in China.
In answer to the question of how companies currently account for holdings of cryptocurrencies, 45% of the respondents stated that entities carry crypto assets as investments (13% as cash, 20% as foreign currencies, and 12% as other financial instruments), 36% of the respondents stated that entities carry crypto assets as intangible assets (15% as intangible assets except for goodwill, and 21% as goodwill), and 19% of the respondents stated that entities carry crypto assets as inventories (Figures 6 and 7, Table A9). This survey result is similar to that of the DAAC, where 50% of the respondents answered that entities carry crypto assets as investments, 39% as inventories, and 19% as intangible assets (Boring 2019). However, this result differs from the tentative agenda decision of the IFRSIC, because the conclusion of the IFRSIC does not include investments.
In answer to the question of how to disclose the value of the crypto assets held by entities in accounting, 84% of the respondents stated that entities holding cryptocurrencies should revaluate them at fair value through profit and loss (FVTPL) (Figure 8, Table A10). The survey also revealed that this 84% comprises four different types of fair value: 30% current market exchange price, 11% selling price, 26% revaluation price, and 17% other weighted price (Figure 9, Table A10). The percentage of respondents stating that entities holding cryptocurrencies should revaluate them at FVTPL is much higher than the percentages obtained by the Canadian Securities Administrators Chief Accountants Committee (76%) (Hait et al. 2019) and the DAAC (75%) (Boring 2019).
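The FVTPL treatment favored by most respondents amounts to simple arithmetic: the holding is restated to its reporting-date fair value, and the change from the carrying amount is recognized in profit or loss. The sketch below illustrates this mechanic; the function name, units, and prices are hypothetical illustrations, not survey data or an official IFRS formula:

```python
# Minimal sketch of fair value through profit and loss (FVTPL) remeasurement
# for a cryptocurrency holding. All figures are hypothetical.

def fvtpl_remeasurement(units, carrying_price, market_price):
    """Restate a holding to fair value; the change goes to profit or loss."""
    carrying_amount = units * carrying_price
    fair_value = units * market_price
    return {
        "carrying_amount": carrying_amount,
        "fair_value": fair_value,
        # Positive = gain recognized in profit or loss; negative = loss.
        "gain_or_loss": fair_value - carrying_amount,
    }

# Example: 2 units carried at 30,000 each, remeasured at a
# reporting-date market price of 35,000 each.
result = fvtpl_remeasurement(2, 30_000, 35_000)
print(result["gain_or_loss"])  # → 10000
```

Under a cost-based intangible-asset model (IAS 38 without revaluation), such an unrealized gain would not be recognized, which is one reason most respondents considered the current standards inappropriate for cryptocurrencies.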
In answer to the question of whether cryptocurrencies are considered cash (currencies), 65% of the respondents stated that cryptocurrencies are cash, whereas 35% of the respondents stated that cryptocurrencies are not cash (Figure 10, Table A11). This result is different to that of the tentative agenda decision of the IFRSIC, because the conclusion of the IFRSIC was that cryptocurrencies are not cash.
In answer to the question of which functions of a currency cryptocurrencies have (if the respondent considered cryptocurrencies to be currencies), 56% of the 1013 respondents stated that cryptocurrencies can be used as a medium of exchange (Figure 11), 52% stated that cryptocurrencies can be used as a monetary unit for pricing goods or services (Figure 12), 36% stated that cryptocurrencies can be used to store currency value, and 18% stated that cryptocurrencies can be used as world currencies (Table A12).
In answer to the question of whether the current accounting standards of the IFRS are appropriate for entities' holdings of cryptocurrencies, 74% of the respondents answered no, and only 26% answered yes (Figure 13, Table A13).
In answer to the question of whether it is essential to make additions to the current standards of the IFRS for holdings of cryptocurrencies, 64% of the respondents answered yes, and 36% answered no (Figure 14, Table A14).
J. Risk Financial Manag. 2022, 15
In answer to the question of whether the distribution ledger recording the trading of cryptocurrencies has greater advantages than the central ledger recording the trading of traditional currencies, 61% of the respondents answered yes, and 39% of the respondents answered no (Table A15).
In answer to the question of whether the distribution ledger in the blockchain will become a trend that substitutes the central ledger in the future, 54% of the respondents answered yes, and 46% of the respondents answered no (Table A16).
In answer to the question of whether the respondent would accept the use of cryptocurrencies by their partners when doing business with them, 58% of the respondents answered yes, while 42% of the respondents answered no (Table A17).
Third, the survey asked questions about the future trends for holdings of cryptocurrencies in China.
In answer to the question of whether the respondent considered exchange platforms of cryptocurrencies on the Internet to be legal in China, 37% answered yes and 63% answered no (Table A18).
In answer to the question of whether the respondent considered that the trading of cryptocurrencies, including Bitcoin, should be legally permitted in China, 52% answered yes and 48% answered no (Table A19).
In answer to the question of whether the respondent thought holdings of cryptocurrencies would become legal in China in the future, 60% answered yes and 40% answered no (Table A20).
In answer to the question of whether the respondent considered it essential for legal exchange platforms to be set up for cryptocurrencies and the management of these platforms to be enhanced in China, 62% answered yes and 38% answered no (Figure 15, Table A21).
Fourth, the survey asked questions about future trends for holdings of cryptocurrencies in China's Xiamen pilot FTZs.
In answer to the question of whether the respondent thought that the legal trade of cryptocurrencies should first be operated and tested in China's pilot FTZs, 61% answered yes and 39% answered no (Figure 16, Table A22). In answer to the question about the areas in which the legal trade of cryptocurrencies should first be operated and tested in China's Xiamen pilot FTZs, 59% agreed to encourage the setup of platforms for the trading of cryptocurrencies, 55% suggested that the government should give permission to entities to hold and use cryptocurrencies, 51% stated that cryptocurrencies should become legal, and 50% stated that cryptocurrencies should be freely traded (Table A23).
In answer to the question of which industry should be selected as the first to use cryptocurrencies in the pilot FTZs, 58% of the respondents chose the financial industry (Table A24).
Analysis of the Survey Results and Policy Suggestions
When considering all respondents, this survey tended to collect questionnaires from people with higher educational degrees who had already learned about cryptocurrencies, particularly the first-ranking cryptocurrency, Bitcoin. This survey also tended to collect questionnaires from people with work experience in government agencies, state-owned companies, financial organizations, and Internet-related companies.
Because the early adopters and holders of cryptocurrencies preferred to access them via the Internet, and because the questionnaire focused on the concepts of currency and accounting, it was reasonable to target the questionnaire at people with Internet experience and higher educational degrees in computing, banking, accounting, and other majors.
Regarding the total number of questionnaires received and the industries of the respondents, we consider the survey results to be representative in terms of both quantity and quality. Because the respondents were from many different industries, held higher educational degrees, and had a good understanding of cryptocurrencies, including Bitcoin, the survey results give a good idea of the real situation of cryptocurrencies in China.
The survey results fit with the enthusiasm of private companies and individuals holding cryptocurrencies in China. Most respondents responded positively when asked about the developing trend of cryptocurrencies and supported the legal operation and testing of holdings in China's Xiamen pilot free trade zone.
Nearly one-third of the respondents had experience with holdings of cryptocurrencies, despite there being no legal exchange market or policy support in China.
Most of the respondents stated that they define holdings of cryptocurrencies as investments, inventories, or intangible assets and believe that the value of a cryptocurrency holding is best represented by its market fair value. Most of the respondents consider cryptocurrencies to be currencies with two main functions: a medium of exchange and a monetary unit for pricing goods and services. Most of the respondents confirmed that the current IFRS standards do not satisfy the accounting requirements for holdings of cryptocurrencies; thus, it is necessary to make additions to the current IFRS standards so that they are appropriate for holdings of cryptocurrencies. Most respondents stated that the distributed, decentralized ledger based on blockchain technology, which records the transactions of cryptocurrencies, has more advantages than the centralized ledger that records the transactions of traditional currencies, and they believe that, in the future, the distributed ledger will replace the centralized ledger.
In answer to the question of which industry should be selected as the first to use cryptocurrencies in the pilot FTZs, 58% of the respondents chose the financial industry (Table A24).
Analysis of the Survey Results and Policy Suggestions
When considering all respondents, this survey tended to collect questionnaires from people with higher educational degrees who had already learned about cryptocurrencies, particularly about the first-ranking cryptocurrency, Bitcoin. This survey also tended to collect questionnaires from people who had work experience in government agencies, state-owned companies, financial organizations, and Internet-related companies.
Because the first contractors and holders of cryptocurrencies preferred to access them via the Internet, and because the questionnaire was focused on the concepts of currency and accounting, it was reasonable to target the questionnaire to people with Internet experience and higher educational degrees in computing, banking, accounting, and some other majors.
Regarding the total number of questionnaires received and the industries of the respondents, we consider the survey results to be representative in terms of both quantity and quality. Because the respondents were from many different industries, held higher educational degrees, and had a good understanding of cryptocurrencies, including Bitcoin, the survey results give a good idea of the real situation of cryptocurrencies in China.
Although most of the respondents already knew that the trade of cryptocurrencies in China is illegal and strictly prohibited by the Chinese government, they still stated that, in the future, the trade of cryptocurrencies is likely to become legal and be permitted by the government; platforms for the exchange of cryptocurrencies will be set up and regulated by the government; and cryptocurrencies will be accepted and used by firms for business.
One-third of the respondents stated that, in Xiamen city, there are a few firms that record cryptocurrencies as assets in financial statements and use them as monetary units in business contracts to price goods and services. Most respondents estimated that, in Xiamen city, there are about 50-100 firms that are focused on doing business related to the development of cryptocurrencies.
Most respondents supported the setting up of exchange platforms for cryptocurrencies in the Xiamen pilot free trade zone, the holding and use of cryptocurrencies by entities, the trading of cryptocurrencies in a legal mode, and the initial operation and testing of legal trading in the financial industry.
From the survey results, we can see that although most respondents presented an optimistic attitude toward the holding, use, and trading of cryptocurrencies, a few respondents presented a negative attitude. This means that before an opening-up policy for cryptocurrencies is developed, namely the operation and test exchange of cryptocurrencies in the Xiamen pilot FTZ, it is necessary to maintain a prudent attitude and conduct a complete analysis of policies, environments, and risks to avoid financial risks from the trading of cryptocurrencies.
Summary
According to the conclusion of the IFRSIC in March 2019, cryptocurrencies can be seen as inventories, as defined in IAS 2, when an entity holds them for sale in the ordinary course of business; otherwise, cryptocurrencies can be seen as intangible assets, as defined in IAS 38. Because the trade of cryptocurrencies is strictly prohibited by the Chinese government, there have been no comments from Mainland China. However, according to our survey, many private companies and individuals are very keen to do business in the area of cryptocurrencies. Generally, under the reform and opening up policy, the Chinese government has preferred to operate and test its new business policies in a special economic zone (SEZ) or a pilot free trade zone (FTZ). Xiamen City became a special economic zone in 1980 and a pilot free trade zone in 2015. Based on support from the Xiamen City Federation of Social Science Associations (XMSK 2020), we conducted a survey in Xiamen City on holdings of cryptocurrencies in China.
The results show that the respondents defined holdings of cryptocurrencies as investments (45%), inventories (36%), or intangible assets (19%) and stated that the value of cryptocurrencies is better represented by fair value (Tables A9 and A10). This result is similar to that of DAAC (Boring 2019) but differs from the tentative agenda decision of the IFRSIC, as the conclusion of the IFRSIC does not include investment in its definition. More than half of the respondents stated that cryptocurrencies are currencies with two main functions: a medium of exchange (56%) and a monetary unit for pricing goods and services (52%) (Table A12). This differs from the tentative agenda decision of the IFRSIC, which concluded that cryptocurrencies cannot be considered cash. 74% of the respondents stated that the current IFRS standards do not satisfy the accounting requirements of cryptocurrency holdings (Table A13); thus, it is necessary to revisit the current IFRS standards for holdings of cryptocurrencies. 62% of the respondents stated that they support the setting up of legal exchange platforms for cryptocurrencies in the Xiamen pilot free trade zone in China (Table A21). Our suggestion is to initially support the operation and test exchange of cryptocurrencies in the Xiamen pilot FTZ but to be careful to avoid financial risks associated with the trading of cryptocurrencies.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Statistics of the Valid Questionnaires on Holdings of Cryptocurrencies in Accounting
Appendix A.1. Statistics from the Questions Related to the Basic Information of the Respondents
Selections, number of respondents, and percentages:

To set up platforms for the trading of cryptocurrencies: 599 (59%)
To allow entities to hold and use cryptocurrencies: 554 (55%)
To support cryptocurrencies becoming legal: 516 (51%)
To support cryptocurrencies being freely traded: 505 (50%)

Meitu Inc. is the biggest company that holds cryptocurrencies in Xiamen city. It is an artificial intelligence (AI) driven technology company with a total of 246 million monthly active users in the fields of computer vision, deep learning, and computer graphics. The company's two main subsidiaries are Xiamen Meitu Networks Technology Co., Ltd. and Xiamen MeituEve Networks Services Co., Ltd. The example of Meitu Inc. shows that the survey of holdings of cryptocurrencies in accounting is very important in Xiamen city, China. The survey results will provide some support to companies when they assess holdings of cryptocurrencies in accounting.
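As a rough consistency check (not part of the original survey), each count/percentage pair above implies a total number of valid questionnaires; the true total is not stated in this excerpt, and the short labels below are paraphrases, not the paper's wording.

```python
# Rough consistency check: each (count, rounded percentage) pair implies a
# total number of valid questionnaires. Labels are paraphrased, and the true
# total of valid questionnaires is not stated in this excerpt.

rows = [
    ("set up trading platforms", 599, 0.59),
    ("allow entities to hold/use cryptocurrencies", 554, 0.55),
    ("support legalization", 516, 0.51),
    ("support free trading", 505, 0.50),
]

for label, count, pct in rows:
    implied_total = count / pct
    print(f"{label}: {count} / {pct:.2f} = {implied_total:.0f} respondents")

# All four rows imply roughly 1,000-1,015 respondents, so the reported
# counts and rounded percentages are mutually consistent.
```

Since respondents could select multiple options, the percentages need not sum to 100%; the check only confirms that each row points to the same underlying sample size.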
DEVELOPMENT OF COMPETENCIES FOR HEALTH PROMOTION AND CHANGE OF THE CARE MODEL
A case study anchored in dialectics, aimed at analyzing the development of competencies for health promotion from the perspective of graduates of a nursing program and its relation with the change of the care model. Competency-focused interviews were held with ten graduates of an undergraduate nursing course. The data were analyzed through critical discourse analysis. The results show that competencies for health promotion, defined in two international consensuses, were worked on throughout the education process. The discourses produced show that this competency development can foster the change of the care model, despite the challenges in this process. A greater paradigm shift is needed, as the ground seems fertile for this process. Curricular strategies of contact with reality, through the students' immersion in the practice setting from the beginning of the course, and extension and research activities can contribute to the development of competencies for health promotion in nursing education and foster the change of the care model.
INTRODUCTION
Its formulation as a dimension of a health policy has been part of ideological discourses since the 1970s, gaining form and expression in 1986 at the I International Conference on Health Promotion in Canada. The health promotion movement is intended to overcome the voids in the biomedical model, articulating the entire society to improve the quality of life of individuals and the group. Nevertheless, overcoming the traditional and hegemonic model represents a challenge in the construction of another health paradigm that takes into account individuals' and groups' daily events in their ways of life and in the determination of the health-disease process. For the health professionals to act effectively in the contemporary context, it is fundamental to define the necessary competencies, considering the complexity of the health policy and its implementation. Among these competencies, the specific competencies to work in health promotion are highlighted, which refer to a combination of knowledge, skills and essential values needed for the effective practice of individual or collective health promotion actions. The remaining competencies, knowledge, values and skills to change the practices and policies in force will come from the continuing education process.
Discussions on nursing education to work in health promotion can be identified in the international literature, 6,[10][11] but gaps remain concerning the competencies developed in this process and the teaching strategies that result in professional standards appropriate to the transformation of health practices.
In that sense, the following questions are raised: how are competencies built for health promotion in nursing education, and what pedagogical strategies function as devices for this education?
Against this background, the objective in this study is to analyze competency building for health promotion from the perspective of graduates from a nursing course and its relation with change in the healthcare model.
METHOD
Interviews were held with a focus on competencies involving ten graduates, out of 57 nurses who graduated from the first two classes of a public higher education institution in Minas Gerais State.
The graduates were contacted through a presentation and invitation by e-mail to participate in the research. The e-mail addresses registered at the student administration of the place of study were used. Ten graduates accepted the invitation. Despite the interest manifested in the research, the remainder declined the invitation because they were working or taking a master's/doctoral program in other cities or states distant from the place of study. The interviews were held between July and November 2014.
To collect the data, interviews were held with a competency or behavioral focus, an organizational psychology tool that has been used in human resource recruiting and selection processes. Through this technique, the interviewer intended to collect examples of situations the interviewee had experienced, trying to discover what he did, felt and thought and what the results of the action were in a certain situation.
The interview was guided by the following questions: tell me about a health promotion practice you developed as a student or professional; report on a moment during your undergraduate course that allowed you to develop health promotion in your work environment; or describe a health promotion practice for which you took the responsibility and how your undergraduate course in nursing prepared you.
All interviews were audio-recorded with the participants' permission, who signed the informed consent form after clarifications on the purpose of the study. To avoid identification, each graduate received a code, consisting of the letter G (graduate) followed by a sequential number.
To score the health promotion competencies, facilitate the consolidation of the large volume of empirical material and manage all data for analysis, the software webQDA*, version 2013, was used to support the qualitative data analysis in a collaborative environment. 16 The researcher transcribed the interviewees' spoken and recorded discourse, maintaining oral registration elements like intonation, emphasis, pause, changes in the voice pitch and rhythm. Therefore, transcription conventions and models were considered, [17][18][19] concerning the guidelines for discourse analysis resources. Some signs, such as /, [...] and words in uppercase, between inverted commas, brackets, squared brackets and underlined words in the study participants' discourse refer to items in the transcription convention and models. These items correspond to interruptions in the discourse flow, pauses, silence, comments, literal citations and emphasis in the voice, which are relevant in the discourse analysis.
To assess the quality of the data, after the interview, the transcriptions were carefully read to verify whether the data in the material were sufficient for the analysis. The fieldwork was closed off when the empirical research context had been designed.
The analysis of the material from the interviews was guided by the critical perspective, in view of the theoretical approach and the method to study the discourse. First, through a horizontal analysis of the narratives in the transcription of the individual interviews, the regularities and singular experiences were identified through the meanings underlying the ideas described in the discourse. Next, through a vertical analysis of the material obtained, the common themes were identified in the collected material which, by establishing mutual relationships, permitted the establishment of the empirical categories. Finally, in an interpretive synthesis, through a crossed analysis, the participants' viewpoints and singular expressions were discussed, confronting them, in a dialectic movement, with the authors' critical interpretation of the analytic categories. 18,20 The research project underlying this study received approval from the Research Ethics Committee COEP/UFMG (Opinion No. 694.248 - CAAE 08863612.0.0000.5149), on 06/24/2014, and all phases of this project comply with Resolution 466/2012/MS on research involving human beings.
The communication of the research results followed the guidelines for qualitative research project results using interviews and focus groups, available in Consolidated criteria for reporting qualitative research (COREQ). 21
RESULTS
Among the ten graduates who were interviewed, eight were working in one or more sectors, including primary care (n=4), hospitals (n=4), emergency services (n=2) and teaching and research projects (n=1), totaling an average of seven months of work. Eight graduates also take part in graduate programs, in residency and master's programs.
The results were organized to demonstrate the findings in two dimensions: Competencies for health promotion and Change in the care model. In the first dimension, the participants revealed their understanding about the core attributes of health promotion competencies. In the second dimension, the practical aspects of these competencies were revealed, as well as the relation with the changes in the care model.
The understanding about the competencies was demonstrated through the knowledge, skills and attitudes that constitute health promotion practices. In that sense, the participants associate the knowledge and skills to be acquired over time. The textual element of temporality, manifested through expressions like "we develop that over time", "are constructed along the way", "since the first period", indicates that competency building is something continuous, whose experience during the undergraduate program, including the early inclusion in professional practice, is but one of the moments.

* WebQDA is software developed by the Centro de Investigação Didática e Tecnologia na Formação de Formadores (CIDTFF) of the Departamento de Educação, Universidade de Aveiro, Portugal, as a tool in the organization and analysis of qualitative data.
[…] [takes a deep breath] Well, one positive point I see in my undergraduate course is that, since the first term, we were taking part in practicums, that's/ training, / so we gained a view of the context, of the reality of health, [...] And the skills I think we develop that over time (G 01).
The discourse indicates that, in competency building for health promotion, a dominant logic prevails in which knowledge is rated higher than skills and attitudes in the discourse.
What We need the knowledge to be able to pass, so we always need to study, gain updated knowledge ABOUT the theme we are aiming for; secondly we need an attitude, a desire to put that in practice, [...] (G 01).
The practitioners of health promotion should also take into account the determinants of health and the reality of the context the people are inserted in. According to the participants, they gained this view for the first time during the undergraduate course. The graduates also mentioned health as an inalienable right of human beings, referring to the duties of the governments in this process.
Eh [...] it's/ it was reality really./The reality of the service.That was very important./It's because [...] in my life I never used to attend the health service, we've always had a health insurance at my home and we never used to go to the emergency service, to the hospital.We never did.So I got to know that reality here [...], mainly in the poorest regions.Then I realized the population's difficulty, the importance of access for this population, of promoting activities, [...] Because it's really promoting, so as not to be a curative model.So, it's perceiving these people's difficulties [...] what is missing in their lives, the lack of information, the lack of education that also makes things difficult, which we notice in low-income people.
That's very important (G 09).
The government's need to promote a health insurance compatible with each region, each culture and population, and to keep in mind that health is a right of all and a duty of the State to provide it, that's why we need to claim high-quality health for all (G 02).
The findings lead to the analysis of the challenges to develop health promotion actions as a social practice, acting on the lacks and threats to change the hegemonic health model. That responsibility is attributed to the State, as the actor responsible for conducting and guaranteeing the right to health in the country.
The graduates appoint the legal support and the defense of health as strategies that permit health promotion actions.The legislation is learned in the course of the education process and stimulates the participation in actions of civil activism.
It's, we / in the 6 th and 7 th term we have / we get knowledge of the laws / of health, [...] and that knowledge makes [(00:06:12)], it shows you the importance of / protesting on behalf of / , of that development of health.
Then, you can protest and develop criticism towards what needs to improve. […] (G 04).
In the same dimension, the ethical values were addressed, referring to the belief in equity and social justice, respecting autonomy and individual and group choice in a participatory and collaborative form of work. In that sense, the textual element of the metaphor was used to discuss how the relationships with people, families and the community should be: the interviewees mentioned that the professionals need skills to adapt their practices to the contexts, taking into account the characteristics of people, families and communities, avoiding generalizations, "bundling".
[...] treat the patient as a whole, respecting is / the patient's decisions too, not wanting, it's / bundling the patient in a single place, in a package.Think that everyone's equal.He needs to respect the differences of each / to try and promote health, because that individual, he too,/ [pause to elaborate the response] he/ because he too is capable of receiving that moment (G 10).
The graduates mentioned that the health promotion actions in partnership with other social sectors positively influence people's lives and the health services. In that sense, the development of partnerships was mentioned as a strategy for competency building with a view to health promotion.
Hum, let me see […] Hum, hum/ no, yes we did / there was a group we worked with from the training program, [...] because we did / a / we collected funding to organize the actions on the square, we had the breastfeeding walk.So there was the partnership among the University, the health department, the municipal government / [(00:07:15)] (G 06).
So, at the service mainly, I've got several ideas [...] starting to develop a group of pregnant women too, to prevent a risky pregnancy, [...]. We are constituting a group to work with the adolescents in the schools too, lectures, activities in school / this partnership is being developed. So I think that everything you can do to reduce the service's demand, for the person to find out why he's got a disease, that's a health promotion activity (G 05).
The planning of health promotion actions was mentioned as a competency domain for health promotion developed during the undergraduate course. The graduates refer to the organization of the activities they accomplished throughout the course and in professional practice, such as actions on the squares, markets, encounters, in hospitals or primary health care services.
But, and / UNDERGRADUATE EDUCATION, why was it important?She has shown that since the first term.So, like, we / knew about the difficulties, [...] we knew that it is difficult, you need to plan, you need to ask the agent / for the agent to go there, call, recall, send the invitation [...] and / I remember that / in the first term we arrive totally immature, [...]You don't have much of that notion that you need to study in advance,/ but something always happened each term to contribute to the promotion of (health)/ for the proposed activity to be better than it was before/.That is beyond doubt (G 08).
In the social mobilization, the graduates discuss how they have stimulated the population to take part in the activities proposed, intended to improve the quality of life, such as walks and gymnastics. The mobilization is considered important to grant visibility to the social demands.
It was there at / in the rural practicum, in (name of a village near the city where the course is located)./I used to stay at the health service and / when I got there we had to do something, some movement for promotion.And / I had the idea of mobilizing the population to have gym class in the morning / (G 10).
DISCUSSION
There is evidence that signals this movement of rupture, represented by the awareness that the current practices, which ignore the context and the subjects, have failed. The implementation of the new care model requires actions that are centered on the relation of trust among the professionals and between them and the people, the family and the community, promoting self-care, respecting people's dignity, sympathizing and supporting the citizens in conscious decision making, in the attempt to guarantee the right to individual responsibility over one's life and health. 22 The valuation of the people, evidenced in the participants' discourse, is important for the new health care model. 24 As evidenced, the new care model under construction tends to approach the recommendations of the World Health Organization and the Unified Health System (SUS), whose focus is centered on the health promotion of individuals, families and communities. This new model seeks to raise the citizens' awareness, trying to stimulate them to practice healthy behaviors, to promote healthy lifestyles and to enable them for shared decision making in the complex situations of daily life, towards the exercise of citizenship and the strengthening of the community. 4,25 These aspects, defined since the Ottawa Charter 4,26 and strengthened in the National Policy for Health Promotion, 25 demand professionals with other competencies than those traditionally used to work in the medical-hegemonic model.
It is important to highlight that health advocacy, social mobilization, leadership and partnerships are competencies present in the discourse and indicate new modes of doing health. The participants acknowledge that the health promotion actions performed in partnership with other health professionals and other social sectors stimulate the leadership of community members, permit greater dissemination of the proposal and positively influence people's lives and the health services.
These competencies are mainly mobilized in health education and in social movements in defense of health. On the other hand, it is acknowledged that the change movement is dialectical and faces challenges in view of the old that insists on staying in place. In that sense, the marks of knowledge and information transmission, which go against the perspective of active subjects, and the actions on the social determinants are also mentioned in the discourse, especially concerning the communication domain and the execution of competencies for health promotion.
Hence, to advance in the construction process of the new care model, the discourse demonstrated that the nurses have mobilized health promotion competencies developed in their education process, in the academy as well as in life. 27 From a temporal perspective, this competency building is ongoing and not limited to the undergraduate course.
In academic life, however, as a privileged space for education, the induction of the paradigmatic change in the care model is built on the approach, as early as possible, with the contexts of the health services' professional practices. 28 To revert the biomedical rationality that rules in schools, some curricular strategies, such as student immersion in practicums with active learning since the start of the course, contribute to the contact with the reality of the population's life and health and its determinants. 28 Living and living together in the service context, gaining know-how in practice, favors reflection and competent action in the search for solutions for professional practice. Competency building is facilitated in this inclusion, as it permits the elaboration of intervention proposals and the development of effective and sustainable actions to promote health and reduce inequities, based on respect for human beings and their diversity. 22,29 Hence, at times of structural change in the health care models, the educative process is considered particularly important for core competency building for the new social practices, driven by other ideologies. In that sense, this study underlines the following thought: 30 to fight dominant policies, one should encourage dialogic democracy to guarantee the freedom of expression and recover the true sense of being citizens.
FINAL CONSIDERATIONS
In this study, the social practice expresses ideas that remit to the overcoming of the hegemonic care model. Nevertheless, the reproductive nature of the practices is also maintained, including the transmission of knowledge and information, presented as challenges for social transformation.
As characteristics of the new model, health promotion is presented as an important practice, learned in an integrated manner in the course of the education process.
Competency building for health promotion is learned in practice, continuously and daily, based on the students' inclusion in the health services, which favors their learning through contact with the population and the development of autonomy and critical sense.
The findings indicate that the curricula's innovative strategies can break with the social matrix of the reproductive, regulatory and conventional discourse and encourage creativity and inventions in nursing education, furthering the competencies of social mobilization, health advocacy and partnerships.These are changes that sustain the rupture and movement of transformation in the health care model.
Nurses with core competencies in health promotion play an important role to guarantee the integrated perspective and effectiveness of the actions in the new model, permitting reflections on the practices and contributing to the ongoing change process.
As study limitations, it can be mentioned that the results described are restricted to the context the study was developed in and the time period considered. The researchers attempted to guarantee the strict execution of the methodological process to minimize the expression of subjectivity implicit in qualitative research. These study results suggest further research in different contexts for the sake of comparisons with other national and international contexts.