Saturation mutagenesis of selected residues of the α-peptide of the lantibiotic lacticin 3147 yields a derivative with enhanced antimicrobial activity
Summary
The lantibiotic lacticin 3147 consists of two ribosomally synthesized and post-translationally modified antimicrobial peptides, Ltnα and Ltnβ, which act synergistically against a wide range of Gram-positive microorganisms. We performed saturation mutagenesis of specific residues of Ltnα to determine their functional importance. The results establish that Ltnα is more tolerant to change than previously suggested by alanine scanning mutagenesis. One substitution, LtnαH23S, was identified which improved the specific activity of lacticin 3147 against one pathogenic strain, Staphylococcus aureus NCDO1499. This represents the first occasion upon which the activity of a two peptide lantibiotic has been enhanced through bioengineering.
Funding Information
Work in the authors' laboratory is supported by the Irish Government under the National Development Plan; by the Irish Research Council for Science Engineering and Technology (IRCSET); by Enterprise Ireland; and by Science Foundation Ireland (SFI), through the Alimentary Pharmabiotic Centre (APC) at University College Cork, Ireland, which is supported by the SFI-funded Centre for Science, Engineering and Technology (SFI-CSET) and provided P.D.C., C.H. and R.P.R. with SFI Principal Investigator funding.
Introduction
Lantibiotics [lanthionine-containing antibiotics (Schnell et al., 1988)] are members of the family of antimicrobial peptides termed bacteriocins (Willey and van der Donk, 2007;Bierbaum and Sahl, 2009). In lantibiotics, dehydroalanine (Dha) and dehydrobutyrine (Dhb) residues are formed through the dehydration of serine and threonine, respectively. The eponymous lanthionine (Lan) and β-methyllanthionine (MeLan) residues are enzymatically introduced when a covalent (thio-ether) bridge forms between a neighbouring cysteine and one of these unsaturated amino acids. These post-translational modifications confer structure and function to the previously inactive precursor peptide. Lantibiotics have been the subject of intensive studies as a result of their broad target range, potent activity and their potential as safe, natural food additives or as chemotherapeutic agents (Cotter et al., 2005b;Galvez et al., 2007;Piper et al., 2009a).
Lacticin 3147 is a lantibiotic produced by the food-grade bacterium Lactococcus lactis subsp. lactis DPC3147. It is active against a variety of clinically significant Gram-positive organisms, including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus strains (VRE) and penicillin-resistant Pneumococcus, in addition to foodborne pathogens such as Listeria monocytogenes and Bacillus cereus (Lawton et al., 2007b;Piper et al., 2009b;Carroll et al., 2010). Lacticin 3147 is a two peptide lantibiotic and thus both peptides, Ltna and Ltnb, are required for full antimicrobial activity (Wiedemann et al., 2006). Lacticin 3147 is active at single nanomolar concentrations through a dual mechanism in which Ltna first interacts with the cell wall precursor lipid II to inhibit peptidoglycan synthesis. It is proposed that the resulting Ltna : lipid II complex then interacts with Ltnb to facilitate pore-formation (Wiedemann et al., 2006). An unusual feature of lacticin 3147 is the presence of three D-alanines (D-Ala) that are enzymatically derived from ribosomally introduced L-serines (Cotter et al., 2005c). Lacticin 3147 is one of only two examples of prokaryotic gene-encoded peptides in which such modified residues have been identified (Skaugen et al., 1994).
Two peptide lantibiotics remain relatively uncommon. To date only seven other lacticin 3147-like antimicrobials have been characterized; staphylococcin C55 (Navaratna et al., 1999), plantaricin W (Holo et al., 2001), Smb (Yonezawa and Kuramitsu, 2005), BHT-A (Hyink et al., 2005), haloduracin (McClerren et al., 2006;Lawton et al., 2007a), lichenicidin (Begley et al., 2009;Dischinger et al., 2009) and a predicted pneumococcin (Majchrzykiewicz et al., 2010). The a peptides of the two peptide lantibiotics cluster with the mersacidin-like peptides (Cotter et al., 2005a), with the exception of the enterococcal cytolysin (Cox et al., 2005). Despite their different overall structures, alignment of the mersacidin-like peptides with those from the more distantly related lacticin 481 subgroup ( Fig. 1) reveals the presence of a conserved stretch of amino acids, (C)TXS/TXD/EC, a motif encompassing a conserved ring that has been suggested to comprise the core site for lipid II binding (Breukink and de Kruijff, 2006;Cotter et al., 2006). Significantly, this region represents the first point of contact between mersacidin and the bacterial cell and is thus believed to contain core amino acids that are essential with regard to the functionality of these peptides (Hsu et al., 2003).
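As a quick illustration of the motif notation, the conserved (C)TXS/TXD/EC stretch (an optional Cys, then Thr, any residue, Ser or Thr, any residue, Asp or Glu, and a closing Cys) can be written as a simple regular expression and used to scan unmodified core-peptide sequences. The R sketch below is purely illustrative: the example sequences are hypothetical stand-ins, not the peptides aligned in Fig. 1.

```r
# Regular-expression form of the putative lipid II-binding motif (C)TXS/TXD/EC:
# optional C, then T, any residue (X), S or T, any residue, D or E, then C.
motif <- "C?T[A-Z][ST][A-Z][DE]C"

# Hypothetical example sequences (NOT the real peptides from Fig. 1).
seqs <- c(candidate1 = "GAWCTLTSECAG",   # contains CTLTSEC, so it matches
          candidate2 = "GAWKTLQAECAG")   # lacks the motif, so it does not

grepl(motif, seqs)                               # TRUE, FALSE
regmatches(seqs, regexpr(motif, seqs))           # show the matched segment, if any
```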
Lantibiotics represent one possible solution to combat the emergence of multi-drug resistant strains (Boucher et al., 2009). The fact that lantibiotics are gene-encoded and ribosomally synthesized has facilitated the development of versatile expression systems capable of producing novel derivatives. This approach has generated information about the structure-function relationships of lantibiotics, as well as allowing one to screen for the enhancement of chemical and antimicrobial properties (Field et al., 2010a). A description of the role of each individual amino acid or domain will be required for a rational approach to design new, improved lantibiotics. To this end, an attempt has been made to predict the identity of essential and variable residues across both peptides of lacticin 3147 using alanine scanning mutagenesis (Cotter et al., 2006). However, the consequences of changing a residue to alanine do not always reflect the overall tolerance/intolerance of a specific residue to change. To address this issue, saturation mutagenesis of the lipid II-binding component of lacticin 3147, i.e. Ltna, was performed. We focused on the residues of Ltna that are not involved in bridge formation and while this approach frequently confirmed the relevance of the alanine scanning approach with regard to the importance of specific amino acids, a limited number of substitutions were found to be tolerated at positions previously designated as being intolerant of change. We also noted that amino acids highly conserved across related lantibiotics are not necessarily inviolate. More significantly, we identified the first derivative of a two peptide lantibiotic with increased potency against a strain of clinical significance.
Fig. 1. Alignment of the unmodified sequence of structural peptides of the mersacidin-like lantibiotic subgroup (Mersacidin Subgroup), which includes single-peptide members and the closely related a peptides of two peptide lantibiotics, and the lacticin 481-like lantibiotic subgroup (Lacticin 481 Subgroup). The sequence of Ltna is italicized. Dark grey boxes indicate residues that are fully conserved across both subgroups, medium grey indicates residues that are highly conserved across both subgroups, while light grey indicates partially conserved residues. The putative lipid II binding motif conserved among the mersacidin and lacticin 481 lantibiotic subgroups (with the exception of (C), which is not found in the lacticin 481 subgroup) is given at the bottom of the alignment.
Results and discussion
Saturation mutagenesis was performed using a PCR-based approach and a two-plasmid expression system was subsequently applied to generate banks of Ltna mutants (Field et al., 2007), with a particular focus on mutants that retained at least some bioactivity against the sensitive indicator strain L. lactis HP (Table 1). As the aim of the current study was to confirm whether individual residues are tolerant or intolerant of change, no attempt was made to distinguish between mutations that impact on production and those that impact on specific activity.
Targeting of 'essential' residues in Ltna for site-saturation mutagenesis
The conversion of a number of Ltna residues to alanine resulted in the abolition of bioactivity (Cotter et al., 2006); as such, these residues were designated as being 'essential' for bioactivity of lacticin 3147. These include residues proposed to be involved in the interaction with Ltnb (F6, S7, W12, N14), putative lipid II binding residues (L21, E24) and two tryptophans (W18 and W28). Despite not being conserved (Fig. 1), the replacement of F6 and W12 with alanine was previously found to eliminate bioactivity (Cotter et al., 2006). Here saturation mutagenesis established that conservative substitutions are tolerated at position 12 (Fig. 2), with bioactivity decreasing relative to the size of the newly incorporated residue (Trp > Tyr > Phe; Table 1). However, a critical role was confirmed for F6 with respect to bioactivity (Cotter et al., 2006) (Table 1). Based on previous observations (Jing et al., 2003;Sanderson and Whelan, 2004), there is a likelihood that aromatic amino acids in membrane-acting peptides such as these are likely to be situated at the lipid-water interface and promote hydrophobic interaction with the cytoplasmic membrane. Thus, replacing the native residue with any amino acid other than another aromatic residue could be expected to have a detrimental effect on antimicrobial activity (Cotter et al., 2006). Position S7 is subject to a two-step post-translational modification to form D-alanine (Cotter et al., 2005c). Despite the natural presence of an alanine at the corresponding location in Blia and the fact that only Ltna, and potentially Saca (Suda et al., 2011), possess a D-alanine at this location ( Fig. 1), an S7A mutant was previously found to be inactive (Cotter et al., 2005c;Cotter et al., 2006). Here we confirm the negative consequences of such a S7A change and note that many other substitutions also result in the elimination of bioactivity. However, in line with previous investigations, S7T (dehydrated to Dhb, data not shown) and S7G substitutions (Cotter et al., 2005c) were both found to result in active mutants (Table 1), confirming a limited tolerance to change (Fig. 2).
The previously generated N14A mutant lacked bioactivity (Cotter et al., 2006), in accordance with the complete conservation of N14 among lantibiotic a peptides (Fig. 1). This suggested a pivotal role for this residue in the synergistic interaction between both component peptides (Cotter et al., 2006). However, it is now apparent that some substitutions are tolerated to some degree, including replacements with positively charged residues arginine or lysine (Table 1).
Despite the variability of L21 across the mersacidin and lacticin 481 subgroups and the presence of alanine at the corresponding location in the closely related C55a (Fig. 1), a LtnaL21A mutant was previously found to be inactive (Cotter et al., 2006;O'Connor et al., 2007). Thus, this position was previously designated as being essential. However, site-saturation of L21 found that two conservative substitutions are tolerated, L21M and L21V, and perhaps surprisingly, an L21Y mutant retained a small amount of bioactivity (Table 1). The retention of bioactivity by the L21V mutant is notable in that it renders Ltna more similar to other a peptides (SmbB, BHT-Aa and Blia; Fig. 1). The results of saturation mutagenesis at positions N14 and L21 highlight the risk of relying solely on alanine scanning as an indication of tolerance to change (Fig. 2). E24 is highly conserved among the mersacidin and lacticin 481 subgroups (Fig. 1) and LtnaE24A showed no bioactivity (Cotter et al., 2006). When the corresponding residues in mersacidin (Szekat et al., 2003), actagardine and Hala of haloduracin (Cooper et al., 2008) were substituted with Ala (and also with Gln in the case of HalaE22), all antimicrobial activity was also lost. HalaE22Q was also deficient with regard to the inhibition of peptidoglycan formation by the enzyme PBP1b, which uses lipid II as a substrate for glycan polymerization (Oman et al., 2011). Unsurprisingly, on saturation mutagenesis of LtnaE24, the only mutant to retain activity, albeit at low levels, is one in which the negative charge at this position is maintained (E24D; Table 1). This is consistent with previous findings (Deegan et al., 2010) and renders the peptide more similar to many members of the lacticin 481 subgroup (Fig. 1). This observation contradicts a previous investigation that designated E24 as intolerant of change (Cotter et al., 2006) (Fig. 2). However, while a preference for the native glutamate is apparent, it can be said that a negatively charged residue at this position of Ltna is of critical importance.
Fig. 2. Tolerance of residues of Ltna to change as determined by (A) alanine scanning, with the assumption that lack of bioactivity on substitution with alanine (or glycine in the case of a native alanine) indicates an immutable residue, while retention of bioactivity suggests that other residues could be tolerated at the specific position, and (B) saturation mutagenesis, with those that retain bioactivity only when the native residue is present designated as immutable residues, and those retaining bioactivity on substitution with one or more other residues classified as tolerant to change. Grey circles indicate tolerant positions while black circles indicate immutable positions. White circles represent residues involved in bridge formation, which were not targeted in this study.
Despite their variable nature across the mersacidin-like peptides (Fig. 1), bioactivity was abolished when W18 and W28 were converted to alanine (Cotter et al., 2006). It was noted that on saturation mutagenesis of position W18, none of the Ltna mutants identified retained detectable bioactivity (Table 1). In contrast, a W28Y mutant retained bioactivity, albeit at a reduced level compared to the wild-type, in disagreement with the anticipated intolerance to change at this location (Cotter et al., 2006) (Table 1; Fig. 2).
Targeting of 'variable' residues in Ltna for site-saturation mutagenesis
Many Ltna residues were classified as variable in that they could be altered to alanine (or glycine in the case of native alanines) without resulting in complete loss of bioactivity (Cotter et al., 2006). These include the N-terminal variable residues (T3, N4, L8, T5), ring B variable residues (D10, Y11, G13, N15, G16, A17) and C-terminal variable residues (H23, M26, A27, K30). It should be noted that some changes which were previously found to be tolerated (Cotter et al., 2006) did not yield bioactive strains on this occasion, presumably as a consequence of the reduced activity associated with the expression system used in this study.
Residues T3 and T5 of Ltna are both dehydrated to Dhb in mature lacticin 3147. Given the degree to which the previously generated T3A mutant retained bioactivity (Cotter et al., 2006), coupled with the fact that a threonine at this position is not conserved across the a peptide group (Fig. 1), it is perhaps not surprising that four bioactive mutants, T3M, N, Y and D, were identified on saturation mutagenesis of this residue (Table 1). Interestingly, none of these substitutions were residues present at the corresponding locations in other members of the group (Fig. 1). The retention of low levels of bioactivity by T5A, which renders the peptide more similar to Bhaa, was replicated (Cotter et al., 2006), and two additional bioactive mutants were identified (T5V, L; Table 1). Significantly, T5V rendered Ltna more similar to the related a peptides SmbB and BHT-Aa (Fig. 1).
Because some activity was retained upon conversion of N4 to alanine (Cotter et al., 2006) and the residue at this position varies across the a peptide group (Fig. 1), N4 was previously categorized as being non-essential with respect to the bioactivity of lacticin 3147. Three active mutants were identified (Table 1), two of which involved substitutions which rendered the peptides more similar to other a peptides (A, Pnma; V, SmbB and BHT-Aa) (Fig. 1). Similarly, L8 is variable across the group and a L8A mutant previously displayed bioactivity (Cotter et al., 2006). In accordance with this, an additional bioactive mutant was detected (L8E; Table 1).
Previous mutagenesis of the residues within ring B of Ltna suggested that this region is tolerant of change in that six of the nine corresponding mutants retain bioactivity when altered to alanine (Cotter et al., 2006). The first of these residues, the negatively charged residue D10, is not conserved across the a peptide group (Fig. 1) and five active mutants were identified (Table 1). In particular, the retention of bioactivity following substitution with another negatively charged residue, D10E, was anticipated in light of the natural presence of a glutamate at the corresponding position of Pnma (Fig. 1). Despite the non-production of a D10K mutant previously (Deegan et al., 2010), it seems that the presence of a negative charge here is not essential, given that a D10H mutant still retained some bioactivity.
G13A was previously found to retain a considerable level of bioactivity (Cotter et al., 2006), even though glycine is conserved in all a peptides (Fig. 1). It was suggested that alanine alone, because of its similarity to the native glycine, could be tolerated (Cotter et al., 2006). However, saturation mutagenesis established the tolerance of the G13 residue to a variety of substitutions, ranging from other hydrophobic residues (G13A), to nonconservative hydrophilic neutral (G13N and G13Q) and charged residues (G13H and G13R) ( Table 1). It should be noted that the bioactivity level of all G13 mutants was much reduced when compared to wild-type.
Residue G16 is even more highly conserved than G13, being fully conserved across both the mersacidin-like peptides (except RumB) and lacticin 481-like peptides (Fig. 1). This hyper-conserved nature, and the reduced and absent bioactivity of G16A (Cotter et al., 2006) and G16E (Field et al., 2007), respectively, suggested that G16 is less tolerant of change than its G13 counterpart. This was indeed the case as only one additional substitution retained detectable bioactivity (G16S, in which the Ser residue remains unmodified; data not shown) (Table 1). Similarly, on saturation mutagenesis of the corresponding glycine in mersacidin (G9), only three bioactive mutants were identified, including G9A and G9S. The corresponding position in nukacin ISK-1 (G5) has been shown by saturation mutagenesis to be essential to bioactivity (Islam et al., 2009).
Residues N15 and the previously discussed N14 are noteworthy due to the contrasting consequences on conversion to alanine, with N15A retaining significant bioactivity (Cotter et al., 2006). Saturation mutagenesis of N15 revealed eight mutants that retained bioactivity (Table 1), including the previously described N15A. In line with previous studies (O'Connor et al., 2007;Deegan et al., 2010), an N15K mutant, which more closely resembles the related C55a, SmbB, BHT-Aa, Hala and Pnma peptides (Fig. 1), exhibited relatively high levels of bioactivity (Table 1). A N15S substitution that alters Ltna to more closely resemble rumB, plantaricin C, michiganin and actagardine and many of the lacticin 481 peptides (Fig. 1) was also tolerated (Table 1).
Residue A17 is expected to be amenable to substitution based on its variation among related peptides (Fig. 1) and the fact that high activity was observed on substitution with glycine (Cotter et al., 2006). In keeping with this hypothesis, site-saturation at this position yielded a number of active mutants (Table 1). Surprisingly, none of the substitutions identified corresponded to residues found at the equivalent locations in other group members. We did not detect a previously described mutation, A17N, that makes Ltna more closely resemble Saca and which has little impact on bioactivity (O'Connor et al., 2007).
Although M26 is conserved in six out of eight a peptides (Fig. 1), the retention of some activity on substitution with alanine led to its classification as a residue that is amenable to change (Cotter et al., 2006). Indeed, following saturation mutagenesis, many active substitutions were identified (Table 1), including M26L that more closely resembles some members of the lacticin 481 subgroup. While an M26I mutant was only slightly active, mutation of the corresponding residue (V22) in nukacin ISK-1 to isoleucine resulted in a variant with increased potency (Islam et al., 2009).
In keeping with its designation as a variable residue (Cotter et al., 2006), and its non-conserved nature (Fig. 1), position A27 was found to be very tolerant of change when subjected to site-saturation mutagenesis (Table 1) (Cotter et al., 2006). In fact, site-saturation mutagenesis of LtnaA27 yielded the greatest number of bioactive mutants. A change to arginine, which is found in the equivalent position in SmbBa and BHT-Aa, resulted in a mutant that retained much of its bioactivity. A change to valine, which renders the peptide more similar to Plwa, had a more damaging impact. The identification of an A27S variant was interesting given that previous attempts to construct this mutant in order to generate a derivative of Ltna that more closely resembled C55a were unsuccessful (O'Connor et al., 2007). We established that this mutant retained close to wild-type levels of bioactivity (Table 1), and like A27T, remained in an unmodified form (data not shown). The A27S variant also displayed levels of bioactivity comparable to those of the wild-type against S. thermophilus NCDO2525 and L. lactis AM2 (data not shown). A27S was purified in order to determine its specific activity (see below).
Ltna has a net neutral charge (two positive residues, H23 and K30; and two negative; D10 and E24). While alanine substitution of the negatively charged residues had a relatively major impact, changing the positively charged amino acids had a lesser effect (Cotter et al., 2006). Indeed, a previously described derivative substituting alanine for both positive residues still retained considerable bioactivity (Deegan et al., 2010). The H23 location also merited attention by virtue of being the only position in the region of Ltna within the predicted lipid II binding domain (residues 19-25), which retained bioactivity on conversion to alanine. Furthermore, both H23 and K30 are variable across the mersacidin-like peptides (Fig. 1). Saturation mutagenesis indicated many permissible substitutions for these positively charged residues (Table 1). In two instances they were replaced by other positively charged amino acids (K30R and H23R). Mutants where the substitution mimics a natural variation between Ltna and the other a peptides (Fig. 1) were identified, namely H23V (SmbB, BHT-Aa and Hala), K30N (Plwa, Hala and Blia) and K30Q (BHT-Aa). Although it has previously been established that both H23D-and K30D-producing strains are bioactive (Deegan et al., 2010), these mutants were not identified in the current study, suggesting that they were not created or were not among those tested.
It was apparent that Ltna H23S-and H23T-producing mutants retained close to wild-type levels of bioactivity against S. aureus NCDO1499, a clinical isolate involved in bovine mastitis, S. thermophilus NCDO2525 and L. lactis AM2 (data not shown). As a consequence of this bioactivity, coupled with our inability to detect these peptides by CMS, it was postulated that production of these peptides may be reduced and thus that specific activity may be relatively high. On that basis, the H23T and H23S peptides were selected for purification and further analysis. Following purification, masses of 3269 and 3255 Da were ascertained for H23T and H23S, respectively, indicating that both hydroxyl residues remained in an unmodified form. Significantly, a serine residue is naturally present at the corresponding positions in both mersacidin and plantaricin C (Fig. 1), which in the case of mersacidin is known to be modified to Dha.
Peptide purification and specific activity studies
Reverse Phase-High Performance Liquid Chromatography (RP-HPLC) of H23S, H23T and A27S confirmed peptides of the expected mass, with the exception of an additional peak corresponding to 3285 Da for H23T, which was indicative of oxidation. This occurred despite the use of a variety of strategies designed to minimize this phenomenon. As a result its specific activity could not be accurately assessed. Accordingly, only purified H23S and A27S were utilized for specific activity studies.
The MICs of Ltna A27S against L. lactis AM2 and S. thermophilus NCDO2525, both alone and when combined with Ltnb, are higher than those of Ltna and the wild-type Ltna-Ltnb combination (Table 2). H23S-Ltnb had a MIC of 0.0313 μM against S. thermophilus NCDO2525, similar to the wild-type Ltna-Ltnb combination (Table 2). The MICs of the two Ltna peptides were also identical when determined in isolation against S. thermophilus NCDO2525. It was noteworthy that the H23S variant alone is twofold more active than its wild-type counterpart against L. lactis AM2 but, when combined with Ltnb, had a MIC the same as that of wild-type Ltna-Ltnb (Table 2). This is only the second example where one of the peptides of a two peptide lantibiotic exhibits increased solo specific activity relative to the parental molecule. The first such peptide, LtnbR27A, displayed a twofold increased specific activity against L. lactis HP when compared to Ltnb alone (Deegan et al., 2010). However, in that case an eightfold decreased specific activity was observed when LtnbR27A was combined with its sister peptide Ltna against HP. Most notably, further MIC-based investigations revealed that when LtnaH23S is combined with Ltnb, their combined specific activity (0.25 μM) was twofold greater than the natural lacticin 3147 (0.50 μM) against S. aureus NCDO1499 (Table 2), thus making it the first example of the application of bioengineering to successfully enhance the activity of lacticin 3147, or indeed any two peptide lantibiotic. Prompted by this finding, further MIC-based analysis of LtnaH23S combined with Ltnb against a wider selection of indicator strains including other staphylococcal isolates (S. aureus Newman, S. aureus Farm 1), enterococci (E. casseliflavus 5053, E. faecium 5119) and L. lactis HP revealed that in each case, a twofold decrease in specific activity compared to wild-type Ltna-Ltnb was apparent (Table 2). LtnaH23S alone did not show enhanced solo activity against any of the targets. Thus, although LtnaH23S exhibits enhanced specific activity, both alone and in combination with Ltnb, this enhanced activity is very much a strain-specific phenomenon.
Conclusion
Only a small number of bioengineered lantibiotics had been created prior to 2005, including derivatives with enhanced antimicrobial activity (Liu and Hansen, 1992;Kuipers et al., 1996;Wiedemann et al., 2001;Yuan et al., 2004;Rink et al., 2007), derivatives with enhanced properties including improved solubility and stability (Liu and Hansen, 1992;Rollema et al., 1995;Yuan et al., 2004), or ones which enabled researchers to gain an appreciation of structure/function relationships (Chan et al., 1996;van Kraaij et al., 1997;2000;Chen et al., 1998;Wiedemann et al., 2001;Szekat et al., 2003). These pioneering studies suggested that lantibiotic peptides are quite adaptable and it was evident that further bioengineering-based approaches could be rewarding. Some recent examples have been successful with regard to the generation and identification of lantibiotic derivatives with improved antimicrobial and/or physicochemical properties (Rink et al., 2007;Field et al., 2008;Appleyard et al., 2009;Islam et al., 2009;Field et al., 2010b;Rouse et al., 2012). The two peptide lantibiotics have been the subject of much interest as they offer many possibilities with respect to the design of new, and possibly more potent, antimicrobials. To facilitate the rational design of such peptides, we performed saturation mutagenesis on one of the two lacticin 3147 peptides. There are already encouraging signs that Ltna would make an excellent candidate for bioengineering considering the significant number of residues (16/30) that retained bioactivity following alanine scanning mutagenesis (Cotter et al., 2006), and the fact that it can function in combination with the b peptide from another two peptide lantibiotic (O'Connor et al., 2007). This flexibility is coupled with the fact that the involvement of two peptides facilitates the examination of distinct functional domains in isolation (Morgan et al., 2005). While both Ltna and Ltnb each possess solo activity, Ltna is significantly more active than Ltnb. Thus, Ltna derivatives can be more easily assessed in isolation, as well as in combination with Ltnb. It has been speculated that once the basis of the mutual interaction between the a and b peptides is revealed, theoretically the a peptide could be directed to other more strain-specific targets than lipid II (Breukink and de Kruijff, 2006), while continuing to interact with the b peptide to facilitate pore formation.
To this end, site-saturation mutagenesis was performed on all residues of Ltna other than those involved in bridge formation, facilitating a more comprehensive determination of the tolerance of Ltna to change than that provided by alanine scanning (Fig. 2). It was apparent that a number of positions in particular were more amenable to change (N14, L21) than was previously predicted (Cotter et al., 2006). Furthermore, a limited number of mostly conservative changes were tolerated at positions previously designated as intolerant (S7, W12, E24 and W28) (Cotter et al., 2006). Significantly, despite the conserved nature of positions G13, G16 and M26, it was found that within lantibiotics a high degree of conservation does not necessarily mean that change at this location is not tolerated.
Additionally, during this process, an H23S substitution was found to improve the specific activity of lacticin 3147 against a strain of S. aureus responsible for bovine mastitis, and that of the Ltna peptide alone against L. lactis AM2. While the bioengineering of lantibiotics has produced some successes and the activity of a number of one peptide lantibiotics has been enhanced, this is the first description of a bioengineered two-peptide lantibiotic with an improved specific activity. The fact that such enhanced combinations have not been described previously most likely stems from the requirement for two peptides to act synergistically for full activity. This imposes a greater structural constraint on each peptide, and thus alterations made to enhance the interaction of the a peptide with its cell target, for instance, may have a negative impact on its ability to function synergistically with the b peptide. One might have predicted that the H23S alteration could enhance lipid II binding as a consequence of the peptide more closely resembling mersacidin, whose activity is solely based on lipid II binding without pore formation. However, the fact that the solo activity of Ltna H23S against S. aureus is not improved confirms that the enhanced activity is dependent on the presence of Ltnb. We speculate that this change must either improve the Ltna-Ltnb interaction at the target site or that an enhanced Ltna-lipid II interaction may require a Ltnb-induced conformational change. Future work will focus on the elucidation of the mechanistic basis for the strain-specific enhanced activity of lacticin 3147 H23S relative to lacticin 3147.
In summary, through the study of > 200 mutants, this systematic mutagenesis has provided significant information on the key residues that contribute to the bioactivity of lacticin 3147, which should prove valuable for the rational design of novel lantibiotics with improved properties. Furthermore, while the vast majority of mutants were less potent, the high number of derivatives that were produced in this study can also be interpreted as a test of the in vivo promiscuity of the enzymatic machinery, showing that the biosynthetic pathway of lacticin 3147 has a relatively relaxed specificity when it comes to mutants of Ltna. Perhaps most importantly, a Ltna-H23S change was found to improve the specific activity of lacticin 3147 against a strain of S. aureus, representing the first instance in which an enhanced bioengineered derivative of a two peptide lantibiotic has been identified.
Bacterial strains and growth conditions
Bacterial strains and plasmids used in this study are listed in Table S1. L. lactis and Enterococcus strains were grown in M17 broth (Oxoid) supplemented with 0.5% glucose (GM17) or GM17 agar at 30°C and 37°C respectively. Escherichia coli was grown in Luria-Bertani broth with vigorous shaking or agar at 37°C. S. aureus strains were grown in Mueller-Hinton broth (Oxoid) at 37°C. S. thermophilus NCDO2525 was grown in Litmus Milk (Difco BD, USA) before routine subculturing in M17 broth supplemented with 0.5% lactose (LM17) at 37°C. Chloramphenicol and tetracycline were used at 5 and 10 μg ml-1, respectively, for L. lactis (unless otherwise stated) where required and at 20 and 10 μg ml-1, respectively, for E. coli. X-gal (5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside) was used at a concentration of 40 μg ml-1.
Site-saturation mutagenesis
Oligonucleotide pairs (Table S2) were designed to replace each target ltnA1 codon with the NNK triplet, which should result in the substitution of the relevant residue with all 19 possible alternatives (Cwirla et al., 1990;Scott and Smith, 1990). Plasmid pDF01 was used as template DNA for saturation mutagenesis and PCR amplification was performed as previously described (Field et al., 2008). Following plasmid amplification and introduction into the intermediate E. coli MC1000 host, plasmid DNA from a pooled bank of pDF01 derivatives (each corresponding to a targeted amino acid) was isolated using a Roche High Pure Plasmid Isolation Kit. DNA sequence analysis with pCI372FOR (MWG Biotech, Germany) confirmed randomization at the relevant codon. PbacA1A2 (containing bioengineered ltnA1 genes, the partner ltnA2 gene and the associated promoter region Pbac) was re-amplified using the primers pPTPLA1A2FOR and pPTPLA1A2REV and template DNA isolated from the individual mutagenized pDF01 pools. Amplified products were purified as before, digested with BglII and XbaI (Roche), ligated with similarly digested and shrimp alkaline phosphatase (Fermentas)-treated pPTPL and introduced by electroporation into E. coli MC1000. Transformants were pooled and stored in 80% glycerol at -20°C. Plasmid DNA isolated from each mutant bank was introduced by electroporation into the strain L. lactis MG1363 pOM44 to facilitate expression of the bioengineered Ltna peptide (in the presence of unaltered Ltnb) for further analysis. A total of 144 transformants were chosen at random and inoculated into 96-well plates containing GM17 chloramphenicol and tetracycline (5 μg ml-1 each), incubated overnight and stored at -20°C after addition of 80% glycerol. Mutants were identified by MS analysis and, in instances where the nature of the change remained ambiguous after MS or a peptide could not be detected, by sequencing with TETK P1. All bioactive derivatives in each bank were identified. Ten representative inactive derivatives were chosen from each bank for further analysis, with loss of activity attributed to the particular substitution, an insertion, numerous mutations or the introduction of a stop codon. Varying levels of success were observed in the identification of unique inactive derivatives. Steps were taken to ensure that the companion peptide was unmutated (by MS and/or sequencing) in all cases.
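To illustrate why a single NNK triplet suffices for site-saturation, the short R sketch below (illustrative only, not part of the original study workflow) enumerates the 32 NNK codons and translates them with the standard genetic code, showing that they cover all 20 amino acids plus a single (amber, TAG) stop codon.

```r
# Enumerate all NNK codons (N = A/C/G/T at positions 1-2, K = G/T at position 3).
bases <- c("A", "C", "G", "T")
nnk <- apply(expand.grid(bases, bases, c("G", "T")), 1, paste0, collapse = "")
length(nnk)   # 32 codons

# Minimal standard genetic code lookup (codon -> one-letter amino acid, * = stop).
code <- c(
  TTT="F",TTC="F",TTA="L",TTG="L",CTT="L",CTC="L",CTA="L",CTG="L",
  ATT="I",ATC="I",ATA="I",ATG="M",GTT="V",GTC="V",GTA="V",GTG="V",
  TCT="S",TCC="S",TCA="S",TCG="S",CCT="P",CCC="P",CCA="P",CCG="P",
  ACT="T",ACC="T",ACA="T",ACG="T",GCT="A",GCC="A",GCA="A",GCG="A",
  TAT="Y",TAC="Y",TAA="*",TAG="*",CAT="H",CAC="H",CAA="Q",CAG="Q",
  AAT="N",AAC="N",AAA="K",AAG="K",GAT="D",GAC="D",GAA="E",GAG="E",
  TGT="C",TGC="C",TGA="*",TGG="W",CGT="R",CGC="R",CGA="R",CGG="R",
  AGT="S",AGC="S",AGA="R",AGG="R",GGT="G",GGC="G",GGA="G",GGG="G"
)

aa <- code[nnk]
table(aa)                              # how often each amino acid (and TAG stop) is encoded
length(setdiff(unique(aa), "*"))       # 20 amino acids covered by the 32 NNK codons
```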
Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS)
Colony mass spectrometry (CMS) was performed with an Axima TOF 2 MALDI-TOF mass spectrometer (Shimadzu Biotech, Manchester, UK) as previously described (Field et al., 2010b). For purified peptide, a small amount of lyophilized peptide resuspended in 70% IPA 0.1% TFA was used for analysis.
Bioassays for antimicrobial activity
Deferred antagonism assays were performed as previously described (Field et al., 2007). For high throughput screening of the Ltna site-saturation banks against L. lactis HP, deferred antagonism assays were performed by spotting strains using a 96-pin replicator (Boekel) on GM17 agar plates. Zone size was measured with callipers and calculated as the diameter of the zone of clearing minus the diameter of bacterial growth.
Minimum inhibitory concentration determinations were performed as described previously (Wiedemann et al., 2006), with incubation for 16 h at 30°C (L. lactis) or 37°C (S. aureus, S. thermophilus and Enterococci). The MIC was read as the lowest peptide concentration causing inhibition of visible growth.
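The two activity read-outs described above reduce to simple calculations; a minimal R sketch is given below, where the dilution series, growth calls and calliper measurements are invented illustrative values rather than study data.

```r
# MIC = lowest peptide concentration at which no visible growth was observed.
read_mic <- function(conc_uM, visible_growth) {
  min(conc_uM[!visible_growth])
}

# Deferred antagonism zone size = diameter of clearing minus diameter of bacterial growth.
zone_size <- function(clearing_mm, growth_mm) clearing_mm - growth_mm

read_mic(conc_uM        = c(0.0313, 0.0625, 0.125, 0.25, 0.5),
         visible_growth = c(TRUE,   TRUE,   FALSE, FALSE, FALSE))   # 0.125
zone_size(clearing_mm = 14, growth_mm = 6)                          # 8 mm
```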
Supporting information
Additional Supporting Information may be found in the online version of this article at the publisher's web-site:
Table S1. Strains and plasmids used in this study. UCC, University College Cork; NCDO, National Collection of Dairy Organisms.
Table S2. Oligonucleotides utilised in this study. Pho indicates 5′ phosphate. Boldface represents randomized nucleotides (N = A + C + G + T, K = G + T, M = A + C). Underlined sequences represent restriction sites.
"year": 2013,
"sha1": "361f3136cf642af1358a78f090574bbed8af8cc9",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1751-7915.12041",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "361f3136cf642af1358a78f090574bbed8af8cc9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Time course and correlates of psychological distress post spinal surgery: A longitudinal study
Background
Psychological distress post lumbar spine surgery is associated with poorer outcomes. There is a scarcity of studies devoted to analyzing the risk factors associated with psychological distress in patients who have undergone lumbar fusion surgery. The purpose of this study was to (1) describe the time course and severity of psychological distress using the STarT Back Tool (SBT) and (2) determine the demographic and clinical predictors of SBT score post lumbar spine fusion surgery.
Methods
This retrospective longitudinal study analyzed 227 subjects with 1- and 2-level lumbar fusion surgery who underwent standardized assessment preoperatively and at 4 and 12 weeks postoperatively. Preoperative variables collected were demographic, clinical, and psychological variables. Postoperative psychological distress was measured by self-reported SBT. Risk factors for SBT over time were identified using ordinal mixed-effects modelling.
Results
Although the trajectory of SBT levels declined postoperatively over time, at week 12, 20% of patients had moderate to high SBT. Postoperative SBT scores at the week-4 timepoint were significantly greater than SBT scores at week 8 (OR = 2.7; 95% credible interval [CrI]: 1.8–3.9). Greater SBT scores at week 4 were strongly associated with greater SBT scores throughout 12 weeks of follow-up (OR = 7.3; 95% CrI: 1.2–31.4). Greater postoperative SBT levels over time were associated with being male (OR = 2.2; 95% CrI: 1.0–3.9), greater preoperative back or leg pain intensity (OR = 2.2; 95% CrI: 1.0–4.4), greater preoperative leg weakness (OR = 4.2; 95% CrI: 1.7–7.5) and higher preoperative depression levels (OR = 4.8; 95% CrI: 1.6–10.4).
Conclusion
Postoperative SBT levels declined nonlinearly over time. However, a sizable proportion of patients had moderate to high psychological distress at week 12 postsurgery. Greater preoperative back or leg pain intensity, leg weakness and depression levels, and male gender were risk factors of greater psychological distress postsurgery. Although requiring validation, our study has identified potentially modifiable risk factors which may give an opportunity to provide early (preoperative) and targeted strategies to optimize postoperative psychosocial outcomes in patients undergoing lumbar fusion surgeries.
Introduction
Lumbar fusion surgery is a costly intervention for patients presenting with degenerative lumbar disease and persistent lower back pain. Importantly, the overall cost of this procedure is growing at an alarming rate; an 80% increase from 1998 to 2014 [1]. While lumbar fusion surgery can improve pain and function for many patients, a subset of patients experience psychological distress postoperatively, which is associated with poorer outcomes [5]. Identification of these patients is important to ensure accessibility to appropriate postoperative intervention (eg, psychologically-informed physiotherapy care).
Fig. 1. Flowchart according to STROBE guidelines.
In both primary and secondary care settings, the STarT Back Tool (SBT) has been used as a clinical measure of psychological distress [6,7] as well as a prognostic tool for clinical outcomes in pain intensity [7,8] and disability [9] in low back pain sufferers. In summary, there is compelling evidence to show that high SBT levels are associated with poor health outcomes in all physical and psychological domains. However, while several studies have examined the time course of SBT levels in patients with conservatively managed LBP [6,10], no studies have provided detailed longitudinal data of SBT measures in patients who have undergone lumbar fusion surgery. Much emphasis has been placed on using the SBT in rehabilitation settings to identify patients at risk of developing chronic, persistent nonsurgical low back pain, but not much attention has been directed to postoperative spinal surgery patients. Furthermore, hardly any studies have examined the associations between preoperative predictors and postoperative psychological outcomes. Only 1 recent study found that there was no association between preoperative physical function and postoperative depression in 119 patients who underwent lumbar fusion surgery [11]. However, that study used the Patient Health Questionnaire (PHQ-9) to measure depression, which is a generic outcome measure of depression [12], unlike the SBT which has a psychosocial component that has been specifically validated for the low back pain population [13].
Therefore, to help address these gaps, this retrospective longitudinal study aimed to (1) describe the time course and severity of SBT-derived psychological distress post lumbar fusion surgery and (2) to identify its demographic and clinical predictors.
Study sample
Between January 2017 and December 2018, we identified from our medical records 292 patients who underwent lumbar spine fusion surgery ≤ 2 levels and attended outpatient physiotherapy at the Singapore General Hospital, the largest tertiary teaching hospital in Singapore. Of these patients, we included 227 patients in the present analyses (Fig. 1). Patients diagnosed with degenerative lumbar stenosis and spondylolisthesis were included. To minimize the potential confounding influence of multisegmented (> 2 levels) lumbar fusion on clinical outcomes [13,14], these patients were excluded. We also excluded patients who had metastatic cancer to the spine, previous spinal surgery within a year and other neurological diseases (Fig. 1).
Post lumbar fusion surgery, all patients underwent inpatient rehabilitation and were referred for outpatient physiotherapy within 2 to 6 weeks following discharge. Patients who attended rehabilitation were given exercises, patient education and any modalities at the physiotherapist's discretion. One hundred sixty-two patients were evaluated within a month preoperatively, and were scheduled for 4-, 8- and 12-week postoperative assessments as part of routine clinical care. All data were collected by technicians (n = 4) and physiotherapists (n = 14). The institutional review board approved the study with a waiver of informed consent (Singhealth CIRB 2016/2445, Singapore).
Preoperative risk factors
We extracted from medical records variables that were considered to be plausible risk factors of SBT.The extracted information included patient demographics such as gender, age, body mass index (BMI) and preoperative depression, self-reported preoperative leg numbness and weakness.
Preoperative depression: To assess self-reported depression, a single question (Q28) from the SF-36 ('How much of the time during the past 4 weeks have you felt downhearted and depressed?') was used. Based on a prespecified classification, we recoded the 6 possible response choices into 3 categories: (1) "Good Bit or Most or All the time" (response choices 1, 2, and 3); (2) "Little or Some of the time" (response choices 4 and 5); and (3) "None of the time" (response choice 6).
Preoperative leg weakness and numbness: To assess preoperative leg weakness and numbness, the self-reported frequency of numbness and weakness experienced in the lower limb (thigh, calf, ankle, or foot) in the past week was identified using items 12 and 13 of the modified NASS Low Back Pain Outcome Instrument respectively. Both items were evaluated using a 6-item Likert scale and were further categorised into 3 categories: (1) "none" (response choice 1); (2) "little or some" (response choices 2 and 3); and (3) "good bit or most or all" (response choices 4, 5, and 6).
Follow-up measures
At around 4 and 12 weeks postoperatively, SBT scores were measured.
SBT scores: The SBT psychological subscale consisted of 5 items, extracted from the full 9-item SBT scale [13].
The SBT psychological subscale score (ranging from 0 to 5) was determined by summing the 5 items related to fear, anxiety, catastrophizing, depression, and bothersomeness. For descriptive purposes, we defined "none" as a subscale score of 0, "mild" as 1 to 2 points, and "moderate-to-high" as 3 to 5 points.
Back/leg pain intensity
At preoperative and all postoperative visits, patients were asked to rate their worst level of back and leg pain intensity over the past 1 week, using an 11-point numeric pain-rating scale, with 0 indicating "no pain" and 10 indicating "worst pain ever experienced". For descriptive purposes, we summarized pain intensity levels as "none" (level 0), "mild" (levels 1-4), "moderate" (levels 5-7), and "severe" (levels 8-10). Notably, pain intensity was analyzed as an ordinal variable rather than as a dichotomous (present or absent) variable to preserve statistical power [14,15].
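For illustration, the descriptive categorisations described above can be expressed as small helper functions. The following R sketch uses assumed inputs (a vector of the 5 dichotomised subscale items and a 0-10 pain rating) and is not taken from the study's analysis code.

```r
# Sum the 5 psychological-subscale items (each assumed to be scored 0 or 1)
# and apply the descriptive categories used in the text.
score_sbt <- function(items) {
  total <- sum(items)
  cut(total, breaks = c(-Inf, 0, 2, 5),
      labels = c("none", "mild", "moderate-to-high"))
}

# Categorise worst pain on the 0-10 numeric pain-rating scale.
categorise_pain <- function(nprs) {
  cut(nprs, breaks = c(-Inf, 0, 4, 7, 10),
      labels = c("none", "mild", "moderate", "severe"))
}

score_sbt(c(1, 0, 1, 1, 0))   # 3 points -> "moderate-to-high"
categorise_pain(6)            # "moderate"
```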
Statistical analyses
To examine the time course of postoperative SBT levels and its correlates, we fitted separate Bayesian proportional-odds ordinal mixed-effects models [16,17] which included postoperative SBT levels as the response variable. To account for multiple observations from each patient, we used mixed-effects models with patient-level random intercepts. All models included selected predictor variables (assessed preoperatively and week-4 postoperatively) and time (weeks since lumbar surgery) as fixed effects. To avoid assuming linearity, time was modelled flexibly as a restricted cubic spline in all models [15]. The week-4 SBT score and preoperative ordinal variables were modelled as monotonic ordered predictors [18].
In our modelling strategy, we first considered possible covariates in the association between preoperative depression and postoperative SBT scores using a directed acyclic graph (DAG) approach [19]. Because week-4 postoperative SBT scores could be an intermediate variable between preoperative depression and longer-term postoperative SBT (Appendix Fig. 1), we fitted separate models for the preoperative predictors alone. Furthermore, because preoperative depression could be an intermediate variable in the pathways between preoperative back/leg symptoms and postoperative SBT, we fitted another model that excluded preoperative depression levels.
To reduce the likelihood of estimating unrealistic values without excluding reasonable values [20], we set weakly-informative prior distributions for all model parameters. All Bayesian models were fitted using the brms [21] R package, and each model used 8 chains, 2,000 iterations per chain, to generate the posterior samples for all exponentiated regression coefficients (that is, the odds ratios [ORs]). From these samples, we calculated the proportion of the distribution that exceeded 1.0 (null value), thereby estimating the (posterior) probability that a given predictor was associated with the outcome.
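As a concrete, minimal sketch of the model just described (not the authors' actual code), the following R call fits a cumulative-logit (proportional-odds) mixed-effects model in brms with a patient-level random intercept, a natural (restricted) cubic spline for time, monotonic ordinal predictors, and weakly-informative priors. The data frame `dat`, its column names, and the specific prior are assumptions made purely for illustration.

```r
library(brms)
library(splines)   # ns() gives a natural (restricted) cubic spline basis

# Assumed long-format data: one row per patient per postoperative visit, with
# columns patient_id, sbt (0-5), weeks, sbt_wk4, depress, leg_weak, pain, gender, age, bmi.
dat$sbt      <- ordered(dat$sbt)       # response on the cumulative-logit scale
dat$sbt_wk4  <- ordered(dat$sbt_wk4)   # week-4 SBT as a monotonic ordinal predictor
dat$depress  <- ordered(dat$depress)   # preoperative ordinal predictors likewise
dat$leg_weak <- ordered(dat$leg_weak)

fit <- brm(
  sbt ~ ns(weeks, df = 3) + mo(sbt_wk4) + mo(depress) + mo(leg_weak) +
        pain + gender + age + bmi + (1 | patient_id),   # patient-level random intercepts
  data   = dat,
  family = cumulative("logit"),
  prior  = set_prior("normal(0, 1.5)", class = "b"),    # an example weakly-informative prior
  chains = 8, iter = 2000, cores = 8, seed = 1
)
summary(fit)
```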
To assess statistical significance, we interpreted a predictor effect as statistically "significant" if its posterior probability exceeded 95%. To assess potential clinical significance, we (1) computed the adjusted ORs associated with a 5-point (for 11-point scales) or 3-point (for 6-point scales) change and (2) estimated the probability that the ORs exceeded 1.5 ("moderate" effect size) or 2.0 ("moderate-to-large" effect size). Finally, to complement the ORs, we transformed the predictor effects back to the original (count) scale and computed the difference in mean postoperative SBT levels at the week-12 timepoint. (The Appendix details the model implementation.) We used R software (http://www.r-project.org) for all analyses and graphing.
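To show how the posterior probabilities and OR thresholds described above could be computed from such a fit, the short sketch below exponentiates posterior draws of a coefficient and takes the proportion of draws exceeding each threshold. The parameter name `b_pain` and the 5-point rescaling are assumptions tied to the illustrative model in the previous block (the actual names are listed by `variables(fit)`).

```r
draws   <- as_draws_df(fit)            # posterior draws from the brms fit above
or_pain <- exp(5 * draws$b_pain)       # adjusted OR for a 5-point change on the 0-10 pain scale

mean(or_pain > 1.0)   # posterior probability of any association (null value)
mean(or_pain > 1.5)   # probability of at least a "moderate" effect size
mean(or_pain > 2.0)   # probability of a "moderate-to-large" effect size
```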
Patient characteristics
Table 1 shows the demographic, preoperative and postoperative clinical characteristics of the study patients, comprising 58% women with a mean age of 64.1 years (SD = 8.8) and a mean body mass index (BMI) of 26.3 kg/m2 (SD = 4.2). Based on recommended BMI cut-offs for the Asian population, 43% of our sample was overweight (BMI 23-27.5 kg/m2) and 36% was obese (BMI ≥ 27.5 kg/m2).
Table 2 footnotes: For each predictor, adjusted ORs (95% CrI) compare the odds of greater postoperative SBT scores between 2 comparison values. For example, other variables being equal, a patient with a back/leg pain intensity of 8 points had, on average, 2.3 times (95% CrI: 1.02-4.43 times) the odds of having greater postoperative SBT scores relative to a patient with a pain intensity of 3 points. The posterior probability that back/leg pain intensity was associated with postoperative SBT at an OR exceeding 2.0 was 62%. * Probability of posterior distributions of ORs exceeding 1.0 (null value), 1.5 (moderate effect size), and 2.0 (moderate-to-large effect size). † Adjusted differences indicate the difference in week-12 SBT scores between the 2 comparison values of each predictor.
Postoperative SBT scores
Postoperative SBT scores declined over time (Fig. 2), and scores at the week-4 timepoint were significantly greater than those at week 8 (OR = 2.7; 95% CrI, 1.8-3.9). Specifically, the odds of greater postoperative SBT levels for an average patient at the week-4 timepoint were 2.7 times those for the average patient at the week-8 timepoint.
Risk factors
Table 2 shows the results of an ordinal mixed-effects regression model that used only the demographic variables of age, gender, and BMI (Model 1), a model that included demographic variables plus preoperative back/leg symptoms (Model 2), a model that included demographic variables plus preoperative back/leg symptoms plus preoperative depression levels (Model 3), and a model that included Model 3 variables plus week-4 postoperative SBT scores (Model 4).
Demographic and preoperative characteristics that were independently associated with greater SBT levels over time included gender (OR comparing men and women was 2.2 [95% CrI, 1.0-3.9]; Model 1), preoperative back or leg pain intensity (OR comparing a pain intensity of 8 points with 3 points was 2.2 [95% CrI, 1.0-4.4]; Model 2), preoperative leg weakness (OR = 4.2 [95% CrI, 1.7-7.5]) and preoperative depression levels (OR = 4.8 [95% CrI, 1.6-10.4]; Model 3). Using an OR threshold of 2.0, the posterior probabilities of potentially clinically relevant associations of gender and preoperative back/leg pain intensity with postoperative SBT were ∼63%, whilst the posterior probabilities for preoperative leg weakness and depression both exceeded 95% (∼98%). In contrast, age, BMI, and preoperative leg numbness had low (65%-75%) probability of any associations with postoperative SBT levels.
Discussion
The purpose of this longitudinal study was to track the trajectory and severity of the postoperative SBT and to identify the predictors of postoperative SBT in patients who underwent lumbar fusion. Our results show that postoperative SBT scores reduced over time (Fig. 2). Specifically, postoperative SBT scores at the week-4 timepoint were significantly greater than SBT scores at week 8 (OR = 2.7; 95% CrI, 1.8-3.9). Additionally, greater SBT scores at week 4 were strongly associated with greater SBT scores throughout 12 weeks of follow-up (OR = 7.3; 95% CrI, 1.2-31.4). Factors associated with greater postoperative SBT levels over time were male gender (OR = 2.2; 95% CrI, 1.0-3.9), greater preoperative back or leg pain intensity (OR = 2.2; 95% CrI, 1.0-4.4), greater preoperative leg weakness (OR = 4.2; 95% CrI, 1.7-7.5) and higher preoperative depression levels (OR = 4.8; 95% CrI, 1.6-10.4). To our knowledge, this is the first study to report these findings in patients with lumbar fusion surgery (≤ 2 levels).
Postoperative SBT scores
As expected, SBT levels declined postoperatively, with the greatest decline occurring within the first 6 to 8 weeks postoperatively (Fig. 2). However, it is noteworthy that even at week 12, around 20% of the patients had moderate to high SBT. This is a sizable proportion, and our study results are similar to those of Power and colleagues (2019) [5], who found that 17% of postoperative lumbar spine patients had depression symptoms at 3 months using the Hospital Anxiety and Depression Scale. Moreover, our findings also showed that more severe SBT scores at 4 weeks after surgery were strongly associated with worse SBT scores over 12 weeks of follow-up (OR = 7.3; 95% CrI, 1.2-31.4). As such, our findings highlight the need for close monitoring postoperatively, given that persistently high levels of postoperative psychological distress may eventuate in poor surgical outcomes [5].
Gender and SBT
We found that men had higher SBT scores than women postoperatively, which contrasts with recent reports that gender was not a risk factor for psychological distress (depression) symptoms experienced in patients after lumbar spine fusion surgery [22]. The explanation for our findings is uncertain, but it is possible for gender differences to reduce over time, leading to more comparable scores between genders over the long term [23]. Hence future studies with longer follow-up data are needed to confirm our findings.
Pain and preoperative depression symptoms as a predictor of SBT
Our study demonstrated that greater preoperative self-reported back/leg pain and preoperative depression symptoms were strong predictors of greater postoperative SBT scores. Although we do not have similar studies to compare with, our results are plausible. It is possible that pain causes psychological distress [24], and also possible that pain and depression coexist via neuroimmune and neuroinflammatory mechanisms [25]. The research literature is not clear on whether pain causes psychological distress or psychological distress causes pain. These 2 factors are bidirectional and mutually interactive, but are so closely intertwined that their origins are obscure. Even though the relationship between pain and psychological distress is complex and difficult to unravel, there may be important clinical implications of our findings.
Given that in patients who have undergone lumbar fusion, higher postoperative levels of anxiety and/or depression are associated with significantly higher healthcare costs and opioid use [4] , it may be helpful to identify patients with high pain scores and significant depression symptoms preoperatively and closely monitor their mental health wellbeing and psychological distress postoperatively so that appropriate and timely intervention can be administered to prevent further complications and facilitate recovery.
Preoperative leg weakness associated with postoperative SBT
Our study is the first to our knowledge to show that preoperative self-reported leg weakness frequency was strongly associated with higher postoperative SBT scores (OR = 4.2; 95% CrI, 1.7-7.5). Although we lack studies to compare with, our study results are biologically plausible. In particular, previous studies have suggested that muscle weakness associated with lumbar stenosis may lead to gait disturbances [26] and reduced walking ability [27,28], and this may in turn lead to psychological distress. Because our analyses were adjusted for pain intensity, it is unlikely that this association was mediated by pain. Indeed, our findings are supported by Wahlman et al. [23], who found that a reduction in pain post lumbar surgery did not correlate proportionally with a decrease in depression scores [23].
Given that one previous intervention study demonstrated moderate associations between improvements in preoperative leg muscle strength and greater postoperative (1-year) physical activity levels [29], future studies are warranted to examine whether improving leg weakness prior to surgery improves postoperative SBT scores, mediated by improvements in physical function.
Clinical implications
Our study has clinical implications. First, an understanding of the trajectory of SBT during the postsurgery recovery period after lumbar fusion surgery may instil confidence and give reassurance to patients, as well as to junior clinicians, that mild levels of psychological distress are expected in the initial postoperative phase. Second, our data could provide valuable information to assist in educating patients about what to expect after surgery. Third, our study highlights that high distress levels at week 4 were strongly predictive of the worst SBT scores over time; this finding underscores the importance of early screening with the SBT to identify patients at risk of developing complex persistent low back pain after lumbar spine surgery.
Additionally, because psychological health screening is not routine practice in a busy orthopedic clinic, it is understandable that a previous study found that spine surgeons from a single academic spine center failed to diagnose psychological disorders in up to 21% of cases despite attempts to screen preoperatively [30]. This calls for closer monitoring of psychological distress in patients undergoing spine surgery. Accordingly, the brevity and ease of scoring of the SBT make it a valuable and efficient tool for screening for psychological distress in a busy clinical setting, as it can be administered within 5 minutes.
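Because the SBT is short enough to administer and score at the point of care, its stratification logic can also be embedded in an electronic intake form. The sketch below encodes the commonly described scoring rules (nine dichotomized items, overall score 0-9, psychosocial subscale from the last five items); these rules and the item indexing are assumptions based on published descriptions of the tool rather than on this study, and should be verified against the official SBT documentation before any clinical use.

```python
from typing import List

def sbt_risk(item_scores: List[int]) -> str:
    """Classify STarT Back Screening Tool (SBT) risk from nine 0/1 item scores.

    Assumption: items 5-9 (0-based indices 4-8) form the psychosocial subscale;
    total <= 3 -> low risk, otherwise subscale >= 4 -> high, else medium.
    Verify against the official SBT scoring manual before use.
    """
    if len(item_scores) != 9 or any(s not in (0, 1) for s in item_scores):
        raise ValueError("expected nine items scored 0 or 1")
    total = sum(item_scores)
    psychosocial = sum(item_scores[4:9])
    if total <= 3:
        return "low"
    return "high" if psychosocial >= 4 else "medium"

# Example: a patient endorsing most psychosocial items.
print(sbt_risk([1, 0, 1, 1, 1, 1, 1, 0, 1]))  # -> "high"
```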
Limitations
Our study has limitations. First, data collection was limited to a single institution, and follow-up was limited to 12 weeks for the self-reported SBT measures. Therefore, future larger multicenter cohort studies with longer-term follow-up are needed to confirm our findings. Second, our sample size was modest, which limited our ability to assess potential interactions of risk factors with time. Third, although the present study examined several preoperative clinical and demographic factors, we acknowledge that more detailed factors such as physical activity or step count should be considered. Fourth, while every effort was made to ensure timely data collection, our missing data rate ranged between 29% and 54% across the follow-up time points. Although we used a full-likelihood approach to reduce the potential biases caused by missing data, some residual bias is likely to remain.
Conclusion
In conclusion, our findings showed that SBT scores declined over time postoperatively. However, a sizable proportion of patients still reported moderate to high SBT scores 12 weeks after surgery. Factors associated with greater postoperative SBT levels over time were male gender, higher preoperative back or leg pain intensity, preoperative leg weakness, and preoperative depression symptoms. Understanding the trajectory of SBT scores over time may help clinicians identify patients at risk of developing disabling back pain and lead to improved care. Additionally, identifying potentially modifiable preoperative factors such as leg weakness and depression could open up opportunities to provide effective preventive care and education for at-risk patients after lumbar fusion surgery. Preoperative intervention may potentially optimize postoperative psychosocial health in patients undergoing lumbar fusion surgery; however, future studies are warranted to verify this.
Fig. 2
Fig. 2 shows the smoothed model-predicted postoperative SBT over time. Overall, postoperative SBT scores improved (reduced) nonlinearly over time: a steep improvement rate was observed in the first 6 to 8 weeks, beyond which the improvement was more gradual. Based on the proportional odds model (Model 1), comparing the two timepoints at weeks 4 and 8, the estimated OR was 2.7 (95% credible interval [CrI],
Fig. 2 .
Fig. 2. Natural spline-smoothed predicted mean SBT levels with 95% credible intervals (shaded) over time post lumbar fusion surgery. Model 3 was used to generate model-predicted data.
Fig. 3 .
Fig. 3. Conditional associations of week-4 SBT scores with week-12 SBT scores. Error bars represent the 95% credible interval for the regression estimates. To generate the partial plot from the Bayesian mixed-effects model (Model 1), model covariates were set at their median (continuous variables) or mode (categorical variables) values.
Table 1
Demographic and clinical characteristics.
SBT, STarT Back Screening Tool; BMI, Body Mass Index. Continuous variables are summarized as 25th, 50th, and 75th percentiles (mean ± SD). Categorical variables are summarized as percentages and frequencies (N). Values in bold in the table are the 50th percentile figures for the continuous variables.
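The summary format described in the note above (25th/50th/75th percentiles with mean ± SD for continuous variables, and counts with percentages for categorical variables) is straightforward to reproduce. The sketch below shows one way to do so with pandas on a small hypothetical data frame; the column names and values are illustrative assumptions only, not the study's data.

```python
import pandas as pd

# Hypothetical baseline data; column names and values are illustrative only.
df = pd.DataFrame({
    "age": [54, 61, 47, 70, 66, 58],
    "bmi": [27.1, 31.4, 24.8, 29.0, 33.2, 26.5],
    "gender": ["M", "F", "M", "M", "F", "F"],
})

# Continuous variables: 25th, 50th, 75th percentiles and mean +/- SD.
cont = df[["age", "bmi"]]
summary = cont.quantile([0.25, 0.50, 0.75]).T
summary["mean"] = cont.mean()
summary["sd"] = cont.std()
print(summary)

# Categorical variables: frequencies (N) and percentages.
counts = df["gender"].value_counts()
print(pd.DataFrame({"N": counts, "%": (100 * counts / len(df)).round(1)}))
```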
Table 2
Predictors of SBT scores post lumbar fusion surgery.
Column headings: Predictor; Comparison; OR (95% CrI); Probability of effect size (%)*, for OR > 1.0, OR > 1.5, and OR > 2.0; Adjusted difference†, mean (95% CrI).
SBT, STarT Back Screening Tool; BMI, Body Mass Index; OR, Odds Ratio; 95% CrI, 95% Credible Interval. All models were Bayesian proportional-odds mixed-effects models with patient-level random intercepts, adjusted by time since surgery. Model 1 included age, gender, and BMI. Model 2 included demographic variables and preoperative back/leg symptoms. Model 3 included demographic variables, preoperative back/leg symptoms, and preoperative depressive levels. Model 4 included demographic variables, preoperative back/leg symptoms, preoperative depressive levels, and week-4 postoperative SBT scores. | 2023-09-17T15:11:12.390Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "bcdf65b823d5d0da5046a3321a8f99b61e65270d",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7bc9e269533dc6a78ff46a5d0c9be9ec4c87ea76",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
235128327 | pes2o/s2orc | v3-fos-license | Paediatric palliative screening scale as a useful tool for clinicians’ assessment of palliative care needs of pediatric patients: a retrospective cohort study
Background Although the importance of palliative care in pediatric patients has been emphasized, many health care providers have difficulty determining when patients should be referred to the palliative care team. The Paediatric Palliative Screening Scale (PaPaS) was developed as a tool for screening pediatric patients for palliative care needs. The study aimed to evaluate the PaPaS as a reliable tool for primary care clinicians unfamiliar with palliative care. Methods This was a retrospective cohort study of patients referred to the pediatric palliative care teams in two tertiary hospitals in the Republic of Korea between July 2018 and October 2019. Results The primary clinical and pediatric palliative care teams assessed the PaPaS scores of 109 patients, and both teams reported a good agreement for the sum of the PaPaS score. Furthermore, the PaPaS scores correlated with those obtained using the Lansky performance scale. Although the mean PaPaS score was higher in the pediatric palliative care team, the scores were higher than the cut-off score for referral in both groups. Conclusion The PaPaS can be a useful tool for primary care clinicians to assess the palliative care needs of patients and their families.
Background
The World Health Organization defines palliative care as the prevention and relief of physical, psychological, social, and spiritual suffering of adult and pediatric patients and their families due to life-threatening illnesses [1]. Although children's suffering was not fully recognized for a long time, and the concept of holistic suffering in pediatric patients appeared later than in adults, there has been remarkable progress in pediatric palliative care (PPC) in the past decades due to the efforts of dedicated professionals [2,3]. However, some barriers still hinder PPC development in several countries and various medical settings [4][5][6][7]. Previous studies have indicated that a significant gap exists between the needs of children and their families and the services they receive; one of the main underlying reasons for this is an insufficient understanding of PPC by physicians [8][9][10][11]. Therefore, referrals to palliative care are often made late in the disease trajectory even in countries with comparatively better PPC availability [12,13]. Bergstraesser et al. proposed that palliative care can have more diverse aims in children unlike in adults. These include reducing burdensome treatments in advanced cancer, enhancing the quality of life through effective symptom control, and easing the emotional burden on parents and family. An instrument must be available that captures children's palliative care needs at an earlier stage than in adults [13]. Thus, they developed a tool for screening patients called the Paediatric Palliative Screening Scale (PaPaS) which aimed to support health care professionals in accurately identifying children with palliative care needs. Previous validation of the tool showed promising results, and additional studies on the PaPaS have been reported [7,[13][14][15]. However, there are few studies on the clinical application and usefulness of the PaPaS in various settings.
In South Korea, owing to perceptions and sociocultural trends among health care providers and the general public, the focus has been on intensive curative treatment rather than the quality of life of seriously ill children. Moreover, PPC development has been slow over the past few decades, and PPC is only available in limited settings [16]. The Ministry of Health and Welfare started a pilot project for consultant-based PPC in September 2018 to establish and develop PPC as a national health care policy for children with life-threatening or life-shortening illnesses in South Korea. This project was executed to promote PPC through financial and policy support to PPC providers and to monitor palliative care service delivery processes and their outcomes for application to future nationwide policies. During the pilot project, to identify children in need of palliative care, primary health care providers were instructed to use the PaPaS to evaluate children's and their families' palliative care needs and to refer them to the PPC team based on the results. Therefore, we aimed to investigate the agreement between the primary care clinicians of the referring teams and the PPC teams in measuring each palliative care needs domain included in the PaPaS. Ultimately, we wanted to investigate whether the PaPaS can be a useful tool for primary referring clinicians who are not as familiar as the PPC team with the palliative care needs of children and their families.
Setting
The PaPaS was developed to assess the palliative care needs of children and improve the timely identification of children who can benefit from a palliative care approach. It consists of five domains: the trajectory of disease and impact on daily activities; expected outcomes of disease-directed treatment and burden of treatment; symptom burden; preferences of the patient, parents, or health care professionals; and estimated life expectancy. A score ≥ 15 indicates that PPC can be initiated [13,14].
Before beginning the national pilot program, we developed the Korean version of the PaPaS with permission from the original author. Translation, back-translation, and cognitive interviews with experts in palliative care were performed during the translation process. Since the scale was developed for children between 1 and 18 years old, some domains of the PaPaS were difficult to apply in patients < 1 year old [13]. As there is no validated tool for assessing the palliative care needs of patients < 1 year old, we excluded four domains ( 2) from the scale which were not appropriate for infants and developed a modified version for this age group after discussion with the author of the PaPaS. In the pilot program, the criterion for registration and initiating palliative care was set as an index score of 15 for patients ≥1 year old, as recommended by the original study, and 10 for patients < 1 year old according to the modification. Primary care clinicians and the PPC team assessed the patient separately within a day of the consultation.
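To make the registration rule concrete, the short sketch below encodes the cut-offs described above: summed domain scores of ≥ 15 trigger registration for patients aged ≥ 1 year, and ≥ 10 for infants assessed with the modified scale. The function name and the example scores are illustrative assumptions; the actual items and weights belong to the published PaPaS instrument.

```python
def papas_referral(domain_scores, age_years):
    """Return True if the summed PaPaS domain scores meet the referral cut-off.

    Cut-offs follow the pilot-programme rule described in the text:
    >= 15 for patients aged >= 1 year, >= 10 for infants assessed with the
    modified scale. Individual domain items and weights are not encoded here.
    """
    total = sum(domain_scores)
    cutoff = 15 if age_years >= 1 else 10
    return total >= cutoff

# Example: a 6-year-old with domain scores summing to 22 meets the cut-off.
print(papas_referral([6, 5, 4, 4, 3], age_years=6))   # True
print(papas_referral([3, 2, 2, 2], age_years=0.5))    # False (total 9 < 10)
```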
Design
We conducted a retrospective cohort study of patients referred to the PPC team. In Korea, palliative care was a very unfamiliar concept, and only a few doctors were confident in providing it [17,18]. Therefore, we assumed that primary care physicians had limited knowledge of PPC. In contrast, the PPC specialists in these teams were required to complete 76 h of a palliative care curriculum before participating in the pilot program. The PaPaS scores of the primary clinical team were compared with those of the PPC team. During the development of the PaPaS, assessing the performance status of patients was considered an important factor in investigating the prognosis and palliative needs of patients and families [13]. Therefore, the PaPaS was compared with the Lansky performance scale, which measured the performance status of patients < 16 years old at the same time as the PaPaS evaluation by the PPC team [19].
Data collection and analysis
The study population comprised patients referred to the PPC team at Seoul National University Children's Hospital and Severance Children's Hospital between July 2018 and October 2019. Among them, patients assessed using the PaPaS by both the primary clinical and PPC teams were included. Data were collected from the patients' hospital records to analyze the clinical characteristics and PaPaS.
Cohen's Kappa was used to measure the agreement between two raters, and linear regression was performed to assess the correlation between the PaPaS and the Lansky performance scale. The total scores and score of each domain were compared between two raters using a paired t-test. We performed subgroup analyses according to the primary care physicians' category: medical specialists or residents. For estimating the sensitivity and specificity of PaPaS assessment by the primary clinical team, we compared it with the PPC team to determine whether the sum of the individual domain scores met the criteria for registration. STATA version 15.1 (Stata-Corp, College Station, TX, USA) was used for statistical analysis.
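As a rough illustration of the agreement and screening-accuracy analyses described above, the sketch below computes Cohen's kappa between the two teams' registration decisions and the sensitivity and specificity of the primary clinical team, taking the PPC team's decision as the reference. The toy data and the use of Python/scikit-learn are illustrative assumptions; the study itself used STATA 15.1.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical registration decisions (1 = meets PaPaS cut-off, 0 = below it)
# for the same patients scored by the PPC team and the primary clinical team.
ppc_team     = np.array([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
primary_team = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1])

kappa = cohen_kappa_score(ppc_team, primary_team)

# Treat the PPC team's decision as the reference standard.
tn, fp, fn, tp = confusion_matrix(ppc_team, primary_team).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"kappa = {kappa:.2f}, sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```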
Results
A total of 467 patients were referred to the PPC team at both hospitals between July 2018 and October 2019. Among them, 109 patients (23.3%) had the PaPaS assessed by both the primary clinical and PPC teams. The median age was 8 years, and over 60% were boys. In hospital A, 80.0% of subjects were cancer patients, whereas 72.2% were non-cancer patients in hospital B (p < 0.001). The performance scale and PaPaS scores were not statistically different between the two hospitals. The mean PaPaS scores of patients aged < 1 and ≥ 1 year old were 14.2 (±4.0) and 22.0 (±5.3), respectively ( Table 1).
The mean scores did not differ between cancer and non-cancer patients (22.8 vs. 21.5, p = 0.271). Most patients (93.6%) were referred from the department of pediatrics. Two pediatricians and two nurses were the assessors on the PPC team, and their median duration of experience in pediatrics was 9.5 years. The PaPaS was assessed by pediatric specialists on the primary clinical teams for 55 patients (50.5%); the remaining patients were assessed by pediatric residents (Table 2).
Good agreement (kappa = 0.546) was observed for the total PaPaS scores of patients aged ≥1 year between the primary clinical and PPC teams. Medical specialists showed better agreement than residents (kappa = 0.673 vs. 0.442). Additionally, the PaPaS scores correlated with those obtained using the Lansky performance scale. Both the total PaPaS score and the Q1.1 score, which assesses patient performance, correlated with the Lansky score for the assessments of both teams (Table 3).
Although the mean PaPaS score was higher in the PPC team, the scores were > 15 in both groups for patients aged ≥1 year. In subgroup analysis, the mean score of medical specialists was comparable with the PPC team (22.3 ± 5.1 vs. 22.3 ± 6.8, p-value = 1.0). However, pediatric residents showed a lower mean score than the PPC team (18.8 ± 6.3 vs. 21.7 ± 5.5, p-value < 0.001). Five out of nine sub-scales were statistically different between the two groups, and the scores were mostly higher in the PPC team. For patients aged < 1 year, the total scores (mean) were not significantly different between the two groups and two out of seven sub-scales were different (Table 4). Comparing the total PaPaS scores between the primary clinical and PPC teams, the PaPaS assessment by the primary clinical teams showed a sensitivity of 83.7% and specificity of 100%.
Discussion
To our knowledge, this is the first study to compare the palliative care needs assessed by primary clinical and PPC teams using the PaPaS. We found that the PaPaS can be a useful tool for assessing the palliative care needs of children with life-threatening illnesses in consultation-based PPC services in tertiary hospitals in Korea.
Although it is ideal to deliver PPC through a shared care model in which the PPC team works together with primary care clinicians, there is still little consensus between the PPC team and primary clinical teams who are not familiar with PPC services regarding the needs of children and their families [11,13,20].
Neonates and infants aged < 1 year have been excluded from earlier studies due to different disease trajectories and needs. Since there were no validated tools in previous studies, we evaluated the palliative care needs of this age group using a modified PaPaS tool. As the mean total scores did not differ between the two groups and were higher than the cut-off value (10 points) in both groups, we expect the modified scale to serve as a preliminary tool, given that there are no verified tools for patients < 1 year old.
Furthermore, the PaPaS score correlated with the Lansky performance status. Both the total PaPaS score and the Q1.1 score, which evaluates patient performance, demonstrated a meaningful correlation with the patient's performance. Therefore, the PaPaS may reflect the needs of patients and their families by capturing deterioration in patient performance.
The strengths of this study include, first, the evaluation of the PaPaS as used by primary care clinicians, which is the purpose for which the tool was developed. Second, we attempted to validate the Korean version of the PaPaS, and this is the first study to evaluate a translated version of the PaPaS. Third, we attempted to apply the tool to infants, albeit with limitations. Lastly, we compared the PaPaS with the Lansky performance scale to assess the performance evaluation section of the tool.
However, it is important to note some limitations. First, this was not a prospective study. Since PPC is still an unfamiliar model of care in South Korea due to a lack of understanding among the general public and health care providers, this study was performed as a retrospective analysis of PaPaS data derived from the national pilot project. For the same reason, the subjects were limited to patients referred to the PPC team. Furthermore, both teams assessed the PaPaS in only 23.3% of referrals, although primary care physicians were encouraged to assess patients with the PaPaS. Therefore, the results should be interpreted cautiously because of selection bias. Future studies including patients with complex diseases are needed, and policy makers and clinicians should continue to integrate palliative care into usual medical care. In addition, if future studies can assess the association between the PaPaS and patient outcomes (e.g., morbidity, mortality, PPC visits), the potential benefits of the PaPaS could be demonstrated. Third, the modified PaPaS used for neonates and children < 1 year old in this study has not yet been validated, and the number of enrolled patients was too small for validation. For those < 1 year old, validation of the modified PaPaS in a larger study is required. Fourth, only one of the five domains was compared with the Lansky performance scale. This may be why the correlation between the two groups, although statistically significant, was not exceptionally strong. Because there are few tools assessing the need for PPC, it was difficult to validate the PaPaS externally. Future studies are needed for external validation of the PaPaS. Finally, this study was limited to consultation-based PPC services in a tertiary hospital setting. Understanding of PPC and its availability may vary depending on health care systems, medical conditions, culture, priorities, and values for medical services; therefore, studies in various settings are required.
In conclusion, the PaPaS showed good agreement between primary care and palliative care teams, suggesting that it may be a useful tool to assess the palliative care needs of patients and their families. Although the referring team tended to underestimate palliative care needs compared with the PPC team, their PaPaS scores were sufficient to identify the need for palliative care. Thus, the PaPaS could help primary care clinicians identify the need for and appropriate timing of referrals, especially in countries where PPC has been newly introduced.
"year": 2021,
"sha1": "71e75e8f01c361bfec56d72a8b2d81ca8b638785",
"oa_license": "CCBY",
"oa_url": "https://bmcpalliatcare.biomedcentral.com/track/pdf/10.1186/s12904-021-00765-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "71e75e8f01c361bfec56d72a8b2d81ca8b638785",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
28102315 | pes2o/s2orc | v3-fos-license | The roles of Cbl-b and c-Cbl in insulin-stimulated glucose transport.
Previous studies suggest that the stimulation of glucose transport by insulin involves the tyrosine phosphorylation of c-Cbl and the translocation of the c-Cbl/CAP complex to lipid raft subdomains of the plasma membrane. We now demonstrate that Cbl-b also undergoes tyrosine phosphorylation and membrane translocation in response to insulin in 3T3-L1 adipocytes. Ectopic expression of APS facilitated insulin-stimulated phosphorylation of tyrosines 665 and 709 in Cbl-b. The phosphorylation of APS produced by insulin drove the translocation of both c-Cbl and Cbl-b to the plasma membrane. Like c-Cbl, Cbl-b associates constitutively with CAP and interacts with Crk upon insulin stimulation. Cbl proteins formed homo- and heterodimers in vivo, which required the participation of a conserved leucine zipper domain. A Cbl mutant incapable of dimerization failed to interact with APS and to undergo tyrosine phosphorylation in response to insulin, indicating an essential role of Cbl dimerization in these processes. Thus, both c-Cbl and Cbl-b can initiate a phosphatidylinositol 3-kinase/protein kinase B-independent signaling pathway critical to insulin-stimulated GLUT4 translocation.
Molecular protein adapters have emerged as essential components of signal transduction pathways. The Cbl family of adapters, which comprises c-Cbl, Cbl-b, and Cbl-c/Cbl-3, has been implicated in receptor tyrosine kinase signaling. These related gene products all have a tyrosine kinase-binding (TKB) domain, a RING finger domain, and a proline-rich region (1,2). The TKB domain, also called the Cbl-N domain, is an integrated phosphopeptide-binding platform composed of a four-helical bundle, a Ca2+-binding EF hand, and an SH2 domain (3). The Cbl family proteins are tyrosine-phosphorylated in response to a wide variety of stimuli, including epidermal growth factor, platelet-derived growth factor, various antigens, integrins, and cytokines (1,2,4). Moreover, Cbl and Cbl-b interact with critical signaling molecules in both phosphorylation-dependent and -independent fashions. Their binding partners include Src family tyrosine kinases, Zap-70/Syk family tyrosine kinases, the p85 subunit of PI 3-kinase, Vav, Crk, and the Slp-76/BLNK family of linker proteins (5-9). Recent studies have also shown that Cbl family proteins negatively regulate protein-tyrosine kinase signaling. This effect may be dependent, at least in part, on the activity of Cbl as an E3 ubiquitin-protein ligase (10-14). However, evidence has also emerged for positive roles of Cbl proteins in cellular signaling processes. For example, c-Cbl facilitates Met-induced activation of c-Jun N-terminal kinase and ERK in HeLa cells via two separate mechanisms. Although binding of tyrosine-phosphorylated c-Cbl to c-Crk is crucial for c-Jun N-terminal kinase activation, the activation of ERK in response to Met is Crk-independent (15). A recent study has also shown that Cbl-b positively regulates the activation of phospholipase C-γ2 by Btk in B cells (16).
Our previous studies demonstrated that c-Cbl plays an important role in insulin action. This function is regulated by two additional adapter proteins, APS (for adapter containing PH and SH2 domains) and CAP (for Cbl-associated protein). Insulin stimulates the tyrosine phosphorylation of c-Cbl in 3T3-L1 adipocytes, inducing its association with Crk (9). The phosphorylation of c-Cbl by the insulin receptor kinase is facilitated by APS. Upon stimulation, the insulin receptor catalyzes the tyrosine phosphorylation of APS on tyrosine 618. Once phosphorylated, APS recruits c-Cbl to the insulin receptor for subsequent phosphorylation of tyrosines 700 and 774 (17). CAP contains three SH3 domains in its C terminus and a region of homology to the gut peptide sorbin (SoHo domain) in its N terminus. CAP constitutively interacts with Cbl via its C-terminal SH3 domain (18). Upon Cbl phosphorylation, the CAP/Cbl complex migrates to caveolin-enriched lipid rafts, as a result of the interaction of the SoHo domain on CAP with the lipid raft-associated protein flotillin (19,20). This leads to the recruitment of the Crk/C3G complex to this microdomain of the plasma membrane, where C3G, a guanyl nucleotide exchange factor, activates the small G protein TC10. The activation of TC10 has been shown to occur independently of the PI 3-kinase pathway and, more importantly, to be crucial to insulin-stimulated GLUT4 translocation (21).
c-Cbl and Cbl-b are ubiquitously expressed in a variety of mammalian cells. They share an evolutionarily conserved N-terminal region, and both possess multiple tyrosine phosphorylation sites at the C-terminal region (1,2,4). In 3T3-L1 adipocytes, overexpression of the Y618F APS mutant inhibited insulin-stimulated tyrosine phosphorylation and subsequent Crk binding of c-Cbl, as well as the membrane translocation of GLUT4 (17). In separate studies, overexpression of a CAP mutant in which either the SH3 domains or the SoHo domain were deleted blocked the recruitment of Cbl to lipid rafts and GLUT4 translocation in response to insulin (19,20).
Although these results strongly suggest an essential signaling role of c-Cbl in insulin-stimulated glucose transport, it remains to be elucidated whether Cbl-b becomes tyrosine-phosphorylated and functions as an adapter protein in this pathway, and how these proteins interact with the insulin receptor. We report here that both c-Cbl and Cbl-b undergo tyrosine phosphorylation and membrane translocation in response to insulin and, further, that these events require Cbl dimerization.
MATERIALS AND METHODS
Antibodies-The HA (F-7), Myc (9E10), c-Cbl (C-15), Cbl-b (G-1 and C-20), and phospho-ERK antibodies were purchased from Santa Cruz Inc. The FLAG (M2) monoclonal antibody was obtained from Stratagene. The anti-phosphotyrosine monoclonal antibody (4G10) was from Upstate Biotechnology, Inc. The Crk monoclonal antibody was purchased from Transduction Laboratories. Horseradish peroxidase-linked secondary antibodies were from Pierce. The Alexa Fluor secondary antibodies were from Molecular Probes.
Plasmids and Mutagenesis-The Myc-APS (pRK5-Myc-APS) and HA-Cbl-b (pCEFL-Cbl-b-HA) constructs were kindly provided by Dr. David Ginty and Dr. Stanley Lipkowitz, respectively. c-Cbl full-length cDNA was derived from pSX-HA-Cbl by PCR. HA-tagged c-Cbl was made as described previously (17). FLAG-tagged Cbl was constructed by placing c-Cbl cDNA with FLAG tag fused at the C terminus in frame in the BamHI and EcoRI sites of pRK7 vector. CAP full-length cDNA was derived from FLAG-tagged CAP constructs made as described previously (19,20,22). Myc-CAP was constructed by cloning CAP cDNA in the BamHI and EcoRI sites of pRK7-Myc vector. All mutated forms of Cbl and CAP were generated by using the Stratagene Quick Change mutagenesis kit, according to the protocol from the manufacturer. The mutations were confirmed by automated DNA sequencing.
Cell Culture and Transfection-CHO-IR cells were maintained in α-minimal essential medium containing 10% fetal bovine serum. COS-1 cells were grown in DMEM containing 10% fetal bovine serum. 3T3-L1 fibroblasts were maintained in DMEM supplemented with 10% calf serum, 100 units/ml penicillin G sodium, and 100 µg/ml streptomycin sulfate. Differentiation to adipocytes was induced as previously described (23). The cells were then cultured in DMEM containing 10% fetal bovine serum. Before insulin treatment, CHO-IR cells were serum-deprived for 3 h in F-12 Ham's medium. 3T3-L1 adipocytes were routinely serum-starved for 3 h in low glucose DMEM with 0.5% bovine serum albumin. Both CHO-IR cells and 3T3-L1 adipocytes were transfected by electroporation as described (19,24). COS-1 cells in 60-mm dishes were transfected by using FuGENE 6 reagent (Roche Diagnostics) as described previously (25).
Immunoprecipitation and Immunoblotting-Cells in 60-mm dishes were washed twice with ice-cold phosphate-buffered saline, and were lysed for 30 min at 4°C with buffer containing 50 mM Tris-HCl, pH 8.0, 135 mM NaCl, 1% Triton X-100, 1.0 mM EDTA, 1.0 mM sodium pyrophosphate, 1.0 mM sodium orthovanadate, 10 mM NaF, and protease inhibitors (1 tablet/7 ml of buffer) (Roche Diagnostics). The clarified lysates were incubated with the indicated antibodies for 2 h at 4°C. The immune complexes were precipitated with protein A/G-agarose (Santa Cruz, Inc.) for 1 h at 4°C and were washed extensively with lysis buffer before solubilization in SDS sample buffer. For anti-Cbl immunoprecipitation, Cbl antibodies conjugated on agarose beads were incubated with 3 mg of lysate protein for 2 h to overnight at 4°C. Bound proteins were resolved by SDS-PAGE and transferred to nitrocellulose membranes. Individual proteins were detected with the specific antibodies and visualized by blotting with horseradish peroxidase-conjugated secondary antibodies.
Subcellular Fractionation-Subcellular fractions including cytosol, plasma membrane, high density membranes, and low density membranes were isolated from 3T3-L1 adipocyte cell homogenates using a combination of differential and equilibrium centrifugation as described by Fisher and Frost (26). Briefly, cells plated on 150-mm plates were incubated in DMEM containing 0.5% fetal bovine serum for 3 h before treating cells with or without 100 nM insulin for various times. Cells were washed three times on ice with ice-cold PBS before collecting the cells in 10 ml of TES buffer (20 mM Tris, 1 mM EDTA, and 255 mM sucrose, pH 7.4) containing protease inhibitor mixture (Roche Diagnostics), 100 µM phenylmethylsulfonyl fluoride, 100 µM sodium vanadate, 100 µM sodium pyrophosphate, and 1 mM sodium fluoride. Cells were homogenized in a 10-ml Potter-Elvehjem homogenizing flask using 20 strokes. The cell homogenates were fractionated into various fractions, and the final pellets were resuspended in TES. Protein concentration was determined using a Bio-Rad protein assay kit. Fifty micrograms of each sample was separated by SDS-PAGE, transferred to nitrocellulose membrane, and immunoblot analysis performed using appropriate antibodies.
Fluorescence Microscopy-Electroporated 3T3-L1 adipocytes were grown on glass cover slips in 6-well dishes. After insulin treatment, cells were fixed with 10% formalin for 15 min, permeabilized with 0.5% Triton X-100 for 5 min, and then blocked with 1% bovine serum albumin and 1% ovalbumin for 1 h. Primary and Alexa Fluor secondary antibodies were used at 2 µg/µl in blocking solution, and samples were mounted on glass slides with Vectashield (Vector Laboratories). Cells were imaged using confocal fluorescence microscopy.
Images were then imported into Adobe Photoshop (Adobe Systems, Inc.) for processing.
RESULTS
Insulin Stimulates the Tyrosine Phosphorylation of Cbl-b in 3T3-L1 Adipocytes-To address a possible role of Cbl-b in insulin action, we examined the tyrosine phosphorylation of this protein in 3T3-L1 adipocytes. Lysates were prepared from cells treated with or without insulin, and incubated with anti-Cbl-b antibodies. The resultant immunoprecipitates were separated by SDS-PAGE, and tyrosine phosphorylation was analyzed by immunoblotting with anti-phosphotyrosine antibodies. As shown in Fig. 1A, Cbl-b was not tyrosine-phosphorylated in unstimulated cells. The addition of insulin caused a marked tyrosine phosphorylation of a 120-kDa protein, representing Cbl-b. Intense phosphorylation of Cbl-b was detected within 2 min of insulin stimulation, reached a maximum by 5 min, and declined thereafter. Anti-Cbl-b immunoblotting confirmed that equal amounts of Cbl-b protein were immunoprecipitated at each time point. A similar time course of c-Cbl tyrosine phosphorylation in response to insulin was observed. Thus, insulin stimulation of 3T3-L1 adipocytes induces a prominent, rapid, and transient tyrosine phosphorylation of both Cbl-b and c-Cbl.
To further characterize insulin-induced Cbl-b tyrosine phosphorylation, we pretreated cells with inhibitors of different kinase pathways downstream of the insulin receptor (Fig. 1B). Treatment of cells with the mitogen-activated protein kinase/ ERK kinase inhibitor PD098059 to block the mitogen-activated protein kinase pathway or the PI 3-kinase inhibitor wortmannin to block the PKB pathway did not change the stimulation of Cbl-b tyrosine phosphorylation observed in response to insulin. The effects of PD098059 and wortmannin on the activation of mitogen-activated protein kinase and PKB were demonstrated by immunoblotting of cell lysates with respective phosphospecific antibodies (Fig. 1B). In adipocytes, c-Cbl phosphorylation seems to be catalyzed by the insulin receptor rather than Src family tyrosine kinases such as Fyn (17). In this regard, pretreatment of cells with the selective Src kinase inhibitor PP2 did not affect the tyrosine phosphorylation of Cbl-b in response to insulin. These results suggest that the tyrosine phosphorylation of Cbl-b and c-Cbl is directly catalyzed by the insulin receptor.
Cbl-b and c-Cbl Move to the Plasma Membrane in Response to Insulin-In 3T3-L1 adipocytes, c-Cbl is translocated to the plasma membrane in response to insulin (19). Hence, we wanted to test whether the Cbl-b protein also changes its intracellular localization in response to insulin. To this end, serum-starved 3T3-L1 adipocytes were treated with insulin, and membranes were prepared by differential centrifugation. Equal amounts of total protein from low density microsome and plasma membrane fractions were separated by SDS-PAGE and probed with anti-c-Cbl and anti-Cbl-b antibodies (Fig. 2). Under basal conditions both c-Cbl and Cbl-b were found only at low levels in the plasma membrane fraction. However, both proteins were significantly increased in the plasma membrane fraction upon treatment of cells with insulin for 5 min. Flotillin, a lipid raft protein, did not change its membrane localization in response to insulin. The results indicate an insulin-triggered movement of Cbl-b to the plasma membrane similar to that observed for c-Cbl.
In Vivo Interactions of Cbl-b with Crk and CAP-Because tyrosine-phosphorylated c-Cbl binds to the SH2 domain of Crk (9), we evaluated whether the insulin-stimulated tyrosine phosphorylation of Cbl-b might result in a similar association. Lysates derived from unstimulated or insulin-stimulated 3T3-L1 adipocytes were immunoprecipitated with anti-Crk antibodies. As shown by anti-Cbl-b immunoblotting of the immunoprecipitates, insulin rapidly stimulated the association of Cbl-b with endogenous Crk (Fig. 3A). Anti-Crk immunoblotting revealed that equal amounts of Crk were immunoprecipitated from unstimulated and insulin-stimulated samples.
We next examined the ability of Cbl-b to form complexes with CAP in 3T3-L1 adipocytes. Previous studies have shown that the third SH3 domain of CAP (SH3C) is responsible for its interaction with c-Cbl (17,18). We overexpressed Myc-tagged CAP or CAPΔSH3C along with FLAG-tagged Cbl-b in cells, and evaluated the co-immunoprecipitation of FLAG-Cbl-b with Myc-CAP (Fig. 3B). Immunoblotting of the cell lysates indicated that FLAG-Cbl-b and the two different Myc-CAP proteins were expressed at comparable levels. FLAG-Cbl-b was found to interact specifically with wild type CAP. However, deletion of the third SH3 domain in CAP completely abolished the coprecipitation of FLAG-Cbl-b. Similar results were obtained by co-immunoprecipitation of endogenous Cbl-b with Myc-CAP. These data indicate that Cbl-b and CAP are capable of forming complexes via an interaction between the third SH3 domain of CAP and the proline-rich region(s) of Cbl-b.
FIG. 1 (legend fragment): Cbl proteins were detected in the immunoprecipitates by immunoblotting with anti-phosphotyrosine (pY) and respective anti-Cbl antibodies. Phosphorylated ERK was detected in cell lysates by immunoblotting with an anti-phospho-ERK antibody. pMAPK, phosphorylated mitogen-activated protein kinase. B, serum-starved 3T3-L1 adipocytes were pretreated with Me2SO alone (DMSO), 25 µM PD98059 (PD), 100 nM wortmannin (Wart), or 1 µM PP2 for 30 min prior to treatment with or without 100 nM insulin for 2 min. Cbl-b was immunoprecipitated from cell lysates and was detected by anti-phosphotyrosine and anti-Cbl-b immunoblotting. Phosphorylated Akt and ERK were detected in cell lysates by immunoblotting with phosphospecific antibodies.
FIG. 2. Translocation of c-Cbl and Cbl-b to plasma membrane in response to insulin. 3T3-L1 adipocytes were serum-starved and treated with 100 nM insulin for various times. Cells were homogenized and subcellular fractionation was performed as described under "Materials and Methods." 50 µg of total protein from a low density microsomal fraction (LDM) and plasma membrane (PM) was separated by SDS-PAGE. c-Cbl, Cbl-b, and flotillin were detected by immunoblotting.
APS Mediates the Tyrosine Phosphorylation of Cbl-b by the Insulin Receptor-APS is a Lnk family adaptor protein that is tyrosine-phosphorylated by the insulin receptor (17,27,28). Insulin-stimulated phosphorylation of tyrosine 618 in APS is necessary for its association with c-Cbl and the subsequent tyrosine phosphorylation of c-Cbl by the insulin receptor in 3T3-L1 adipocytes (17). To determine whether APS also couples Cbl-b to the insulin receptor for phosphorylation, we expressed HA-tagged Cbl-b alone or with Myc-tagged APS in 3T3-L1 adipocytes (Fig. 4). Following treatment of cells with or without insulin, comparable amounts of HA-Cbl-b were isolated by anti-HA immunoprecipitation. Anti-phosphotyrosine immunoblotting revealed no insulin-stimulated tyrosine phosphorylation of HA-Cbl-b when expressed alone. Co-expression with Myc-APS dramatically increased the tyrosine phosphorylation of HA-Cbl-b in response to insulin. These results indicate that APS is capable of mediating Cbl-b tyrosine phosphorylation by the insulin receptor.
Previous studies have shown that tyrosines 700 and 774 are the two major residues in c-Cbl phosphorylated in response to insulin. Phosphorylation of these two sites in c-Cbl mediates its interaction with Crk. Cbl-b has tyrosines at 709 and 665 in its C-terminal region with flanking sequences that are homologous to Tyr 700 and Tyr 774 in c-Cbl, respectively (29). To determine whether the insulin receptor phosphorylates these two sites in Cbl-b, we generated a Cbl-b construct with both Tyr 665 and Tyr 709 mutated to phenylalanine. Compared with the wild type Cbl-b protein, the phosphorylation of the Y665F/Y709F mutant was profoundly decreased in response to insulin when Myc-APS was co-expressed (Fig. 4). Thus, tyrosines 665 and 709 are the major sites phosphorylated by the insulin receptor in the presence of APS.
APS Promotes the Movement of Cbl Proteins to the Plasma Membrane in Response to Insulin-APS is constitutively localized at the plasma membrane in 3T3-L1 adipocytes (17). To assess the role of APS in the translocation of Cbl proteins to the plasma membrane upon insulin stimulation, FLAG-c-Cbl or HA-Cbl-b was expressed in 3T3-L1 adipocytes in the presence or absence of Myc-APS, followed by double immunofluorescence staining (Fig. 5). Cells were treated with or without insulin, fixed, and incubated with anti-Myc and anti-FLAG or anti-HA antibodies that were later labeled with complementary fluorescence-conjugated secondary antibodies. Consistent with our previous report, Myc-APS was localized at the plasma membrane independent of insulin treatment. Under basal conditions, FLAG-c-Cbl and HA-Cbl-b were both present throughout the cytoplasm regardless of co-expression of Myc-APS. Insulin stimulation did not significantly change this diffusive distribution of FLAG-c-Cbl and HA-Cbl-b when Myc-APS was not coexpressed. However, the co-expression of APS was found to cause a robust recruitment of c-Cbl and Cbl-b to the plasma membrane in response to insulin. A single Gly to Glu mutation in the TKB domain of Cbl-b prevented this membrane translocation induced by APS co-expression, whereas Y665F/Y709F double mutations had no effect. These results indicate that the interaction between tyrosine-phosphorylated APS and the TKB domain of Cbl is crucial for the membrane translocation of Cbl and, further, that the tyrosine phosphorylation of Cbl requires its translocation to the plasma membrane in response to insulin.
Homo- and Heterodimerization of Cbl-b and c-Cbl-It has been reported previously that c-Cbl forms homodimers through its C-terminal leucine zipper domain, and deletion of this domain decreases Cbl tyrosine phosphorylation and its association with the epidermal growth factor receptor (30). Because the leucine zipper domain is highly conserved between Cbl-b and c-Cbl, we tested whether Cbl-b would form heterodimers with c-Cbl in 3T3-L1 adipocytes. Cells were transfected with vector alone or HA-tagged c-Cbl. HA-c-Cbl was then immunoprecipitated from cell lysates, and the immune complexes were immunoblotted for the presence of endogenous Cbl-b. As can be seen in Fig. 6A, a protein of ~120 kDa, which corresponds to the correct size of Cbl-b, was detected in anti-HA immunoprecipitates from cells expressing HA-c-Cbl, whereas no Cbl-b could be detected in immunoprecipitates from vector-transfected cells. HA-c-Cbl was isolated only from cells expressing HA-c-Cbl. To examine whether endogenous Cbl proteins also heterodimerize, we immunoprecipitated c-Cbl and Cbl-b from lysates of 3T3-L1 adipocytes with the appropriate antibodies (Fig. 6B). As negative controls, immunoprecipitations were performed by using nonspecific immunoglobulins. Cbl-b was specifically co-immunoprecipitated with c-Cbl. Stimulation of cells with insulin did not influence this interaction. Reciprocally, c-Cbl was found to co-precipitate with Cbl-b when an antibody recognizing the N-terminal region of Cbl-b was used for immunoprecipitation (Fig. 6C). However, no c-Cbl was detected in the Cbl-b immunoprecipitates when an antibody against the C-terminal region of Cbl-b was used. Immunoblotting showed that similar amounts of Cbl-b were isolated with both antibodies. Collectively, these results indicate that Cbl-b constitutively heterodimerizes with c-Cbl in vivo and, further, that this dimerization may involve the C-terminal region of Cbl-b.
To examine the role of the leucine zipper domain in the heterodimerization of Cbl proteins, we generated a mutant of c-Cbl in which the leucine zipper domain was deleted from the full-length FLAG-tagged c-Cbl (FLAG-c-CblΔLZ). COS-1 cells were transfected with vector alone, FLAG-c-Cbl/wild type, FLAG-c-CblΔLZ, or FLAG-c-Cbl/1-700 in the presence of either HA-c-Cbl or HA-Cbl-b. As shown in Fig. 7A, all FLAG-c-Cbl proteins were expressed at comparable levels. Although FLAG-c-Cbl/wild type dimerized with both HA-tagged Cbl proteins, neither c-CblΔLZ nor c-Cbl/1-700 associated with c-Cbl or Cbl-b. This result suggests that the leucine zipper domain is absolutely required for the homodimerization and heterodimerization to occur. Because the region containing only the acidic and leucine zipper domains is sufficient to mediate homodimerization of c-Cbl, we finally tested the interaction of HA-Cbl-AcLZ, which contains only the acidic and leucine zipper domains, with endogenous c-Cbl and Cbl-b proteins in 3T3-L1 adipocytes (Fig. 7B). HA-c-Cbl-AcLZ was found to bind c-Cbl but not Cbl-b. In contrast, HA-Cbl-b-AcLZ bound Cbl-b but not c-Cbl. Therefore, the acidic and leucine zipper domains are sufficient to mediate homodimerization of Cbl proteins. However, additional upstream sequences may be involved in the heterodimerization between c-Cbl and Cbl-b.
Dimerization Is Required for Efficient Binding of c-Cbl to APS and Subsequent Tyrosine Phosphorylation of c-Cbl following Insulin Stimulation-The stimulation of the insulin receptor induces the rapid association of c-Cbl with APS and the recruitment of c-Cbl to the insulin receptor for tyrosine phosphorylation (17). To test whether these events are influenced by the dimerization of Cbl, CHO-IR cells were transiently transfected with FLAG-c-Cbl/wild type or FLAG-c-CblΔLZ along with Myc-APS. Following treatment of cells with or without insulin, comparable amounts of Myc-APS were isolated by anti-Myc immunoprecipitation (Fig. 8A). Consistent with our previous findings, insulin stimulated the rapid association of FLAG-c-Cbl with Myc-APS. Deletion of the leucine zipper domain in c-Cbl completely abolished its interaction with Myc-APS in response to insulin. Immunoblotting of cell lysates showed that equivalent amounts of wild type c-Cbl and c-CblΔLZ were expressed in all samples.
To analyze the role of Cbl dimerization in its insulin-stimulated tyrosine phosphorylation, FLAG-c-Cbl proteins were immunoprecipitated by anti-FLAG antibodies, and the immune complexes were blotted with antibodies against phosphotyrosine. As shown in Fig. 8B, insulin stimulation resulted in a dramatic increase in tyrosine phosphorylation of FLAG-c-Cbl/wt when APS was co-expressed. However, FLAG-c-CblΔLZ showed no detectable phosphorylation under the same conditions. Thus, the leucine zipper-mediated dimerization of c-Cbl appears to be required for the insulin-stimulated association of c-Cbl with APS and for the efficient tyrosine phosphorylation of c-Cbl. Finally, the effect of dimerization on the subcellular localization of c-Cbl was examined. 3T3-L1 adipocytes were transfected with FLAG-c-Cbl/wt or FLAG-c-CblΔLZ (Fig. 8C). After fixation, cells were incubated with anti-FLAG antibodies followed by fluorescence-conjugated secondary antibodies. Although wild type c-Cbl localized diffusely throughout the cytoplasm, c-CblΔLZ was found to be confined predominantly to the plasma membrane. Insulin treatment of cells did not change the distribution of either protein (data not shown). This result suggests that Cbl dimerization is required for its cytoplasmic retention.
Expression of a C-terminal Truncated Form of c-Cbl Attenuates Insulin-stimulated GLUT4 Translocation-A critical role of Cbl in insulin-stimulated GLUT4 translocation has been implicated by several studies, using approaches in which c-Cbl tyrosine phosphorylation was blocked by expressing mutated forms of Cbl-binding proteins. For example, overexpression in 3T3-L1 adipocytes of either a CAP mutant (CAPΔSH3) or an APS mutant (APS/Y618F) deficient in Cbl binding inhibits insulin-stimulated tyrosine phosphorylation of c-Cbl and membrane translocation of GLUT4 (17,19). To provide additional evidence that Cbl phosphorylation is an essential step in insulin-stimulated glucose transport, we tested the dominant negative effect of ectopic expression of a C-terminal deletion mutant of c-Cbl (c-Cbl/1-700) in 3T3-L1 adipocytes. Although it lacks the C-terminal tyrosine phosphorylation sites for Crk binding and sequences for mediating dimerization, c-Cbl/1-700 possesses the intact proline-rich region for CAP association. To examine whether c-Cbl/1-700 would still form a stable complex with CAP, we co-expressed vector alone, FLAG-tagged c-Cbl or c-Cbl/1-700 along with Myc-CAP in 3T3-L1 adipocytes (Fig. 9). Immunoblotting of cell lysates revealed that Myc-CAP and FLAG-c-Cbl were expressed at comparable levels. Anti-FLAG immunoprecipitation followed by anti-Myc immunoblotting showed that c-Cbl/1-700 bound CAP with an affinity similar to that observed with the wild type c-Cbl. The result indicated that the deletion of the C-terminal 206 amino acids did not affect the ability of c-Cbl to form complexes with CAP.
We next tested whether expression of c-Cbl/1-700 would affect insulin-stimulated GLUT4 translocation. 3T3-L1 adipocytes were transfected with vector alone, FLAG-c-Cbl, HA-Cbl-b, or FLAG-c-Cbl/1-700 along with a construct encoding an enhanced green fluorescence protein fusion of GLUT4 (GLUT4-EGFP). The cells were treated with or without insulin, and the localization of GLUT4-EGFP was examined by fluorescence microscopy (Fig. 10A). The co-expressed Cbl proteins were visualized by indirect immunostaining. Although wild type c-Cbl localized diffusely throughout the cytoplasm, c-Cbl/1-700, like c-CblΔLZ, was found to be constitutively at the plasma membrane. As expected, insulin stimulated the translocation of GLUT4-EGFP to the plasma membrane in empty vector-transfected cells. Cotransfection with either wild type c-Cbl or wild type Cbl-b did not have any effect on the insulin-stimulated translocation of GLUT4-EGFP. In contrast, co-expression of c-Cbl/1-700 resulted in marked inhibition of insulin-stimulated translocation of GLUT4-EGFP. Quantitation of these data demonstrated that the number of the cells displaying rim green fluorescence in response to insulin decreased by 64.7% with the coexpression of Cbl/1-700 in comparison with control cells (Fig. 10B). As controls, phosphorylated PKB and caveolin were stained with respective antibodies (Fig. 10C). Expression of Cbl/1-700 had no effect on the constitutive membrane localization of caveolin, nor did it attenuate the insulin-stimulated appearance of phospho-PKB at the plasma membrane, indicative of activation of the PI 3-kinase pathway.
FIG. 8. Deletion of the leucine zipper domain abolishes the interaction of Cbl with APS as well as the APS-mediated tyrosine phosphorylation of Cbl in response to insulin. A, 3T3-L1 adipocytes were cotransfected with FLAG-c-Cbl/wt or FLAG-c-CblΔLZ plus Myc-APS. After treatment with or without 100 nM insulin for 2 min, cells were lysed and immunoprecipitation (i.p.) was performed using an anti-FLAG antibody. Immunoprecipitates were analyzed by anti-phosphotyrosine (pY) and anti-FLAG immunoblotting. Cell lysates were immunoblotted with an anti-Myc antibody. B, cells were transfected as described in A. After treatment with or without 100 nM insulin for 2 min, cells were lysed and immunoprecipitation was performed using an anti-Myc antibody. Myc-APS was detected in the immunoprecipitates by immunoblotting with anti-Myc antibody. FLAG-c-Cbl was detected by anti-HA immunoblotting in both immunoprecipitates and cell lysates. Phosphorylated ERK was detected in cell lysates by immunoblotting with an anti-phospho-ERK antibody. C, 3T3-L1 adipocytes were transfected with FLAG-c-Cbl/wt or FLAG-c-CblΔLZ. Cells were fixed 24 h after transfection, and FLAG proteins were stained by indirect immunofluorescent staining.
DISCUSSION
Tyrosine phosphorylation-dependent interactions between adapter and effector proteins play important roles in tyrosine kinase signaling pathways. Upon insulin stimulation of differentiated adipocytes, c-Cbl becomes phosphorylated on tyrosines 700 and 774, producing its interaction with SH2-containing signaling molecules such as Crk (9,17). Tyrosine phosphorylation of c-Cbl by the insulin receptor is mediated by the adapter protein APS. APS is an insulin receptor substrate and targets c-Cbl to the insulin receptor upon phosphorylation on tyrosine 618 (17). Moreover, insulin induces translocation of the c-Cbl/CAP complex to lipid raft subdomains of the plasma membrane, where CAP directly binds to the hydrophobic protein flotillin (19,20). Overexpression of dominant negative mutant forms of CAP blocks both translocation of phospho-c-Cbl to lipid rafts, as well as the membrane movement of GLUT4 in response to insulin, indicating a critical role of c-Cbl in insulin-stimulated glucose transport (19,20).
Cbl-b is another mammalian Cbl gene product with multiple proline-rich motifs and tyrosine phosphorylation sites. Although less is known about Cbl-b tyrosine phosphorylation, it is clear that Cbl-b is tyrosine-phosphorylated in immune and hematopoietic cells in response to stimuli such as interleukin-7, FL, and T cell receptor/CD3 ligation (31,32). We demonstrate here that Cbl-b becomes tyrosine-phosphorylated in response to insulin in 3T3-L1 adipocytes. This transient phosphorylation of Cbl-b is independent of classical PI 3-kinase and mitogen-activated protein kinase pathways, and is also insensitive to the inhibition of Src kinase activity, indicating that the insulin receptor directly catalyzes this phosphorylation. Moreover, we also demonstrate that tyrosine-phosphorylated APS is the adapter that couples Cbl-b to the insulin receptor. As a result, Cbl-b is phosphorylated on tyrosines 665 and 709. Tyrosines 665 and 709 have been determined as CrkL binding sites (29), although the precise tyrosine phosphorylation sites of Cbl-b had not been identified prior to this study.
Although numerous studies show that c-Cbl exerts many of its functions by recruiting various proteins to physiologically relevant complexes, little is known about the biological role of Cbl-b as an adapter protein. Experiments described here reveal that Cbl-b interacts with CAP and Crk in 3T3-L1 adipocytes. Like c-Cbl, Cbl-b appears to form a stable complex with CAP through an SH3 proline-rich motif interaction. The interaction of Cbl-b with Crk is sensitive to insulin stimulation and thus is dependent on the ability of Cbl-b to undergo tyrosine phosphorylation.
In addition to its tyrosine phosphorylation, Cbl-b traffics to the plasma membrane in a manner similar to that observed with c-Cbl (19). Like c-Cbl, Cbl-b appears to transiently relocate to the plasma membrane upon insulin stimulation through its interaction with phosphorylated APS. This was based on the ability of ectopically expressed APS to drive the membrane localization of both c-Cbl and Cbl-b in response to insulin. Moreover, mutation of the TKB domain of either Cbl protein abolished this translocation. It should be noted that the amount of endogenous Cbl proteins translocated to the plasma membrane is low relative to their total cytosolic pool, possibly as a result of the limited availability of endogenous APS. For the same reason, no significant translocation of ectopically expressed Cbl proteins occurs in the absence of the co-expression of APS.
The leucine zipper domain is an α-helical structure formed by several heptad repeats of hydrophobic residues such as leucine and isoleucine. The C-terminal regions of c-Cbl and Cbl-b contain a conserved leucine zipper, which in c-Cbl was reported to mediate homodimerization (30). Our results demonstrate for the first time that Cbl-b is capable of forming a homodimer as well as heterodimers with c-Cbl. Formation of both dimers seems to require the participation of the leucine zipper domain, because deletion of this domain was sufficient to abolish both homo- and heterodimerization. Despite a high degree of sequence similarity in the leucine zipper domains, fusion proteins containing the acidic and leucine zipper domains of c-Cbl and Cbl-b prefer homodimerization. Therefore, the acidic and leucine zipper domains are sufficient for Cbl homodimerization, whereas other upstream sequences may be involved in forming the heterodimeric complexes.
Dimerization of Cbl has functional significance in insulin signaling. In the case of c-Cbl, our data show that the deletion of the leucine zipper and the resultant loss of Cbl dimerization lead to decreased insulin-stimulated Cbl-APS association and tyrosine phosphorylation of c-Cbl. In this regard, APS was also found to have the ability to undergo self-dimerization (33). Thus, it would be interesting to examine whether dimerization of APS is necessary to recruit Cbl dimers to the insulin receptor. Moreover, Cbl dimerization also appears to play an important role in determining its cellular localization. Deletion of the leucine zipper resulted in the constitutive association of c-Cbl with the plasma membrane, suggesting that dimerization is indispensable for the intracellular retention of Cbl under basal conditions. One possible mechanism would involve the interaction of Cbl dimers with other cytoplasmic proteins. Insulin might induce the release from these cytoplasmic anchors at the initial stage of membrane translocation of Cbl. Furthermore, deletion of the leucine zipper domain had no effect on the ability of Cbl to interact with CAP, indicating that dimerization, although important for Cbl tyrosine phosphorylation, may not be required for its localization to lipid rafts.
FIG. 10. Overexpression of c-Cbl/1-700 blocks insulin-stimulated GLUT4 translocation to the plasma membrane. A, 3T3-L1 adipocytes were electroporated with 100 μg of GLUT4-EGFP plus 200 μg of vector, FLAG-c-Cbl/wt, FLAG-c-Cbl/1-700, or HA-Cbl-b/wt, and allowed to recover for 30 h. The cells were treated with or without 100 nM insulin for 30 min. Cells were fixed and fluorescence visualized by confocal microscopy. GLUT4-EGFP was visualized by direct fluorescence, and Cbl proteins were visualized by indirect immunofluorescence. These are representative images of middle sections of cells obtained from three independent experiments. B, numbers of GLUT4-EGFP transfected cells displaying visually detectable plasma membrane (PM) rim fluorescence were plotted. These data were obtained by blind counting of more than 80 cells from three independent experiments. C, 3T3-L1 adipocytes were electroporated with 200 μg of vector alone or FLAG-c-Cbl/1-700, and allowed to recover for 30 h. FLAG-c-Cbl/1-700, caveolin (Cav), and phospho-Akt were visualized by indirect immunofluorescence. These are representative images of middle sections of cells obtained from three independent experiments.
In our previous studies, overexpression of dominant negative mutants of APS or CAP inhibited tyrosine phosphorylation of Cbl and translocation of GLUT4 in response to insulin. The data presented here directly demonstrate a critical role of Cbl as an upstream signaling intermediate in insulin-stimulated glucose transport. The overexpression of a c-Cbl mutant (c-Cbl/1-700) in which a C-terminal portion containing the tyrosine phosphorylation sites and the acidic and leucine zipper domains has been deleted profoundly inhibited insulin-stimulated GLUT4 translocation in 3T3-L1 adipocytes. On the other hand, the overexpression of wild type c-Cbl or Cbl-b had no such inhibitory effect. In addition, overexpression of c-Cbl/1-700 does not impair PKB activation by insulin, another major determinant of GLUT4 translocation. We propose that the apparent dominant negative effect of c-Cbl/1-700 on GLUT4 translocation is the result of the ability of c-Cbl/1-700 to interact with and thus sequester proteins involved in the regulation of insulin signaling through endogenous Cbl. One such candidate protein is CAP. c-Cbl/1-700 binds to CAP with an affinity similar to wild type c-Cbl. Sequestration of endogenous CAP by c-Cbl/1-700 would in turn block the translocation of endogenous Cbl to the flotillin-enriched lipid rafts, a process previously shown to be critical for insulin-stimulated GLUT4 translocation. | 2018-04-03T01:05:59.950Z | 2003-09-19T00:00:00.000 | {
"year": 2003,
"sha1": "7cbc7d2ead61b248c26cd5ef5434f351c7d08fc2",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/278/38/36754.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "fae56109f612318ee660af05bc570bb64e38294a",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
16580090 | pes2o/s2orc | v3-fos-license | Flaring and warping of the Milky Way disk: not only in the gas
This paper presents an investigation of the outer disk structure by using data from a recent release of the 2 micron sky survey (2MASS). These 2MASS data show unambiguously that the stellar disk thickens with increasing distance from the Sun. In one of the fields (longitude l=240) there is also strong evidence of an asymmetry associated with the Galactic warp. This flaring and warping of the stellar disk is very similar to the features observed in the HI disk. The thickening of the stellar disk explains the drop in density observed near the Galactic plane: stars located at lower Galactic latitudes are re-distributed to higher latitudes. It is no longer necessary to introduce a disk cut-off to explain the drop in density. It is also not clear how this flaring disk is distinct from the thick disk in the outer disk region. At least, for lines of sight in the direction of the outer disk, the thickening of the disk is sufficient to account for the excess in star counts attributed by some models to the thick disk.
INTRODUCTION.
The structure of the stellar disk of our Galaxy is still not very well known. The stellar population of the disk is usually decomposed into a thin disk and a thick disk component. The thin disk and thick disk populations are separated on the basis that they may have different metallicity and velocity profiles. The typical ratio of the thick disk to thin disk components in the solar neighborhood is about a few percent (Gilmore & Reid 1983). Both disks are assumed to follow a double exponential profile, associated with a radial scale length and a vertical scale height. It is unclear at the moment whether the scale lengths of these 2 disks are the same. There is still some uncertainty even for the scale length of the thin disk itself, although the more recent measurements seem to indicate that it is between 2 and 3 Kpc. For the thick disk the uncertainty is much larger; typically the scale length is in the range 1.5 to 4.5 Kpc. The scale height of the thin disk is about 0.3 Kpc, and is around 1 Kpc for the thick disk. The structure of the gas component (whether HI or CO) is better understood, and suggests some questions concerning the stellar disk. Since a warp is well identified in the gas (W. B. Burton and P. te Linkel Hekkert 1986), one may wonder if it also exists in the stellar component, with the same amplitude and phase. Another problem is the extension of the gas component, which is visible up to a distance of 25 Kpc from the Galactic center. Some observations suggest that the stellar disk ends long before, at about 14 Kpc from the center. This fact may indicate that stellar formation in the disk did not occur beyond 14 Kpc, or that the geometry of the stellar disk is not well understood. One additional source of complication is the possibility that the disk, in addition to its warp mode, presents an elliptical mode of deformation. At the moment this issue is very speculative; however this effect may exist and should be investigated.
Table 1 (fragment): (l,b)=(180,0), (Δl,Δb)=(10,100), 3.8 × 10^6 stars; L240, (l,b)=(240,0), (Δl,Δb)=(10,100), 4.7 × 10^6 stars.
Presentation
The data set consists of 3 strips perpendicular to the Galactic plane around the longitudes l=66, l=180, and l=240. All the data have been extracted from the 2MASS point source catalogue accessible on the Web (http://www.ipac.caltech.edu/2MASS/). The detailed characteristics of the fields are given in Table 1.
Canceling the effect of extinction.
By combining 2 photometric bands it is easy to derive a magnitude that is independent of reddening. For instance, using J and K, we can derive ME = K − [A_K/(A_J − A_K)] (J − K). Following Rieke and Lebofsky (1985) we take A_K/A_V = 0.112 and A_J/A_V = 0.282, leading finally to ME = K − 0.659 (J − K). This corrected magnitude will be very insensitive to reddening; furthermore, it is important to notice that the reddening in the 3 fields is low (see Fig. 1). The maximum reddening is about A_V ≃ 1.5, which is only 0.17 in the K band. Consequently, after adding the correcting term −0.659 (J − K), the residual will not exceed a few percent.
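As a rough illustration of this correction, the sketch below (Python, not code from the paper) computes ME from J and K using the Rieke & Lebofsky (1985) ratios quoted above; the example star magnitudes are invented.

```python
# A rough illustration (not code from the paper) of the reddening-free
# magnitude ME = K - [A_K / (A_J - A_K)] * (J - K), with the Rieke & Lebofsky
# (1985) ratios A_K/A_V = 0.112 and A_J/A_V = 0.282 quoted above.

def reddening_free_magnitude(J, K, aj_av=0.282, ak_av=0.112):
    """Extinction shifts K by A_K and (J - K) by (A_J - A_K), so this linear
    combination is insensitive to the amount of dust along the line of sight."""
    coeff = ak_av / (aj_av - ak_av)   # 0.112 / 0.170 ~= 0.659
    return K - coeff * (J - K)

# Example with invented magnitudes: J = 13.2, K = 12.0
print(round(reddening_free_magnitude(13.2, 12.0), 3))   # ME ~= 11.209
```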
Star counts as a function of latitude
Before making the star counts, it is essential to correct for empty areas in the catalogue (due most of the time to the obliteration of fainter stars by a very bright star). To locate these bad areas, a 2-dimensional histogram in the l and b coordinates was computed. The size of the bin was chosen in order to have a mean of 50 stars per bin. A bin was required to contain a minimum of 5 stars to be considered valid. In this way the histogram of the valid areas was constructed as a function of latitude. The step in latitude is 0.5 degree. The counts were finally normalized to a surface of 1 square degree. To avoid any problem related to the completeness of the sample, only stars brighter than ME = 12.5 have been included. The resulting histograms are presented in Fig. 2.
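A minimal sketch of this masking and counting step is given below; it assumes the star longitudes, latitudes and reddening-free magnitudes are already loaded into NumPy arrays, and the bin edges are placeholders rather than the exact values used by the author.

```python
# A minimal sketch of the masking and counting step described above, assuming
# arrays l, b (degrees) and me (reddening-free magnitude); the bin edges are
# placeholders, not the values used in the paper.
import numpy as np

def latitude_counts(l, b, me, l_edges, b_step=0.5, me_limit=12.5, min_stars=5):
    b_edges = np.arange(-50.0, 50.0 + b_step, b_step)
    # 2-D (l, b) histogram used only to flag valid cells (bad areas have < 5 stars)
    h_all, _, _ = np.histogram2d(l, b, bins=[l_edges, b_edges])
    valid = h_all >= min_stars
    # valid area (square degrees) contributing to each latitude bin
    cell_area = np.outer(np.diff(l_edges), np.diff(b_edges))   # cos(b) factor ignored
    area_per_b = (cell_area * valid).sum(axis=0)
    # counts of stars brighter than the completeness limit, in valid cells only
    keep = me < me_limit
    h_sel, _, _ = np.histogram2d(l[keep], b[keep], bins=[l_edges, b_edges])
    counts_per_b = (h_sel * valid).sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        density = np.where(area_per_b > 0, counts_per_b / area_per_b, np.nan)
    b_centers = 0.5 * (b_edges[:-1] + b_edges[1:])
    return b_centers, density   # stars per square degree vs. latitude
```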
Asymmetry between positive and negative latitudes.
It is interesting to investigate the symmetry of the distribution of the counts as a function of latitude. A significant asymmetry would reveal the presence of the Galactic warp. In Fig. 3 the difference in star counts between positive and negative latitudes is plotted. It appears immediately that the field L240 presents large deviations from symmetry near b=6. It is likely that this asymmetry is associated with the presence of a significant Galactic warp in this field. Note that possible evidence for an asymmetry in the star counts has already been presented by Carney & Seitzer (1993). For the 2 other fields the pattern of asymmetry is less clear. For l=180 we observe some systematic deviation near the 2σ level, in the range b=10 to b=30. For the field at l=66 there are a few deviating points near b=4.
3 ANALYZING THE DATA.
3.1 Deviations from the exponential profile.
To estimate numerically the star counts, we need to integrate the luminosity function of the stellar population and the density function of the disk along the line of sight. The luminosity function is assumed to be the same everywhere and is identical to the luminosity function in the solar neighborhood tabulated by Wainscoat et al. (1992). To represent the density profile of the disk, it is natural to look first at the simplest model, which is a single double exponential: ρ(R, Z) = ρ_0 exp(−R/R_h) exp(−|Z|/H_Z). It is important to note that despite its simplicity, the exponential profile seems to show an excellent consistency with the Z-profile of edge-on galaxies within a few scale lengths from the center (de Grijs et al. 1996). It is interesting to check the consistency of the data with such a simple model, and in particular to study the shape of the deviations from this model, which may indicate that other components are present. It is possible to study this model in the full range of latitude covered by our data (−50 < b < 50, 169 lines of sight). By probing different lines of sight, with increasing latitudes, we probe areas of the disk more and more distant from the Galactic plane. Thus basically, by doing this, we explore the Z profile of the Galactic disk. If the disk is really a double exponential, this Z profile should be a constant exponential, whatever the distance R from the center. Consequently, along all the lines of sight the density should be consistent with an exponential in Z with constant scale height, H_Z. In case the Z profile is not exponential we should observe a variation of the scale height as a function of the latitude. To first order in Z, a variation of the Z profile at each latitude b is equivalent to a variation of the scale height H_Z, and it is easy to reconstruct the function H_Z(b) for the 169 lines of sight. Along each line of sight the data have been decomposed into 20 bins of magnitude, allowing direct determination of H_Z(b) by maximum likelihood.
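The sketch below illustrates the reference model just described: a single double-exponential disk evaluated along a line of sight. The normalisation, the solar Galactocentric radius, the scale length and the scale height are illustrative assumptions, not values fitted in the paper.

```python
# A minimal sketch (not the authors' code) of the single double-exponential
# disk used as the reference model. rho0, R0, RH and HZ are illustrative
# assumptions, not fitted values from the paper.
import numpy as np

R0 = 8.0   # solar Galactocentric radius (kpc), assumed
RH = 2.5   # radial scale length (kpc), assumed within the quoted 2-3 kpc range
HZ = 0.3   # vertical scale height of the thin disk (kpc)

def disk_density(R, Z, rho0=1.0, rh=RH, hz=HZ):
    """Double exponential: rho(R, Z) = rho0 exp(-R/rh) exp(-|Z|/hz)."""
    return rho0 * np.exp(-R / rh) * np.exp(-np.abs(Z) / hz)

def line_of_sight_RZ(d, l_deg, b_deg, r0=R0):
    """Galactocentric radius R and height Z of a point at distance d (kpc)
    along the direction (l, b); the Sun is assumed to lie at (r0, Z = 0)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = d * np.cos(b) * np.cos(l)          # component towards the Galactic centre
    y = d * np.cos(b) * np.sin(l)
    Z = d * np.sin(b)
    R = np.hypot(r0 - x, y)
    return R, Z

# Density sampled along the l = 180, b = 10 direction out to 10 kpc
d = np.linspace(0.1, 10.0, 50)
R, Z = line_of_sight_RZ(d, 180.0, 10.0)
rho = disk_density(R, Z)
```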
It is interesting to note that the estimation of the factor H_Z(b) along each line of sight can also compensate for an imperfect estimation of the scale length R_h. This is obvious for the line of sight near l = 180, since the distance d is proportional to Z in this direction. For the line of sight near l = 240 this is still a good approximation. And even for l = 66, one can easily check numerically that the factor H_Z(b) can compensate with good accuracy for small discrepancies in the estimation of R_h. The function H_Z(b) has been estimated for the 3 fields, allowing a full reconstruction of the density profile of the disk. In order to make an easy comparison with an exponential profile, the Z profile of this density has been computed for different values of the distance d along the line of sight. The result for the 3 fields is presented in Fig. 4, 5, 6. The most obvious difference between the 3 fields is that the vertical profile does not deviate much from the straight line (pure exponential) all along the line of sight in the field at l = 66. The situation is very different in the 2 other fields, where the profile becomes broader and broader with distance, with increasingly large deviations from an exponential. One could always claim that this is a bias due to the fact that the Z profile is constant but not exponential. However, we see immediately that this is not possible, since in this case the same discrepancy should be observed in the field at l = 66. There are small deviations near the top of the profile at l = 66, which could be due to the fact that we reach a distance of about 10 Kpc, for which we already observe some flattening in the other fields. It might also be that in this field, source confusion affects the star counts in a small range of latitude (−1.5 < b < 1.5). For the 2 other fields, the stellar density is about 10 times smaller, thus we do not expect any significant effects due to source confusion.
Disk models with more parameters.
It is important to check that the thickening of the disk with distance that we observed is not related to some bias, due for instance to the fact that we did not include a thick disk. It is easy to check this issue by introducing a thick disk component in the model, and reproducing the fitting procedure and the set of vertical profiles in this case. Some recent determinations of the thick disk parameters indicate that the thick disk accounts for about 0.06 of the local density (Buser et al. 1998). The scale height of the thick disk is close to 1 Kpc, while its radial scale length is about 3 Kpc (Buser et al. 1998). This thick disk component was included in the model for the field towards the anti-center. The direction of the anti-center was chosen for reasons of simplicity: in this field the amplitude of the warp is negligible, and if the disk is elliptical, the effect of ellipticity is constant along the line of sight. The resulting set of density profiles is presented in Fig. 7. The addition of the thick disk component does not modify much the shape of the profiles. The only effect is a reduction of the density in the wings of the profiles, but it does not affect the flattening. Other thick disk models with larger density or different scale lengths do not modify the result. This can be easily understood, since due to its "thickness", the thick disk becomes really important beyond Z = 1.5, and most of the flattening effect we observe occurs at Z < 1.5. Another possibility that has been used to reproduce the star counts towards the anti-center is to use a cut-off of the disk at some distance from the center. For instance, Robin et al. (1992) found that a cut-off of the disk at 14 Kpc was necessary in order to account for the stellar density in a field close to the anti-center direction, near b=2.5. The density reconstructed with this cut-off in our model of the disk is presented in Fig. 8. This cut-off does not help to reduce the variation of the profile with distance. This fact was easily predictable, since if we try to estimate the cut-off for different lines of sight, the value of the cut-off is not constant (see Fig. 9 and 10). It shows that a cut-off of the disk is not a good solution. Thus we can conclude that the flattening of the profile with distance is not a bias due to the simplicity of our approach, but a real effect.
Figure 4. The vertical profile of the disk for the field at l = 240. The profile has been reconstructed for distances ranging from 1 to 10 Kpc along the line of sight at b = 0 (according to the limiting magnitude and the luminosity function, the sensitivity of the data beyond 10 Kpc is very limited). The corresponding distances from the Galactic center are: R = 8.5, 9.2, 9.8, 10.6, 11.4, 12.2, 13.0, 13.9, 14.7 Kpc. The deviations from the exponential profile become very large beyond 13 Kpc. The solid lines are computed by smoothing of the data points. A line passing by the maxima of the different profiles has also been represented. Note that this line is not vertical, which indicates that the distribution is not symmetrical, and most likely warped.
Figure 5. The vertical profile of the disk for the field at l = 180. The profile has been reconstructed for distances ranging from 1 to 9 Kpc along the line of sight at b = 0. The corresponding distances from the Galactic center are: R = 9, 10, 11, 12, 13, 14, 15, 16, 17 Kpc. The flattening of the profile with distance is obvious. The deviations from the exponential profile become very large beyond 13 Kpc. A line passing by the maxima of the different profiles has also been represented. Note that this line is very close to vertical, which indicates that, contrary to the field near l = 240, the warp may not be present in this field.
Figure 6. The vertical profile of the disk for the field at l = 66. The profile has been reconstructed for distances ranging from 1 to 9 Kpc along the line of sight at b = 0. The corresponding distances from the Galactic center are: R = 7.7, 7.4, 7.3, 7.35, 7.5, 7.8, 8.2, 8.7, 9.3 Kpc. It is important to note that the deviations from the exponential profile are much smaller in this field than in the 2 previous fields. This can be explained easily, since in this field, even by going 10 Kpc along the line of sight, we do not go beyond 10 Kpc from the Galactic Center.
Evolution of the scale height as a function of the distance to the center.
To estimate the scale height corresponding to the different vertical profiles, an exponential has been fitted to each profile. The result for the 3 fields is presented in Fig. 11 as a function of the distance to the Galactic center. It is clear that the 3 fields show a good consistency in the thickening of the profile. The effect seems to be somewhat larger at l=240, and is the smallest at l=66. It is very hard to know what happens when going closer to the Galactic center, since our data do not go closer than 7.5 Kpc from the center. Unfortunately the lines of sight which would allow probing the disk closer to the center are not in the areas released by the 2MASS team to date.
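For illustration, the scale-height extraction described above can be sketched as a simple exponential fit to a vertical profile; the profile values below are synthetic, not the 2MASS measurements.

```python
# A minimal sketch of the fit described above: an exponential exp(-|Z|/H_Z) is
# fitted to each reconstructed vertical profile to extract a scale height.
# The profile values are synthetic, not data from the paper.
import numpy as np
from scipy.optimize import curve_fit

def vertical_exponential(Z, rho0, hz):
    return rho0 * np.exp(-np.abs(Z) / hz)

Z = np.linspace(0.0, 2.0, 21)                            # height above the plane (kpc)
profile = np.exp(-Z / 0.45) * (1.0 + 0.03 * np.random.randn(Z.size))

(rho0_fit, hz_fit), _ = curve_fit(vertical_exponential, Z, profile, p0=(1.0, 0.3))
print(f"fitted scale height: {hz_fit:.2f} kpc")          # close to the input 0.45
```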
Full maximum likelihood fit.
The former analysis, conducted along each line of sight, has been particularly useful to present the data and demonstrate the existence of a basic feature: the thickening and flattening of the Z profile with distance. This effect has been well established; however it is also important to derive a simple analytical full maximum likelihood solution for the whole data set. For this full maximum likelihood solution all the lines of sight in the range −50 < b < 50 are fitted at the same time. Note that, as we did previously, we fit by maximum likelihood the decomposition into bins of magnitude (20 bins) of the distribution along the different lines of sight, and not only the total counts. The effect of disk thickening will be investigated in the 2 fields towards the outer disk, at l=180 and l=240. First, it is interesting to illustrate the fitting of different types of models to the data. To illustrate this issue we will present the results of the fitting of 4 different models for the field at l=180. For all the models the luminosity function of Wainscoat et al. (1992) has been used. The scale height for each population has also been taken from Wainscoat et al. (1992).
Figure 7. The profile has been reconstructed for distances ranging from 1 to 9 Kpc along the line of sight at b = 0. The corresponding distances from the Galactic center are: R = 9, 10, 11, 12, 13, 14, 15, 16, 17 Kpc. Note that the addition of the thick disk component has some influence in the range Z = 1.5 to 2.5. However it does not change the shape of the profiles for Z < 1.5, and the basic features associated with the flattening of the profile with distance are unchanged.
Figure 11. The variations of the scale height of the Z profile as a function of the distance to the Galactic center. The crosses represent the field at l=66, the stars are for the field at l=240, and the triangles for the field at l=180.
5-) Disk with variable scale height (Fig. 15, l=180; Fig. 18, l=240): the scale height H_Z of the single exponential disk is replaced by a scale height Z_H(R) varying linearly with Galactocentric radius.
6-) Disk with variable scale height and flattening (Fig. 17, l=180; Fig. 19, l=240): with the same definition of Z_H(R) as for the previous model, plus a "saturation" parameter β which flattens the Z profile.
The best maximum likelihood fits for these 6 models are presented in Fig. 12, 13, 14, 15, 16, 17. One can note immediately that disk models with constant scale height do not give any acceptable fit of the data (Fig. 12 and 13). Even by including a thick disk component, it is not possible to reproduce the star counts (Fig. 13). By exploring different scale heights, scale lengths and local ratios for the thick disk, one cannot reproduce the star counts properly. The situation is not improved if, instead of an exponential profile, one adopts an isothermal profile for the Z distribution (see Fig. 14, 15). However, as one can see in Fig. 16, just by adding a linear variation of the scale height with distance from the Sun to the single exponential disk model, one can reproduce the star counts with much better accuracy. Note that this disk model with variable scale height shows this good agreement with the data with even fewer free parameters than the model with a thick disk component, which despite more parameters is not able to reproduce the star counts. An even better agreement is possible if one introduces some flattening of the disk profile by using a "saturation" parameter β (see model (6) for the definition of β, and Fig. 16 for the fit). Finally, models (5) and (6) are fitted to the data in the field at l=240. As one can check in Fig. 18 and 19, these simple models reproduce the data with quite good accuracy. Note that the thickening of the disk calculated using model (5) is the same for the 2 fields: Z_H = (1 + 0.32 (R − R_0)) × H_Z. It is also interesting to see that the flattening of the profile is larger at l=240, something that is also visible in our previous analysis. However, something more troublesome is the fact that the scale length of the disk differs in the 2 fields. Some bias can be introduced by the presence of a spiral arm towards the anti-center; however the contrast in density of a spiral arm in the old population is small and unlikely to produce such discrepancies. This difference in scale lengths is more likely due to the fact that, even if our model reproduces the star counts well, it is too simple to account for the complexity of the situation. The thickening of the disk is a flaring process and is likely to be non-linear, and the shapes of the Z profiles are probably more complex than our simple analytical forms. To conclude with the full maximum likelihood fit, one can say that the results are in very good agreement with the simple analysis along each line of sight that was conducted before. The basic features, the thickening and flattening of the outer disk region, are confirmed.
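A minimal sketch of model (5) as quoted above, with the scale height growing linearly with Galactocentric radius; the normalisation, scale length and local scale height are assumed values used only for illustration.

```python
# A minimal sketch of model (5): the same double-exponential disk, but with the
# scale height increasing linearly with Galactocentric radius,
# Z_H(R) = H_Z * (1 + 0.32 (R - R0)), using the best-fit slope quoted in the text.
# rho0, RH and HZ are assumed values; the law is meant for the outer disk (R >= R0).
import numpy as np

R0, RH, HZ = 8.0, 2.5, 0.3   # kpc; illustrative assumptions

def flaring_scale_height(R, hz=HZ, slope=0.32, r0=R0):
    return hz * (1.0 + slope * (R - r0))

def flaring_disk_density(R, Z, rho0=1.0, rh=RH):
    zh = flaring_scale_height(R)
    return rho0 * np.exp(-R / rh) * np.exp(-np.abs(Z) / zh)

# With this law the scale height roughly doubles about 3 kpc beyond the Sun
print(flaring_scale_height(np.array([8.0, 11.1, 14.0])))   # [0.3, ~0.6, ~0.88]
```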
DISCUSSION.
Our first analysis, which used a first-order reconstruction of the Z profile along each line of sight, has been very useful to present the data and describe the basic effect. Once the thickening of the disk with distance had been established, a full maximum likelihood fit of the data was performed. The full maximum likelihood analysis confirms the result obtained previously, and shows that simple models, with linear variations of the scale height and flattening, are in good agreement with the star counts. The maximum likelihood analysis confirms that the warping of the disk is present in the field at l=240. As for the thickness of the disk, a linear variation starting from the position of the Sun has been used to describe the warp. The maximum likelihood analysis shows also that a significant amount of warping is present in the field near l=180; however the amplitude of the effect is much smaller than in the field at l=240 (about 6 times smaller). In this analysis, a large number of models has been tested: models with a thick disk, isothermal disk models, disk models with a cut-off... None of them is in agreement with the data set, except the models including a thickening of the disk with increasing distance from the Sun. We arrive at the obvious conclusion that the thickening of the disk is the only possible explanation. Note that we did not take into account metallicity effects, but since metallicity is expected to decrease towards the outer disk, and stars become brighter with decreasing metallicity, the effect goes in the other direction. Since what we observe is a drop in the star counts, metallicity cannot account for it, and the thickening of the disk gives an accurate description of the effect. It is interesting to note that for other galaxies (edge-on spirals) some evidence of thickening of the disk vertical profile in the outer regions is also observed by de Grijs et al. (1996) (see Fig. 2 in de Grijs 1996). As a whole we see that a self-consistent picture of the stellar disk emerges from this work: the stellar component is warped and flared, just like the HI component. The direction of the Z offset due to the warp is the same as in the HI maps presented by Burton & te Linkel Hekkert (1986). The amplitude of the effect is consistent with the HI to an accuracy of about 30%. One must not forget that our model of the warp is very simple, in the range of distance investigated. Note that the warp effect was treated in the linear approximation; it is possible that, as is observed for the gas, the warp effect is not linear; however, this linear approximation is sufficient to reproduce the data. Concerning the thickening of the disk, it is interesting to make a comparison to the flaring of the HI disk. This comparison is presented in Fig. 20; the scale height of the HI is taken from Wouterloot et al. (1990). It is clear that the amplitude of the thickening of the stellar disk is comparable to what is observed in HI. This thickening of the disk offers a natural explanation to the so-called set of different "disk cut-offs" observed for the Milky Way (Robin et al. 1992, Ruphy et al. 1996, Freudenreich 1998): due to the thickening, stars close to the plane are raised above the plane. This thickening gives the impression of a drop in the star counts at low latitudes, and fitting a cut-off to such a density will give inconsistent results for different latitudes (as illustrated in Fig. 10). This effect may also confuse the separation between the thick disk and disk components. The separation between these 2 components may prove very difficult.
At least this work shows that clear evidence of the thick disk should rather be searched for towards the inner regions of the Galaxy. Obviously the modeling of the thickening and of the warping is somewhat degenerate, and it is clear that non-linear thickening laws would also fit the data. However, in such a case it is better to keep the simplest model that can fit the data. A more accurate determination of the outer disk would require overcoming the degeneracy due to the unknown distances of the sources. Such a result could be easily achieved by using contact binaries. Contact binaries are good distance indicators, and are found in large numbers (about 1% of the main sequence stars are contact binaries). These stars are ideal tools to probe the structure of the outer disk, and such an experiment should be conducted in the near future. | 2014-10-01T00:00:00.000Z | 2000-07-02T00:00:00.000 | {
"year": 2000,
"sha1": "508f5cb6d21a879321bd6fc0dad825c39646a4bc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0d5d3742058fabe22cb2e1eb4b52672a7e1b08d0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244838667 | pes2o/s2orc | v3-fos-license | Statistical Analysis and Development of Accident Prediction Model of Road Safety Conditions in Hisar City
The severity of road accidents is a big problem around the world, particularly in developing countries. Recognizing the major contributing variables can help reduce the severity of traffic accidents. This research uncovered new information as well as the most substantial target-specific factors related to the severity of road accidents. T-statistics, P-values, significance levels and other test values are determined to check the dependency of the dependent variable on the independent variables, in order to obtain the most significant road accident variables. In this research, accident data from Hisar and Haryana are compared. According to the findings, Haryana's accident severity index (46.20) was higher in 2019 than Hisar's (36.01), while Hisar had fewer accidents per lakh population (33.34) than Haryana (38.40). The outcomes of the study were used to develop an effective and precise accident prediction model for Hisar city and the state of Haryana using a statistical method. Four models were created using linear regression analysis, two each for Hisar and Haryana. These models produce good results with a margin of error that is within acceptable bounds (0-5%), allowing them to be used to predict future traffic accidents and deaths.
INTRODUCTION
The transportation system is a vital component of a country's economic growth. It has an impact on the rate, consistency and pattern of growth. National Highways' capacity to handle traffic must keep up with economic expansion. With a total road network length of 62.16 lakh km, road safety continues to be an issue of worry because of its magnitude, severity and indirect impact on the national economy, human health and local economies. It is a serious nationwide issue. Road traffic injuries now account for a significant portion of all deaths and hospitalizations worldwide, with significant socioeconomic implications. Deaths and injuries caused by traffic accidents result in huge financial losses as well as severe emotional suffering. Based on road accident data provided by law enforcement agencies across the country, India reported 449,002 accidents in 2019, in which 151,113 people were killed [4][5][6].
Hisar city in the state of Haryana has mixed traffic flow, and the city is witnessing an increase in the number of accidents with the growth in economy, as the city houses a large number of industries. In 1990, road accidents ranked in ninth place out of over a hundred separately identified causes of death and disability. It is forecasted that by the year 2020, road accidents will move up to sixth place. Besides housing Jindal Stainless (Hisar) Limited (JSHL), Hisar also has the 2nd largest cropped area (645,000 hectares) in the state of Haryana [7], which itself reflects the high use of agricultural vehicles such as tractors, field cultivators, combine harvesters and many others, including animal-drawn vehicles like bullock carts. As the city is gifted with historical monuments and places, combined with industrialization and agricultural activities, Hisar attracts a large traffic volume, due to which road accidents are increasing with the economic development of the city. Hisar has a total metalled road length of 2,380 km, the highest in the state of Haryana, accounting for 8.8% of the total road length of the state. Hisar was also the second-largest city in terms of population after Faridabad.
The study will help in developing a proper model and predictions of road accidents in the selected area. The traffic pattern of the city can be upgraded to prevent road accidents and provide good traffic movement. In Hisar city, the rate of 33.34 accidents per lakh population can be reduced and the overall accident situation improved on the basis of the present study. The study can have a great impact on society by reducing accidents and the death ratio in the selected city [8][9][10].
LITERATURE REVIEW
Smeed analysed statistics from 20 countries in 1938 on road casualties, motor vehicles and population. The research found that ten of the death values predicted by this technique were within 15% of their true values, 19 were within 40%, and one value was in error by up to 67% [11][12][13].
Later work criticised this analysis for its lack of model reliability, claiming that the Smeed formula could not be applied to all countries. The inability of Smeed's model to forecast deaths in many industrialised and developing nations prompted authors to refine it by including other variables [14][15][16].
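For reference, the sketch below implements the classic form of Smeed's relation; it is shown only to make the kind of model being discussed concrete, and the vehicle and population figures in the example are hypothetical.

```python
# A sketch of the classic Smeed (1949) relation, D = 0.0003 * (N * P**2) ** (1/3),
# where D is annual road deaths, N the number of registered motor vehicles and
# P the population. It is not a formula fitted in this paper.

def smeed_deaths(vehicles, population):
    return 0.0003 * (vehicles * population ** 2) ** (1.0 / 3.0)

# Hypothetical figures: 10 million vehicles, 50 million people
print(round(smeed_deaths(10_000_000, 50_000_000)))   # ~8772 predicted deaths per year
```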
Traffic crash models have also been developed for India's major megacities. The main finding of these studies was that the real data fit better with a curve that increases initially and then decreases [17,18]; see table 1 and figure 1.
ACCIDENT SEVERITY INDEX (ASI)
The accident severity index is defined as the number of persons killed per 100 accidents.
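The index can be written as a one-line formula; in the sketch below the accident and death counts are hypothetical, chosen only so the resulting values resemble the 2019 figures quoted for Haryana and Hisar.

```python
# A one-line implementation of the definition above: persons killed per 100
# accidents. The counts are hypothetical, not the official Haryana/Hisar data.

def accident_severity_index(persons_killed, total_accidents):
    return 100.0 * persons_killed / total_accidents

print(round(accident_severity_index(462, 1000), 2))   # 46.2  (Haryana-like value)
print(round(accident_severity_index(360, 1000), 2))   # 36.0  (Hisar-like value)
```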
MODEL ANALYSIS
The term "linear regression analysis" refers to any approach for constructing a model and assessing variables with the goal of focusing on the relationship between the dependent and independent variables. Hypothesis (non-zero correlation): there is a strong correlation between the dependent and independent variable. The total number of accidents is the 'Y' variable, whereas the total population of Haryana is the 'X' variable. While developing the model, the confidence level is set at 95%, and both the significance-F (0.009594 for M1 and 0.00054 for M2) and the P-value are less than 0.05, indicating that the model is good and matches the hypothesis (non-zero correlation). The high R² values (0.70 for M1 and 0.88 for M2) of both models show their accuracy; see tables 2 and 3.
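A minimal sketch of this kind of single-variable linear regression is given below; the (population, accident) pairs are hypothetical placeholders, not the Haryana data behind models M1 and M2.

```python
# A minimal sketch of a single-variable linear regression with scipy.stats.linregress.
# The (population, accident) pairs are hypothetical, for illustration only.
import numpy as np
from scipy.stats import linregress

population = np.array([25.4, 25.9, 26.4, 26.9, 27.4, 27.9])        # millions (hypothetical)
accidents = np.array([10100, 10400, 10800, 11000, 11300, 11600])   # per year (hypothetical)

fit = linregress(population, accidents)
print(f"slope = {fit.slope:.1f}, intercept = {fit.intercept:.1f}")
print(f"R^2 = {fit.rvalue ** 2:.3f}, p-value = {fit.pvalue:.5f}")

# Prediction for a future year with an assumed population of 28.5 million
print(round(fit.intercept + fit.slope * 28.5))
```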
Figure-7: Validation of models for state Haryana
From Figure-7 we can conclude that the models are highly accurate and the error is in the range of 0-5% only, which indicates that the models can be used for prediction of accidents and deaths in Haryana.
Figure-8: Validation graph for Hisar city
From Figure-8 we can conclude that the models are highly accurate and the error is in the range of 0-5% only, which indicates that the models can be used for prediction of accidents and deaths in Hisar.
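The validation step amounts to a percentage-error check; a sketch with hypothetical observed and predicted values is shown below.

```python
# A sketch of the validation check: percentage error between observed and
# model-predicted values, required by the paper to stay within 0-5%.
# The observed/predicted pairs are hypothetical.
import numpy as np

observed = np.array([10800, 11100, 11500])
predicted = np.array([10650, 11300, 11900])

percent_error = 100.0 * np.abs(predicted - observed) / observed
print(np.round(percent_error, 2))            # [1.39 1.8  3.48]
print(bool(np.all(percent_error <= 5.0)))    # True -> within the accepted bound
```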
Conclusion, Limitation and Future Scope
The present study has come up with the following findings: 1. In 2019, the accident severity index of Haryana (46.20) was higher than that of Hisar (36.01).
Limitation of the Study
The present study is totally based upon the data available on websites and from other government/private sector offices. There may be a chance of missing complete information on minor accidents as well as less important road accidents. Sometimes various types of accidents are not counted by the government/private sectors, but they exist and have significant effects on the overall accident behavior. The accident model and death model discussed in the study are totally based on available data, but there may be a chance of difference from actual conditions at the site of the study area of the paper.
Future Scope of the Study
The data analysed in the present study are only for Hisar city and Haryana state. The statistics of the paper can be used for different districts of Haryana as well as other states. The comparison can be done for two or more cities within India for better analysis and results. The analysis used in the present study on this stretch can be done on different stretches and in different states, where maximum data is available for accident prediction models. Accident prediction for different cities and roads can be improved by adding more areas of cities and states under study.
[4] Selveindran S, Samarutilake G D N, Rao K M N, Pattisapu J V, Hill C, Kolias A G, Pathi R, Hutchinson P J A and Vijaya Sekhar M V 2021 An exploratory qualitative study of the prevention of road traffic collisions and neurotrauma in India: perspectives from key informants in an Indian industrial city (Visakhapatnam) BMC Public Health 21
[5] Coronell G, Arellana J and Cantillo V 2021 Location of speed control cameras on highways: A geospatial analysis Transport 36 199-212
[6] Jun Y, Park J and Yeom C 2021 The evaluation of experimental variables for sustainable virtual road safety audits Sustain. 13
[7] Lapegna M and Stranieri S 2022 DClu: A Direction-Based Clustering Algorithm for VANETs | 2021-12-03T20:09:13.744Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "874951d79936ffe69fb381dda14cb042ac301ff2",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/889/1/012034",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "874951d79936ffe69fb381dda14cb042ac301ff2",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
3737139 | pes2o/s2orc | v3-fos-license | The meanings assigned to disablement based on military ‘ habitus ’
This is a qualitative study based on in-depth interviews, with the aim of analyzing the meanings associated with a social phenomenon, specifically disabilities in the military field. A total of 22 people were interviewed, 3 managers and 19 Brazilian Navy professionals responsible for coordinating, standardizing, enforcing and overseeing the Special Care Program (Programa de Assistência Especial) in Rio de Janeiro. Data was processed using the Interpretation of Meaning Approach, based on the concepts of disablement and military habitus. The results show interpretations that completely deny the possibility of disabled persons in the military, considering such an idea to be insane. Others welcome the idea, albeit limited to administration, logistics and support functions. We find that the greatest hurdle for the involvement of people with disabilities in the Brazilian Navy is not their bodies, but the stigma associated with disablement. Their bodies become the main defining and deprecatory element of these subjects.
Introduction
Disablement experts [1][2][3][4] agree that how people interpret disablement is an important element in explaining how this diversity is managed. Incorporating people with disabilities as a social practice is reflected in the differentiated treatment embodied in leading policies and programs for people with disabilities.
In 2011, the World Health Organization estimated that over a billion people around the world live with some sort of disability, 785 million of whom are economically active 5. According to the IBGE, Brazil's Institute for Geography and Statistics, out of a population of 190,755,799 in 2010, 45,606,048 had at least one of the disabilities investigated. This is 23.9% of the population 6. This is a significant number of people, requiring that public policies be prepared to address the challenges of this situation.
When incorporated into social practice, the meanings associated with a social phenomenon such as disablement impact the cultural system and the social structure, and thus the otherness of the disability, whose objective expression is found in the rights, laws and public policies created for this segment.
Hence the importance of understanding disablement in its broader conception and as part of group health, or as a milestone of the social determination of the health-disease process, taking as a reference the social contradictions expressed in the conditions of life, especially when it comes to jobs for people with disabilities.
This article analyses the meanings assigned to disablement based on the military habitus of the professionals and managers of the Brazilian Navy (BN) involved in standardizing, coordinating, enforcing and overseeing the PAE (Special Care Program) in Rio de Janeiro.
Disablement and military habitus are the key concepts that guide this effort. The former is anchored in the social model of disablement, understood as a manifestation of human differences. This understanding enables separating disablement from injury, a bodily datum with no value in itself, and places this phenomenon as the result of the interaction between an injured body and a society that discriminates. In this model, disablement is a sociological and political phenomenon [7][8][9][10].
The concept of military habitus is based on the contributions of Bourdieu 11 and of other authors in military sociology and anthropology 12,13. Looking at the military institution from the point of view of Bourdieu 11 implies considering it the locus for building a symbolic system.
Men and women joining a military institution become the heirs of a symbolic set of institutional identifiers, comprised of practices and discourses, expressed in ceremonies, rituals and the day-to-day of the institution. The institution must have mechanisms that enable this process of legacy assimilation. This mechanism is ensured via a process of socialization imposed on everyone who joins; this social construct creates and shapes the military identity.
This construct, named by Castro 12 as the "military spirit", consists of a process of professional socialization that happens when subjects acquire dispositions, perceived as evident, that lead them to behave in a certain manner, with no need to explicitly remember the rules to follow; in other words, when the military habitus is incorporated. According to Janowitz 13, to become an armed forces professional, soldiers must cease being individuals and become beings whose identity is determined by the institution. All of a soldier's learning is focused on creating this new person. Thus insertion into the barracks means, for those seeking a career in the armed forces, embracing a set of values, visions and principles that will result in their acquiring the military habitus.
This study is based on the assumption that the construction of a meaning for disablement on the part of the managers and professionals involved in the PAE is influenced by the military habitus, creating a culture that normalizes differences and a social construct of the body required of a military professional. Finally, it is expected that the military body reference and anchor the social identity of the group or, in other words, a disciplined body 14.
Understanding disablement and the associated stigma 15 in military institutions may suggest something is out of place, as the meeting of disability and the military stresses the military habitus as it questions the pillars upon which it is supported.
It is believed that cross-referencing disablement and military habitus will offer valuable contributions to understanding how the military habitus influences the treatment of people with disabilities in this area, be they military personnel, their dependents or civilians. This analysis will help to understand the barriers that exist in the military field to participation by this segment of society, a factor that creates discrimination and oppression for those with disabilities. It also contributes to the field of group health, as it problematizes aspects present in the reality of military professionals with disabilities that interfere in the work-health relationship, with the social character of disablement vs. the military habitus at the core of this analysis, and the need to frame it through its articulation with the process of social production and reproduction.
This study is aligned with the concept that proposes to break with interpretations of disablement that reduce it to bodily impediments, including an analysis of the social, cultural and political issues associated with the phenomenon.
Method
This is a qualitative study based on in-depth interviews. The aim is to analyze the meanings associated with a social phenomenon, specifically disabilities in the military field.
Institutional context of the study
PAE is one of the programs in the BN social services policy. It is focused on the dependents of navy personnel (civilian and military) with disabilities who are above the age of five. The program covers the entire country. In Rio de Janeiro, execution is the responsibility of the Naval Social Services.
The Naval Social Services Department is a military organization responsible for standardizing, coordinating and managing the navy's social services policy, and for managing the funds set aside for social programs. This department is also responsible for overseeing the activities of the Naval Social Services, a military organization that is under its umbrella, and is responsible for executing and overseeing the PAE in RJ.
Access to the PAE requires an assessment by GAAPE (Group for the Assessment and Follow-up of Special Patients), which will decide on the types of therapy, their frequency and the best institutions to provide them. Annual assessments performed by GAAPE are essential for remaining in the PAE and for expanding treatment.
Boundaries of the study field
The empirical study field is made up of DASM, SASM and GAAPE. In selecting this field we used the following criteria: a) sectors of the BN responsible for managing the PAE in RJ; b) sectors of the BN that standardize, coordinate, execute and oversee the PAE in RJ; c) sectors of the BN responsible for the acceptance, permanence and discharge of program users in RJ. For insertion into the field, DASM sent a message (an official communication vehicle in the military field) to SASM and GAAPE, asking for authorization to perform the study.
Selecting study subjects
The study subjects were DASM, SASM and GAAPE managers and professionals. Subjects were selected to create a representative sample of the phenomenon under study. Inclusion criteria were the following: a) subjects responsible for managing the PAE in RJ; b) subjects who standardize, coordinate, enforce and oversee the PAE in RJ; c) subjects who determine who may join, remain in and exit the Program in RJ.
Data gathering tool
The main source of data was a semi-structured interview. The goal of the interviews was to identify, based on the responses obtained, the meanings assigned to disablement. Interviews were based on guiding scripts to ensure greater flexibility and freedom of discourse, while at the same time ensuring that all of the essential issues for the study were addressed. The interviews had four groups of questions. The first one was designed to describe the participants, and the second focused on identifying their involvement in the PAE (GAAPE). The third attempted to detect how they interpret disablement, and the fourth focused on relating these interpretations to the military habitus. Three DASM staff members were interviewed (a manager and two other professionals), four SASM members (one manager and three other professionals), and 13 (thirteen) GAAPE staff (one manager and twelve professionals). We also interviewed two retired professionals who had been present when the BN PAE was first created, bringing the total number of interviews to 22 (twenty-two), resulting in about 15 hours of taped conversation. Interviews were recorded, transcribed, digitized and reviewed to enable analysis. Transcripts were encoded to preserve the anonymity of study subjects. Thus the following codes were used for the interviews: E (interviewee) and G (manager), followed by a sequential number referring to the particular interview. Interviews were applied to the following departments: DASM, SASM and GAAPE.
Data analysis
Interviews were analyzed using the Interpretation of Meaning Method of Gomes et al 16. In a first step, the interview material was read in order to capture its content and get an overview of its specificities. This reading allowed us to put together an analytical structure used to rank and distribute the units comprising the material. The structure was assembled anchored on the concepts of disablement and military habitus.
In the next step, based on the analytical structure created, we performed the following steps: a) identification of the explicit and implicit ideas about disablement; b) search for the broader (sociocultural) meanings of disablement; c) a dialog between the ideas posed, the information from other studies on disablement, and the theoretical reference of the study.
The third step was an interpretative synthesis, attempting to articulate the study objective, the theoretical basis and the empirical data.
This study is part of a Ph.D. dissertation 17. The project that originated this study was submitted to and approved by the BN Research Ethics Committee. This procedure complies with the guidelines in National Board of Health Resolution 466/12 18, governing research involving human beings.
Results and discussion
The results of the relationship between military habitus and the interpretation of the phenomenon of disablement are based on the perceptions built around the presence of people with disabilities in the BN as military professionals. It is important to point out the existence of understandings that totally deny the possibility that people with disabilities may pursue a naval career, actually considering it madness: "Are you crazy? A person with disabilities in the Navy? ... it clashes with our values." (G1). "It would be insane! Far too advanced for us. I don't think that tradition would allow it." (E4).
An analysis of the statements enables identifying a static vision of the military institution, linked to its professional phase. Military professionalization enabled creating a group of individuals technically and organizationally trained to manage armed violence and legitimately and directly involved in its preparation and application 19. However, it is known that the values that currently guide military institutions were historically linked to questions of power, which attempted to transform individuals into cogs in their wheels 20.
Underlying the statement below is a reaction of uncertainty regarding the fate of the institution itself with the possible inclusion of people with disabilities in the armed forces: "Putting a person with special needs here is like buying a new TV or a used one. [...] It is this uncertainty regarding how things will work that makes you not have confidence." (G3).
Disablement is understood as a difference. People with disabilities are iconic of differences. At the same time, they resist the established order and normalcy and attack the established standards, in an anarchical type of existence 21. Thus difference confronts the military habitus, which attempts to impose full standardization of the agents in this field.
The image of disablement in military institutions
The perceptions surrounding the idea that a military profession would be incompatible with people with disabilities have been justified by the argument that this type of profession has specific requirements that differ from all other professions: "I don't see how a person with special needs could become a military professional [...] Joining the navy, because of its career requirements, imposes a limitation on this public. I can't see a person with disabilities in such an environment." (E19).
According to the reports, the specificities that such professionals must present are linked to the very meaning of having armed forces: national defense, which in extreme situations can only be ensured through combat. "What happens is that everything in military life [...] is linked to one activity: combat." (G1).
The nature of combat is one of the main characteristics of the military profession: the possibility of doing one's duty of defending the nation, possibly requiring the sacrifice of one's very life. This is at the core of the understanding that leads military institutions and their agents to view themselves as different from civilians. According to Ferreira 22, nowhere will we find civilian organizations whose members are required to die to defend their homeland. The idea of Homeland and the moral obligation to sacrifice oneself to defend it make military personnel feel different from civilians. This characteristic of military professionals is essential to understanding the development and reproduction of the military habitus, and provides an understanding of how those socialized in this field incorporate the concepts of courage and willingness for combat. This transforms the abominable situation of the objective conditions that may condemn them to death into something virtuous. Ideologically, this is raised to a matter of honor, one of the guiding principles of the armed forces.
Combat thus has an ambivalent nature. On the one hand, a military person can get the most out of what his/her body has to offer on the battlefield, possibly becoming a hero. On the other hand, it is also in the theater of operations that the largest number of casualties occur: death, injury and mutilation.
Estimates indicate that among German soldiers surviving WWI, one and a half million returned with severe disabilities, including 80,000 who had had an upper and/or lower limb amputated 23. WWII left about 28 million mutilated persons, including both civilians and military personnel 24. The latter, by being discharged from the armed forces, experienced a double process of grief: the loss of a certain way of being in a body that had changed, and having to leave their profession, that which comprised their identity. The hero defending his/her country became defenseless and unnecessary.
A military career and the standards of body and health
According to Bourdieu 25, habitus in any field gives rise to different types of bodily expression, as the willingness and readiness incorporated mold the body based on material and spiritual conditions, translating into a way of being. Along this line of thinking, the military body, the body hexis, is a strong element of the relationship between military personnel and the rest of the world, and military personnel are willing to mold their body to favor this relationship. Military staff learn with their bodies: how to walk, speak, dress, wear their hair and speak to others.
Their body is a vehicle that expresses the social order they are part of, distinguishing them from civilians. Physical (and behavioral) attributes distinguish them and make military personnel recognizable even when not in uniform, when they are not bearing the most visible marker of the corporation outside the military field. Often the military habitus will condition military personnel to make certain gestures, or to move in ways they are not conscious of and that escape their very control.
In the perceptions of these professionals we see the concept that the body should translate military identity: a body from which one gets the most efficiency. This idealized body, capable of defending the homeland, contrasts with the perception that something is missing, associated with the bodies of those with disabilities. "We expect that at any time a military person will take up arms and fight to the death. [...] this would be difficult for a person with disabilities." (G3).
This ideal body, with attributes considered essential for the performance of military duties, must be useful and subject to the institution it serves.However, this is not what we see in the day-to-day of the Brazilian Navy, as it includes bodies that do not conform to this ideal and that break with the reference use to identify the military condition.In theory, [...] we are prepared to face anything.For this reason our bodies must be ready.But in actual fact, this is not what we see.
[...] there are military staff who are unable to perform tasks that demand much of their bodies, we have military personnel with quite severe disabilities.(E13).
This situation points to inconsistencies in the military habitus, which offer the opportunity to reinterpret the values and meanings assigned to the body in this field. This shows the possibility of transforming the habitus, to the extent that it is not the end-point of a journey, but a system in constant transformation 25 .
Another issue identified in this position is the notion that disablement is the opposite of health, showing an understanding of disability anchored in the medical model of disablement, which associates the phenomenon with disease, as suggested in the following statement: A person with disability is a person who requires care. No matter what the disability is, he or she will always require [...] healthcare. (E2).
It is worth pointing out, however, some of the issues present in this perception that offer new ways to interpret the relationship between the armed forces and disabilities. One is the increasing use of technologically sophisticated resources in war, which changes how war is waged. These changes point to the need to acquire new competences for the profession, and the need to staff the military from a more diverse range of social segments, including people with disabilities. I believe this may be a trend going forward. With the technology we have, why can't a mobility-impaired person operate a [...] drone? (E5).
Another issue worth mentioning is the set of requirements for joining the military, which are impossible for a person with a disability, from the simplest to the most complex, to meet, and which are unrelated to the details of the mission and the responsibilities of each role. (E17).
The military habitus is present in this institutional posture through the willingness to standardize field agents, which reflects the association of disablement with stigma and difference. When you add a person to the navy, you add people who can all do the same things. [...] A [...] person lacking a leg, a finger, an arm or an ear, [...] I'm here, but everyone feels sorry for me [...]. We know these people will never meet the standards required to be a member of the armed forces. (G3).
Underlying all of these statements is the belief that bodily characteristics are what result in excluding people with disabilities from the armed forces. This perception ignores the social structure of this organization, which is unwilling to acknowledge the different ways in which a disability can be experienced. Disablement is seen as an individual and not a social problem, which runs counter to the social model adopted by Brazil when it ratified the Convention on the Rights of People with Disabilities 26 .
The belief that the armed forces should be willing to keep people with disabilities acquired during their military career is more prevalent among managers. I am a military professional, [...] if I become disabled and this disability does not limit my performance in that role in any significant way, then [...] I believe I should remain in the Navy. (G1). A person who enters the Naval Academy or an [...] apprentice sailor, corporal or sergeant who at some point loses a finger or a leg in a workplace accident [...]. This person can still perform some of the bureaucratic tasks [...]. The navy must structure itself so that these people remain. (G3).
These statements show there is an ideal body for military activities, based on the military habitus, which has been excluding agents already included and molded in this field. This institutional practice runs counter to the working potential of such subjects.
Maintaining a certain standard of body and health is a constant concern within the armed forces. Thus disease and disability are assessed against strict standards that are constantly updated. A military person with a disease or with some bodily attribute viewed negatively against this reference is removed from the service. In other words, if a military person acquires any disease or disability considered to be incompatible with the exercise of their profession, they are immediately retired ex officio (by virtue of one's position).
The association between disablement and stigma provides the elements required to understand this part of the military habitus. According to Goffman 15 , there is an ideological construct around this stigma, which is used to explain the inferiority of those against whom the stigma exists. This is reflected in the perceptions regarding the permanence of military personnel with disabilities in the armed forces. In this way, using the contributions of the author, one may defend the argument that the greatest hurdle to military personnel with disabilities remaining in the armed forces is not the disability itself, but the fact that this becomes the main defining element of this person.
Regulating differences in the military habitus
The perceptions around including people with disabilities in the armed forces, primarily limited to administration, support and logistics, deny the possibility of difference for such individuals. To be accepted, these people must become equal to any of the field agents. So long as their disability does not keep them from fulfilling all of the requirements in the rules, I see no problem employing them in the administrative area. (G3).
The underlying understanding is that disablement is something to be overcome in order to include people with disabilities in military careers. However, this does not take into account the architectural, attitudinal, institutional and organizational barriers that may favor or limit this performance. This perspective is therefore an attempt to negate and reject the disability.
This position is seen in the notion of integrating people with disabilities in the armed forces rather than including them, which contradicts the ideas in the Convention on the Rights of People with Disabilities. Integration is based on the concept of normalizing those with disabilities. This perspective does not question the structures and social attitudes that produce the inequalities experienced by differentiated bodies, as defended for social inclusion 27,28 .
Thus, even though this outlook admits the presence of those with disabilities in the armed forces, the military habitus operates to shape the perceptions of these others by normalizing the differences, which is viewed as essential for stability in that field.
Final Considerations
The knowledge from group health contributed to this study, enabling a critical dialog and the identification of the contradictions that directly interfere in the life of the military professional with disabilities.
The results show there is a relative degree of consistency between the perceptions of managers and professionals on disablement and their perceptions on the exclusion of those with disabilities from the naval profession. The arguments in this study for excluding those with disabilities evoke the Brazilian Navy's values and traditions. However, for the most part they are anchored in the military condition: the rights, duties and situations to which military staff are submitted, in light of the nature of the armed forces' mission, national defense.
The military habitus appears in the conservative statements, expressed in the invocation of a professional tradition and profile that legitimize an organization and a working process specific to the current naval military institution.
The perceptions built around a body idealized for war also reveal the conflict between the military habitus and the meanings associated with disablement. Thus the social construct of the body conceived to "fight and die if necessary in defense of the homeland" conflicts with the perception of lack or absence that managers and professionals associate with a disabled body.
This bodily construct, structured and embedded in the military habitus, also offers elements that explain the understanding of the criteria for entering the military, which constitute a hurdle for those with disabilities, be they simple or severe. One may deduce, then, that the military habitus is present in this institutional posture and in its willingness to standardize and rank agents in this field. This is aligned with the perceptions of disability linked to stigma and difference.
The perceptions that admit the possibility of the disabled in the military ranks also bring to light the tension between military habitus and disablement. The background to this issue is the perception of disablement as a difference that must be eliminated if these individuals are to be integrated into a military career. To an extent, this negates and rejects disablement.
These perceptions also associate disablement with a challenge and a failing that must be overcome to insert those with disabilities in the military. These constructs show the invisibility of social, architectural, attitudinal and institutional barriers, which either favor or hinder the inclusion of people with disabilities. Once again, the military habitus is present in these associations, to the extent that they structure standardizing conducts.
This study also shows that while the military habitus is a concept viewed as a system designed in the past and focused on the present, it is a system that is continuously being reformulated. This aspect may be seen in the perceptions that indicate the existence of people in the armed forces whose bodies differ from the image of the warrior, breaching the paradigm associated with the military condition and being employed in non-combat roles. This shows that it is not only those with bodies fit for combat who meet the institutional needs. These perceptions announce the possibility of reinterpreting the values and meanings assigned to the body in the military field, which may impact the military habitus and hence how these agents perceive and handle bodily differences. This reinterpretation is also found in other studies on healthcare from the point of view of reintegrating military personnel in different contexts 29,30 .
In light of the ideas of Goffman 15 , it is fair to say that the meaning of disablement is social and under construction, and thus the link between stigma and disability is not fixed. There is always the possibility of change in the perception of stigma during the course of one's life, especially following closer contact, such as personal experience or having a work colleague who experiences stigma.
This study returns to the debate on diversity, considering it a matter of group health. Disablement is thus conceived based on the social determination of health, understood as the "process resulting from historical and structural determinants that mold social life in the different social formations" 31 . In recognizing the different forms of social reproduction present in the various social-historical contexts, this concept also associates the different potentials of wear and strength, which present themselves in the reality of the military professional, understood here as the processes that mediate between work/life, health/disease and military habitus/disability. This analytical outlook is essential to problematize the creation of paths that make it possible to overcome the tension existing between military habitus and disablement, which facilitates or hinders access of people with disabilities to the armed forces, thus contributing to the recognition of human rights.
Collaborations
NX Moreira helped design the study, review the literature, gather and analyze data, and draft and critically review the text. LF Cavalcanti helped design the study, plan the research and critically review the text. RO Souza worked on the review and final wording of the text.
[...] When you turn on the used TV, [...] you have no idea what behavior you will see on the TV [.. | 2018-04-03T03:18:21.434Z | 2016-10-01T00:00:00.000 | {
"year": 2016,
"sha1": "a15d0a27a08ff3c8ccc8147a59008d5368c52da3",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/csc/v21n10/en_1413-8123-csc-21-10-3027.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "aad0593a20917e58182ff461aa7c15478e5314d6",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine",
"Sociology"
]
} |
119036352 | pes2o/s2orc | v3-fos-license | Moriond QCD 2017: theoretical summary
I summarize the highlights of the theory talks at the 52nd Rencontres de Moriond: QCD and high energy interactions.
Figure 1 - Higgs production and decay signal strengths in different channels versus the Standard Model prediction (µ_i = 1). The figure is taken from arXiv:1606.02266.
Testing the Higgs couplings requires good control over the SM predictions and thus precision calculations.
The theory talks have covered the following topics:
• Higgs self-coupling
• Higgs effective (EFT) couplings
• beyond the SM Higgses
• CP properties in multi-Higgs models
• dark matter in multi-Higgs models
• Higgs as the only fundamental scalar in Nature

Measuring the Higgs self-coupling is the next big challenge at the LHC. Even if the other Higgs couplings are close to their SM predictions, the self-coupling can deviate by O(100%). This could, for instance, be due to the Higgs mixing with another (SM singlet) scalar or radiative effects of the hidden sector. The most straightforward channel sensitive to the Higgs triple coupling is di-Higgs production in gluon fusion, which proceeds through the hhh-vertex. The background for this process includes a box diagram with the top quark in the loop. Jones reported on incorporating the full m_t-dependence of such diagrams at NLO QCD, which improves significantly over the Higgs EFT (m_t → ∞) limit and leads to about a 14% correction to Higgs pair production.
New physics in the Higgs sector may also manifest itself through dimension-6 effective operators, which are obtained by integrating out heavy new-physics states. Such operators generally affect the Higgs production cross section. When multiple operators are present simultaneously, they can conspire to give rise to the same production rate as that in the SM. Ilnicka discussed their effects on the Higgs p_T distribution, which provides an additional handle on the size of these operators and breaks the above degeneracy. Extensions of the Higgs sector were reviewed by Carena, with the focus on 2 Higgs doublet models (2HDMs) and composite Higgs models. Due to the apparent SM-like nature of the 125 GeV Higgs h, these models are forced into special limits. In particular, viable 2HDMs are of the "alignment" type: the SM limit for the h-couplings is achieved either (a) through decoupling of the heavy states or (b) due to special relations among the scalar potential couplings ("alignment without decoupling"). The latter case is particularly interesting since it allows for relatively light additional scalar and pseudoscalar states, although the theoretical motivation for the above coupling relations is still to be understood. In the realm of composite Higgs models, the critical parameter controlling the deviations of the Higgs couplings from their SM values is ξ = v²/f², where f is the scale of strong dynamics. To keep such deviations at the ∼10% level, f is bounded from below by roughly 700 GeV, depending on the specifics of the model. New composite states are expected at the scale gf, where g is the corresponding coupling constant.

One of the interesting applications of multi-Higgs models concerns the dark matter problem. In 2HDMs, one may impose a Z_2 symmetry under which only the second Higgs doublet transforms, H_I → −H_I. This forbids its coupling to fermions and makes it stable. One finds that all the dark matter constraints can be satisfied by the "inert Higgs doublet" H_I, although the direct DM detection constraints push its mass to the 500 GeV region. Sokolowska discussed an extension of this idea whereby one adds a further inert doublet. This relieves some of the constraints as long as the particle spectrum is somewhat compressed and allows for efficient co-annihilation of dark matter. As a result, new mass windows for lighter DM open up, e.g. in the regions of 400 GeV and m_h/2.
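As a rough numerical cross-check of the quoted bound (our own back-of-the-envelope estimate, not part of the talk summary), taking v ≈ 246 GeV and identifying the coupling deviations with ξ = v²/f²:

```latex
\xi \;=\; \frac{v^{2}}{f^{2}} \;\approx\; \left(\frac{246~\mathrm{GeV}}{700~\mathrm{GeV}}\right)^{2} \;\approx\; 0.12 ,
```

so requiring deviations at the ∼10% level indeed translates into f roughly above 700 GeV, in line with the number quoted above.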
Subtleties of CP transformations in multi-Higgs models were discussed by Ivanov. In general, one may accompany the usual CP transformation by a unitary transformation U that mixes the Higgs doublets, φ_i → U_ij φ_j^*. This is particularly relevant when the resulting transformation is a symmetry of the potential, while the usual CP, φ_i → φ_i^*, is not. Such a generalized CP does not necessarily square to one since U U^* may not be unity. In this case, some of the (complex) scalars can be CP-half-odd instead of being odd or even, that is, they pick up the phase i under the generalized CP transformation, which leads to peculiar phenomenology.
An alternative to the common SM extensions was advocated by Shaposhnikov. The main cosmological shortcomings of the Standard Model can be addressed by adding 3 right-handed neutrinos, which would play the role of dark matter and be responsible for baryogenesis, and by including a non-minimal Higgs-gravity coupling of the form ξ H†H R, where R is the scalar curvature. In this case, the Higgs can play the role of the inflaton since at large (Planckian) field values the scalar potential becomes flat, allowing for slow-roll inflation. This minimalistic idea fits well with the cosmological observations and may also explain the lack of new physics signatures at the LHC and other experiments. One concern to be aware of is some tension between absolute stability of the Higgs potential and the current top-quark data, although it does not appear very significant at the moment.
Flavor Physics
Even though the CKM picture appears to be in excellent agreement with the data overall ( Fig.2), new physics may still affect a subset of observables. Theoretically, there is no preference for the scale of fundamental flavor dynamics: it may be at the TeV scale, in which case it must be very special, or just as well at the Planck scale as is the case in string theory. Given that the flavor structures we see in the SM are special, one should keep an open mind and be prepared for surprises. Having said that, it is important to appreciate the sheer volume of B-physics data: the B + meson already has close to 500 decay modes, such that statistical fluctuations would not be unexpected.
The following topics were discussed:
• B-physics anomalies
• progress in lattice calculations
• heavy flavor production at the LHC

Flavor physics theory talks were dominated by the discussion of the current B-physics anomalies. Neubert reviewed their current status focusing on the R_{D^{(*)}} and R_K observables, ratios of branching fractions that compare decays into τ versus light leptons l = e, µ, and into muons versus electrons, respectively. These quantities are clean and sensitive probes of lepton universality in B-decays. Both of them currently exhibit deviations from the SM predictions: R_{D^{(*)}} at the 3.5σ level and R_K at the 2.6σ level. While the R_K anomaly is driven by the LHCb data, R_{D^{(*)}} shows deviations of varying significance in all three experiments: BaBar, Belle and LHCb. This is particularly surprising since this observable is dominated by the tree level decay in the SM and is therefore rather clean. In the R_K case, the theoretical uncertainties are particularly small since both µ and e are effectively massless, while more statistics is needed to see if the current tendency persists.ᵃ
ᵃ The latest LHCb measurements of R_{K*} also show a similar-size deviation from the SM prediction.
There are further observables that show some tension with the SM predictions, in particular P_5', which is sensitive to the angular distributions in B → K* µµ. However, the theoretical status of this anomaly is not as firm: hadronic uncertainties due to, for instance, a breakdown of quark-hadron duality plague the analysis. A possible resolution of the above anomalies in terms of new physics was offered by Virto. Once the operator with the Wilson coefficient C_{9µ}^{NP} ∼ −1 is included, the quality of the fit to the data improves dramatically. Clearly, this muon-specific interaction violates lepton universality and is mostly relevant to R_K. The size of the new physics contribution is quite large: in the SM the corresponding Wilson coefficient takes on the value 4.1. It is therefore a highly non-trivial task to design a UV complete model satisfying all the flavor as well as collider, etc. constraints.
The case for new physics in b → s l⁺l⁻ transitions was strengthened by Neshatpour. He argued that introducing hadronic power corrections and fitting their coefficients to the data, including P_5', does not result in a better-quality fit compared to that with just the new physics contribution C_{9µ}^{NP} ∼ −1. As a result, this statistical view supports the new physics interpretation (modulo the above-mentioned issues).
Another area where possible hints of new physics effects are seen is kaon decays. As discussed by Lunghi, advanced lattice calculations allow one to analyze the K → ππ matrix elements in detail. In particular, the long-standing puzzle known as the "∆I = 1/2 rule" has been explained by a subtle accidental cancellation in the ∆I = 3/2 amplitude. The resulting ε′/ε exhibits some tension with the SM prediction; the discrepancy is at the ∼2σ level, so no conclusive statement can yet be made.
A new strategy to determine the CP-violating phase φ_s of B_s–B̄_s mixing was suggested by Vos. The Standard Model prediction for this phase is close to zero, φ_s ≈ −0.04, which makes it a convenient place to look for new physics effects. The new approach is based on the following ingredients: (a) minimal use of U-spin symmetry (d ↔ s interchange); (b) γ and φ_d input; (c) input of non-factorizable effects through semi-leptonic branching ratios. The resulting accuracy can reach an impressive sub-degree level in the future.
A powerful framework to describe hadronic decays of the B-mesons was discussed by Lu. Within the factorization-assisted topological amplitude approach, one fits 14 non-perturbative parameters using the existing data which leads to testable predictions for more than 100 channels. d'Enterria discussed NNLO predictions for total bottom and charm production at the LHC. The corresponding NLO predictions have very large scale uncertainties, 30 to 60%, which are naturally expected to be reduced at the NNLO level. Adapting the available NNLO calculations for tt production (Top++) to the bb and cc final states, one finds a substantial NNLO/NLO K-factor of about 1.2 and the anticipated reduction of the scale uncertainties. However, only the bb cross section appears to be well behaved for all PDFs with the scale uncertainty of about 15%. The charm production cross section suffers from large uncertainties and particular sensitivity to the low-x gluon PDF which makes the current description unsatisfactory.
Precision Calculations
Precision calculations allow us to both test the Standard Model and extract its fundamental parameters. Among the latter, the top quark mass and the strong coupling are of particular importance. Indeed, electroweak vacuum stability depends sensitively on these parameters, especially on m t (Fig.3). One could say it is literally a "matter of life and death".
The topics discussed at the conference include the determination of the strong coupling and of the top quark mass, as well as precision predictions for LHC processes and parton distributions. The strong coupling affects most LHC observables and as such is a crucial SM parameter. Its current uncertainty at M_Z is just below 1%, which leaves room for improvement. Klijnsma reported on an α_s determination from tt̄ production in gluon fusion. This process at NNLO allows one to extract α_s with 4% precision, which is complementary to other measurements. Other options were discussed by Pires (single jet production) and Niehues (DIS). In particular, Pires showed that the NNLO calculations for single jet inclusive production at the LHC reduce the scale uncertainty to the percent level. Such impressive accuracy is comparable to that of the experimental measurements and will facilitate an independent α_s determination. Niehues discussed NNLO dijet production in DIS, which is also sensitive to α_s as well as to the gluon PDF. There, the situation is not as transparent due to NNLO normalization issues for the total cross section, although the scale uncertainty decreases compared to the NLO result. This problem can presumably be traced back to the use of the NLO PDFs.
An ingenious approach to the top quark mass measurement at the LHC was suggested by Kim. The di-photon production in gluon fusion receives loop contributions from all charged colored particles. In particular, at the toponium threshold the di-photon spectrum exhibits a kink whose exact shape and position are determined with resummation techniques. In principle, this may allow for m t determination within 0.5 GeV.
Zhu reported on single top production at the LHC. This process provides one with a direct measurement of V tb among other things. Both production and decay computations are performed at the NNLO level, with fully differential rates. Surprisingly, the NNLO results often lie outside the NLO scale variation bands. Further improvement is possible by going beyond the structure function approximation.
The importance of interference effects in vector boson pair production at the LHC was emphasized by Röntsch. At NLO, these account for a 5% reduction of the differential rates, which is characteristic of the Higgs unitarizing behavior. The scale uncertainty at NLO decreases to the 10% level.
Kallweit and Wiesemann discussed state-of-the-art LHC di-boson production at NNLO QCD. The calculations are performed using an automated framework "Matrix". The resulting NNLO corrections are found to be significant, e.g. the WW inclusive cross section receives a correction of 10-14% compared to the NLO result, while the scale uncertainty decreases to an impressive 3% level.
One of the crucial ingredients in precision calculations is the parton distribution functions (PDFs). Progress in PDF determination was reviewed by Nadolsky. He emphasized the importance of a coordinated effort of theorists and experimentalists to reduce the relevant uncertainties. For example, as a result of dedicated benchmarking, the uncertainty in the gluon-gluon luminosity and thus the Higgs production was reduced from 7% in 2012 to 3% in 2015. Further progress is expected from the new generation of NNLO PDFs NNPDF3.1, CT17 and MMHT'16, which are due to be released shortly.
Bertone discussed the xFitter project which provides a unique open-source framework to extract PDFs and fundamental parameters from a large variety of experimental data. For example, xFitter allows for a competitive charm quark mass determination from inclusive and charm production in DIS (HERA I+II). Another important recent application is the determination of the photon PDF from the ATLAS 8 TeV Drell-Yan data.
Salgado reported on the update of the nuclear PDFs EPPS16. Compared to its previous version EPS09, the new set includes the dijet and W,Z LHC data as well as neutrino results. It also supersedes EPS09 in that the new analysis is more realistic allowing for more freedom in parametrization, which tends to increase the PDF uncertainties.
New Phenomena
New physics at the TeV scale has traditionally been motivated by the hierarchy problem. The quantum corrections in the SM are large and tend to drive the Higgs mass to Planckian values. However, such apparent instability may be deceiving and under the surface certain stabilizing mechanisms could be at work. For instance, one may imagine that the stones in Fig.4 are kept together by a metal rod. In particle physics, the role of such a stabilizer would be played by supersymmetry (or technicolor, etc.). On the other hand, one could also argue that the system is simply fine-tuned and no new mechanism is needed to explain the existence of this configuration: after all, it does not violate any laws of physics. Given the current data, it would be premature to dismiss either of these options, even though the first possibility seems much more appealing.
The discussion in the "New Phenomena" session was focused around supersymmetry and composite models (apart from the talks included in the Higgs section).
Deandrea reviewed the hierarchy problem focusing on the composite Higgs solution. If the Higgs boson is a composite state, no fundamental scalar appears at low energies and thus the quantum corrections are well under control. This entails a tower of other composite states à la ρ-meson to be accessible at (future?) colliders. Furthermore, the Higgs interactions involve energy-dependent form factors which can also be probed.
Status of supersymmetric models was reviewed by Porod. The Higgs mass of 125 GeV requires a large radiative correction from the top quark superpartners, which implies either | 2019-04-22T13:06:06.996Z | 2017-04-28T00:00:00.000 | {
"year": 2017,
"sha1": "5978d048d47f94651f6d6d035d97896e4add474f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "192884ff04792f7fb2b1098f188d182c7caafd66",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
15818167 | pes2o/s2orc | v3-fos-license | Reconstructing binary matrices under window constraints from their row and column sums
The present paper deals with the discrete inverse problem of reconstructing binary matrices from their row and column sums under additional constraints on the number and pattern of entries in specified minors. While the classical consistency and reconstruction problems for two directions in discrete tomography can be solved in polynomial time, it turns out that these window constraints cause various unexpected complexity jumps back and forth from polynomial-time solvability to $\mathbb{N}\mathbb{P}$-hardness.
Introduction
The problem of reconstructing binary matrices from their row and column sums is a classical inverse task in combinatorics; see [17]. Even though the term was introduced much later, it can be seen as a root of discrete tomography; see [19,16,11,12,1], and the collected editions [14], [15]. Of particular relevance for the present paper are two well-known observations: (1) The question of consistency of the data, i.e., the question whether there exists a matrix whose row and column sums coincide with the given data, can be solved in polynomial time. (2) Typically, the row and column sums do not determine the underlying matrix uniquely. See [17,8,2,13] for characterizations of the 'rare' cases of uniqueness.
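As a concrete illustration of Observation (1), consistency can be tested with the classical Gale–Ryser condition. The following minimal Python sketch (the function name and formulation are ours, not taken from the paper) checks whether a 0/1 matrix with the prescribed row and column sums exists:

```python
def row_column_sums_consistent(row_sums, col_sums):
    """Gale-Ryser test: True iff some 0/1 matrix has these row and column sums."""
    r = sorted(row_sums, reverse=True)  # row sums in non-increasing order
    if sum(r) != sum(col_sums):
        return False
    for k in range(1, len(r) + 1):
        # the k largest row sums must not exceed sum_j min(c_j, k)
        if sum(r[:k]) > sum(min(c, k) for c in col_sums):
            return False
    return True

# feasible data: e.g. [[1,1,0],[1,0,0],[0,0,1]] realizes these sums
assert row_column_sums_consistent([2, 1, 1], [2, 1, 1])
# inconsistent data (total sums differ)
assert not row_column_sums_consistent([2, 2], [1, 1, 1])
```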
In the present paper we address the second issue by adding additional window constraints, specifying (or giving bounds on) how many points there are in certain minors of the matrix. These constraints come up naturally in dynamic discrete tomography [3]. An application of particular relevance in physics is that of particle tracking; see [5], [20]. Here, the positions of particles over time are to be reconstructed from two (sometimes more) high speed camera images. Window constraints are a natural way of modeling additional physical information for instance on the speed of the particles; see [3] and the background literature given there for further information.
Here we are taking a complexity-theoretical view commemorating Observation (1). In fact, when adding various kinds of window constraints, we are interested in the boundary between polynomial-time solvability and NP-hardness. Intuitively speaking, we ask which kinds of additional constraints can be added without imposing a significant extra computational effort. Or, phrased differently, what is the computational price to pay for reducing the number of solutions by utilizing additional window information. We will focus on the effect of three different parameters: k corresponds to the size of the window, ν is the number of 1's in the nonzero minors, and t specifies the allowed positions of 1's in the windows, referred to as the pattern of the window. The choice of these parameters specifies the given problem Rec(k, ν, t), which is formally introduced in Section 2. Hence k, ν, and t are given beforehand (i.e., are not part of the input). As it will turn out, the problem exhibits various unexpected complexity jumps.
Omitting technical details (which are all given in Section 2) some of these jumps can be summarized as follows; see Table 1.
For k = 1 the problems are in P regardless of how the other parameters are set; see Theorem 1(i). For k ≥ 2 it depends on ν and t whether the problems are in P or NP-hard; see Theorems 1(ii),(iii) and 2. For k ≥ 2, some values of ν render the problems tractable while others make them NP-hard, even if the patterns are not restricted at all; see Theorems 1(ii) and 2(ii). Adding a pattern constraint may turn an otherwise NP-hard problem into a polynomial-time solvable problem; see Theorems 2(ii) and 1(iii). The reverse complexity jump, however, can also be observed; see Theorems 1(ii) and 2(i).
The present paper is organized as follows: Section 2 will introduce our notation and state the main results. The proofs of our tractability results are given in Section 3 while the NP-hardness results are proved in Section 4. Section 5 contains some final remarks. In particular, it provides a (potentially also quite interesting) extension of our main problem and collects related implications of our main results.
Notation and Main Results
We begin with some standard notation.
Let Z, N, and N 0 denote the set of integers, natural numbers, and non-negative integers, respectively.
The support of a vector x := (ξ_1, . . . , ξ_d)^T ∈ Z^d is defined as supp(x) := {i : ξ_i ≠ 0}. With 𝟙 we denote the all-ones vector of the corresponding dimension. The cardinality of a finite set F ⊆ Z^d is denoted by |F|. We will use the notational convention that specific settings of variables and parameters are signified by a superscript *.
In this paper we often refer to points (or variables) ξ_{p,q} where (p, q) are points of the integer grid Z^2. Then, p and q denote the x- and y-coordinates of (p, q), respectively. In the grid [m] × [n], the sets {i} × [n] and [m] × {j} are called column i and row j, respectively. For k ∈ N we refer to {(i − 1)k + 1, . . . , ik} × [n] and [m] × {(j − 1)k + 1, . . . , jk} as vertical strip i (of width k) and horizontal strip j (of width k), respectively. Of course, the standard matrix notation can be obtained from our notation by a suitable coordinate transformation. For (i, j) ∈ C(m, n, k), the corresponding k × k set B_k(i, j) is called a block; see Figure 1 for an illustration. (Here we depict the structure both as point set and as pixels. In the following we restrict the figures to pixel images, which we find more intuitive.) The blocks form a partition of [m] × [n], i.e., ⋃_{(i,j)∈C(m,n,k)} B_k(i, j) = [m] × [n]. In the main part of this paper, we consider such non-overlapping blocks, which also play a role in super-resolution imaging [4]. In Section 5 we consider also other windows, which may be positioned at other places than those defined by C(m, n, k). Next we introduce three patterns that we will study in detail since they already exhibit the general complexity jump behavior we are particularly interested in.
The first pattern P(k, 0) is unconstrained, i.e., it does not pose any additional restrictions on the positions of 1's. The second pattern P(k, 1) forces all elements in the k × k block to be 0 except possibly for the two entries in the lower-left and upper-right corner. The third pattern P(k, 2) excludes all fillings with more than one 1 in a row of the k × k block (see also Figure 2). Here are the formal definitions.
In other words, we ask for 0/1-solutions that satisfy given row and column sums, block constraints of the form $\sum_{(p,q)\in B_k(i,j)} \xi_{p,q} \le v(i,j)$ with given v(i, j) ∈ {0, ν}, and pattern constraints that restrict the potential locations of the 1's in each block. We remark that the v(i, j), (i, j) ∈ C(m, n, k), can be viewed as some part of prior knowledge. In particle tracking [20,5], for instance, the v(i, j) may reflect prior knowledge about physically meaningful particle trajectories; see also [3].
Our main results show that the computational complexity of Rec(k, ν, t) may change drastically when k, ν, or t is varied.
The most notable changes are summarized in Table 1. Some of these changes may at first glance seem somewhat counterintuitive. For instance, restricting the solution space via pattern constraints turns the NP-hard problem Rec(k, 2, 0) (k ≥ 2), into the polynomial time solvable problem Rec(k, 2, 2); Figure 2 depicts the possible types of blocks in Rec(k, 2, 2). Conversely, additional pattern constraints convert the tractable problem Rec(k, 1, 0) into the NP-hard problem Rec(k, 1, 1) (k ≥ 2).
Tractability results: Polynomial-time solvability
This section contains the proofs of our tractability results stated in Theorem 1.
Clearly, a polynomial-time algorithm for Rec(1, ν, t) can be given along the classical lines for deciding whether given row and column sums of a binary matrix are consistent. In the following we give one of the various arguments.
Proof of Theorem 1(i). The problem obviously reduces to that of reconstructing a 0/1-matrix from given row and column sums with some of the matrix entries fixed to 0. It is well known that the coefficient matrix is totally unimodular; see, e.g., [12,2], [18, Sect. 19.3]. The problem can thus be solved efficiently, e.g., by linear programming.
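To make the reduction tangible, here is a minimal sketch of one polynomial-time realization, using a bipartite max-flow formulation instead of a general LP solver; entries fixed to 0 are handled simply by omitting the corresponding edges. The formulation is standard, but the function and variable names are our own, not from the paper:

```python
import networkx as nx

def reconstruct_with_forbidden_cells(row_sums, col_sums, fixed_zero=()):
    """Return a 0/1 matrix with the given row/column sums whose entries at the
    cells in fixed_zero are 0, or None if no such matrix exists."""
    m, n = len(row_sums), len(col_sums)
    if sum(row_sums) != sum(col_sums):
        return None
    forbidden = set(fixed_zero)
    G = nx.DiGraph()
    for i, r in enumerate(row_sums):
        G.add_edge("s", ("row", i), capacity=r)
    for j, c in enumerate(col_sums):
        G.add_edge(("col", j), "t", capacity=c)
    for i in range(m):
        for j in range(n):
            if (i, j) not in forbidden:
                G.add_edge(("row", i), ("col", j), capacity=1)
    value, flow = nx.maximum_flow(G, "s", "t")
    if value != sum(row_sums):
        return None  # the prescribed sums cannot be met
    # integer capacities guarantee an integral (here 0/1) flow on the cell edges
    return [[flow[("row", i)].get(("col", j), 0) for j in range(n)]
            for i in range(m)]
```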
Next we turn to the proof of Theorem 1(ii). For (i, j) ∈ I ⊆ C(m, n, k), we count the number of blocks B_k(a, b) with (a, b) ∈ I, a ≤ i and b ≤ j, that lie in the same vertical and horizontal strip, respectively, as the block B_k(i, j). Moreover, for I ⊆ C(m, n, k) we define the corresponding set G(I); here Π_x(I) and Π_y(I) denote the projection of I onto the first and second coordinate, respectively.
We use a (slightly generalized version of a) result of [4] on the following problem.
DR(1)
Instance: m, n ∈ kN, I ⊆ C(m, n, k) (a set of corner points). Task: Find a corresponding solution, or decide that no such solution exists.
As a service to the reader we remark that a solution ξ*_{p,q}, (p, q) ∈ G(I), for a given instance of DR(1) is obtained by an explicit assignment for every (i, j) ∈ I and (p, q) ∈ B_k(i, j). An illustration is given in Figure 3 (taken from [4]).

Proof of Theorem 1(ii). Let an instance of Rec(k, 1, 0) be given, with the data m, n, the row and column sums, and the block bounds v(1, 1), . . . , v(m − k + 1, n − k + 1). We proceed in two steps.
First, we solve a reconstruction-from-row-and-column-sums instance: find η_{i,j} ∈ {0, 1}, (i, j) ∈ C(m, n, k), satisfying the constraints (3.2). If no solution exists, we report infeasibility of the instance. The idea behind solving (3.2) first is that in this way we locate the boxes that we want to contain a 1.
In a second step, if (3.2) is feasible, we invoke DR(1) with the set I defined in (3.3). If there is no solution, we report infeasibility of the given instance; otherwise, we return the solution of DR(1). It remains to be shown that this solves the instance correctly.
To this end, suppose that the given instance has a solution ξ*, let η* denote a solution to (3.2), and let I be defined as in (3.3). For every (i, j) ∈ I the corresponding block then clearly satisfies the required constraints. The integer linear program (3.2) can be solved in polynomial time as the coefficient matrix is well known to be totally unimodular; see, e.g., [12,2], [18, Sect. 19.3]. The set I in (3.3) is determined in polynomial time, and DR(1) ∈ P by Proposition 1(ii). Hence, Rec(k, 1, 0) ∈ P.
Next we turn to the proof of Theorem 1(iii). It relies on the following result.

Lemma 1. Let A = (a_1, . . . , a_{m_1})^T ∈ {0, 1}^{m_1 × n_1} be the node-edge incidence matrix of a (simple) bipartite graph, and let H = (h_1, . . . , h_{m_2})^T denote a binary m_2 × n_1 matrix with properties (i) and (ii) below. Then, the matrix obtained by stacking A on top of H is totally unimodular.
Proof. It suffices to prove the assertion under the additional assumption that |supp(h_l)| ≥ 2 for all l ∈ [m_2]. (3.4) All vectors h_l whose support is a singleton can be added later, since appending any subset of rows of the m_1 × n_1 identity matrix to a totally unimodular matrix yields again a totally unimodular matrix.
As the matrix A is the node-edge incidence matrix of a bipartite graph, let (R_1, R_2) denote a corresponding partition of (the indices of) the vertices of the graph, i.e., a partition of [m_1]. Clearly, the sets (R_1 ∩ I) and (R_2 ∩ I) form a partition of I. Also, it follows from (i), (ii), (3.4), (3.5), and the fact that the graph is simple that the sets introduced in (3.6) and (3.7) constitute a partition of L. By (i), (ii), (3.5), (3.6) and (3.7), the collection I and L of rows of A and H, respectively, is thus split into two parts such that the sum of the rows in one part minus the sum of the rows in the other part is a vector with entries in {0, ±1}. Since I and L were arbitrary, the Ghouila–Houri criterion yields the asserted total unimodularity.
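For the reader's convenience, the (row form of the) Ghouila–Houri characterization invoked here is the standard one; we restate it in our own wording:

```latex
M \in \mathbb{Z}^{p \times q}\ \text{is totally unimodular}
\;\Longleftrightarrow\;
\text{for every } R \subseteq \{1,\dots,p\} \text{ there is a partition } R = R_1 \,\dot\cup\, R_2
\text{ such that } \sum_{i \in R_1} M_{i,\cdot} \;-\; \sum_{i \in R_2} M_{i,\cdot} \;\in\; \{0,\pm 1\}^{q}.
```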
Proof of Theorem 1(iii). We claim that the problem can be formulated as that of finding an integer solution to {x : M x ≤ z, x ≥ 0} with a totally unimodular matrix M and integral right-hand side z, showing that the problem is solvable in polynomial time; see [12], [18,Thm. 16.2].
Assembling the variables ξ_{i,j}, (i, j) ∈ [m] × [n], into an mn-dimensional vector x and rephrasing the row and column sum constraints in matrix form Ax = (r^T, c^T)^T, where r contains the r_j's and c contains the c_i's, we see that A is the node-edge incidence matrix of a bipartite graph, and hence totally unimodular.
As the pattern constraints ensure that no row of a block contains two 1's, and since v(i, j) = 0 or v(i, j) ≥ k, we can (equivalently) replace the block and pattern constraints by box constraints on suitable boxes W(i, j + l), each bounding the number of 1's in one row of a block. We can rephrase the box constraints in matrix form Hx ≤ v, with a binary matrix H and a vector v containing the respective right-hand sides.
Since each box W(i, j + l) is contained in a row of [m] × [n], the condition in Lemma 1(i) is satisfied. Also, any two of these boxes are disjoint since the blocks are disjoint. Hence the condition in Lemma 1(ii) is satisfied. Therefore, by Lemma 1, the matrix obtained by stacking A on top of H is also totally unimodular. Appending −A and the identity matrix E yields again a totally unimodular matrix, hence the matrix M consisting of the rows of A, H, −A, and E is totally unimodular. Since the corresponding right-hand side vector z = (r^T, c^T, v^T, −r^T, −c^T, 𝟙^T)^T is integral, the instance can be solved by linear programming.
Intractability results: NP-hardness
This section contains the proof of Theorem 2. We begin with the NP-hardness of Rec(k, 1, 1) for k ≥ 2.
Proof of Theorem 2(i). It suffices to show the result for k = 2 since the NP-hardness for larger k can be inferred from that for k = 2 by setting the row and column sums r j+l and c i+l to zero for every (i, j) ∈ C(m, n, k) and l ∈ {3, . . . , k − 1}; see [4].
So, let k = 2. We use a transformation from the following problem 3-color tomography, which is NP-hard by [7]. (Note that the third color is just "blank.")
3-color tomography
Instance: m, n ∈ N, and prescribed row and column sums for each color. Task: Find a corresponding solution, or decide that no such solution exists.
The block and pattern constraints ensure that there are only three possibilities for setting the 1's in each block. They are shown in Figure 4. Note that the types can be viewed as representing three colors, which are counted by different row and column sums. The row and column sums with odd indices count type 1 blocks, the other sums count type 2 blocks. Setting the row and column sums of the constructed instance accordingly, and using the correspondence between the three colors and the three block types (the blank color corresponding to type 3), we conclude that the given instance of 3-color tomography admits a solution if, and only if, the constructed instance of Rec(2, 1, 1) admits a solution.
Next we turn to the NP-hardness of Rec(k, 2, 0) for k ≥ 2. Again it suffices to prove the result for k = 2.
Proof of Theorem 2(ii). The general structure of the proof follows that of the NP-hardness result under data uncertainty given in [4]. However, some adaptations and additions are required as will be detailed below.
As in [4] we give a transformation from the NP-hard problem 1-In-3-SAT [6], which asks for a satisfying truth assignment that sets exactly one literal true in each clause of a given Boolean formula in conjunctive normal form where all clauses contain three literals (involving three different variables).
For a given instance of 1-In-3-SAT a "circuit board" is constructed that contains an initializer, several connectors, and clause chips. The general structure of the circuit board from the proof of [4] and its modification, which contains additional connectors and initializers, are outlined in Figure 5.
Figure 5. The general layout of (a) the original circuit board and (b) the modified circuit board, here for two clauses.
The initializers contain, for every variable τ t , t ∈ [T ], so-called τ t -chips, while the connectors contain ¬τ t -chips. The clause chips are more complex as they consist of two collectors, two verifiers, and a transmitter. The initializer holds a truth assignment for the variables τ 1 , . . . , τ T of the given instance I of 1-In-3-SAT. The Boolean values True and False are encoded in τ t -chips, t ∈ [T ], by the type 1 and 2 blocks, respectively, that are shown in Figure 6.
Figure 6. Boolean values in τ_t-chips.
The proof in [4] shows how the truth assignment is transmitted through the circuit board, and how the verifier chips indeed check the feasibility of the given 1-In-3-SAT instance. Figure 7 depicts a specific problem instance and a corresponding solution for the original construction of [4] .
Figure 7. (a) The circuit board. By setting suitable row and column sums to zero it is ensured that the non-zero components ξ*_{p,q} of a solution are only possible in the bold-framed boxes within the white blocks in the clause chip, connectors, and the initializer. (b) A solution x* (non-zero components ξ*_{p,q} are depicted as black pixels), representing the Boolean solution (τ*_1, τ*_2, τ*_3, τ*_4) = (True, True, False, False).
The proof in [4] employs three different types of block constraints. For every block B_2(i, j), (i, j) ∈ C(m, n, 2), there is one of the following constraints; the names in parentheses are those from [4] (even though (≤, 2) may seem more natural here in the third case).
As, for binary variables, the constraint $\sum_{(p,q)\in B_2(i,j)} \xi_{p,q} = 0$ is equivalent to $\sum_{(p,q)\in B_2(i,j)} \xi_{p,q} \le 0$, it suffices to adapt the construction in such a way that no (=, 2)-block constraints are required. In fact, the (=, 2)-block constraints in the original construction are only employed for the ¬τ_t-chips, t ∈ [T], contained in the connectors.
For each ¬τ t -chip, t ∈ [T ], in a connector, we replace the (=, 2)-block constraints by a (≈, 1)-block constraint. Then, between any two clause chips we insert an additional copy of a connector and an initializer each. These copies are placed as indicated in Figure 5(b). The additional row and column sums are prescribed again to values 1 and 3 in alternation. The new copies ensure that each ¬τ t -chip, t ∈ [T ], is contained in a horizontal or vertical strip that contains a τ t -chip of an initializer. The corresponding row or column sums have values 1 and 3, and since we have (≈, 1)-block constraints for both chips, we can distribute the 1 + 3 = 4 ones only in such a way that each of the two chips contains exactly two 1's. In other words, each ¬τ t -chip, t ∈ [T ], is required to contain exactly two 1's. Figure 8 shows the solution of our correspondingly adapted circuit board from Figure 7.
With this adaptation the proof from [4] carries over. As a service to the reader we explain how the truth assignment is transmitted in our particular example; see Figure 9. The formal proof that, in particular, this transmission and, in general, the complete transformation works as needed, follows exactly as in [4].
Suppose the τ 1 -chip of the initializer in the lower right corner of the circuit board shown in Figure 9 is of type 1. The truth assignment is vertically transmitted from the initializer to a connector, which negates the Boolean value as the X-rays are set to 1 and 3 in alternation. Then, this Boolean value is horizontally transmitted to an initializer (again negated, so that we have the original values back), then vertically to a connector (again negated). From here the Boolean value is transmitted horizontally to the first clause chip, where it enters and exits, yet again negated, through a collector. In the part between the two collectors, the Boolean values for the literals appearing in the first clause take a path through the verifiers that ensure that exactly one literal is set to true; the values for the other literals are just transmitted (again with negation). This process continues in a similar way for the remaining clauses. We end up with feasible solution for our instance of Rec(k, 2, 0) if, and only if, there is a satisfying truth assignment for our given instance of 1-In-3-SAT.
Final Remarks
Let us first point out that the question of inferring information about an otherwise unknown object from observations "under the microscope," i.e., through certain windows, is fundamental for a wide range of applications. For corresponding results on the scanning of binary or integer matrices, see e.g. [9,10]. In our terminology, [9] studies the task of reconstructing binary matrices from their number of 1's in each window of a fixed size.
We will close by introducing a more general problem and stating some implications of our main Theorems 1 and 2.
Let us finally point out that Tables 2 and 3 can be largely extended. In particular, we can consider reconstruction problems for different types of windows, which are even allowed to vary for each (i, j) ∈ [m] × [n]. While some of the complexity results in this more general setting may seem rather marginal, others may be interesting for certain applications; see [3]. As this would go far beyond the scope of the present paper we refrain, however, from carrying the results to the extremes here.
"year": 2017,
"sha1": "ce5075db2ff7c5e7f890522a2f18902e09bdff4e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1702.06121",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ce5075db2ff7c5e7f890522a2f18902e09bdff4e",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
218674458 | pes2o/s2orc | v3-fos-license | Integrating Semantic and Structural Information with Graph Convolutional Network for Controversy Detection
Identifying controversial posts on social media is a fundamental task for mining public sentiment, assessing the influence of events, and alleviating the polarized views. However, existing methods fail to 1) effectively incorporate the semantic information from content-related posts; 2) preserve the structural information for reply relationship modeling; 3) properly handle posts from topics dissimilar to those in the training set. To overcome the first two limitations, we propose Topic-Post-Comment Graph Convolutional Network (TPC-GCN), which integrates the information from the graph structure and content of topics, posts, and comments for post-level controversy detection. As to the third limitation, we extend our model to Disentangled TPC-GCN (DTPC-GCN), to disentangle topic-related and topic-unrelated features and then fuse dynamically. Extensive experiments on two real-world datasets demonstrate that our models outperform existing methods. Analysis of the results and cases proves that our models can integrate both semantic and structural information with significant generalizability.
Introduction
Social media such as Reddit¹ and Chinese Weibo² has become a major channel through which people can easily propagate their views. In this open and free environment, the views expressed in posts often spark fierce discussion and raise controversy among the engaging users. These controversial posts provide a lens on public sentiment, which brings about several tasks such as news topic selection, influence assessment (Hessel and Lee, 2019), and alleviation of polarized views (Garimella et al., 2017). As a basis of all the mentioned tasks, automatically identifying controversial posts has attracted wide attention (Addawood et al., 2017; Coletto et al., 2017; Rethmeier et al., 2018; Hessel and Lee, 2019).

Figure 1: A controversial post P about whether Xiaomi's Mimoji copies Apple's Memoji, with the comments attached to P (e.g., C_3 refuting P and C_3-1 refuting C_3) and content-related posts (e.g., RP_1 and RP_2, both refuting P). The Supports and Refutations are to either their respective parent comments or P.

This work focuses on post-level controversy detection on social media, i.e., classifying whether a post is controversial or non-controversial. According to Coletto et al. (2017), a controversial post has debatable content and expresses an idea or an opinion which generates an argument in the responses, representing opposing opinions in favor of or in disagreement with the post. In practice, the responses to a target post (the post to be judged) generally come from two sources, i.e., the comments attached to the post and other content-related posts. Figure 1 shows an example where the target post P expresses that Xiaomi's Mimoji does not copy Apple's Memoji. We can see that: 1) The comments show more support and less refutation of P, which raises only a small controversy; the related posts, however, show extra refutations and enhance the controversy of P. 2) C_3-1 expresses refutation literally, but it actually supports P because, in the comment tree, it refutes C_3, a refuting comment to P. 3) There exist two kinds of semantic clues for detection, topic-related and topic-unrelated clues. For example, "support" and "against" are unrelated to this topic, while "copy" and "similar" are topic-related. Topic-related clues can help identify posts in a similar topic, but how effective they are for posts in dissimilar topics depends on the specific situation. Therefore, to comprehensively evaluate the controversy of a post, the information from both the comments and related posts should be integrated properly at the semantic and structural level.
Existing methods for detecting controversy on social media have exploited semantic features of the target post and its comments as well as structural features. However, three drawbacks limit their performance: 1) These methods ignore the role of related posts in the same topic in providing extra supports or refutations of the target post; only exploiting the information from comments is insufficient. 2) These methods use statistical structure-based features which cannot model the reply-structure relationships (like P–C_1 and C_3–C_3-1 in Figure 1); the stances of some comments (like C_3-1) may therefore be misunderstood by the model.
3) These methods tend to capture topic-related features that are not shared among different topics when directly using content information (Wang et al., 2018). Topic-related features can be helpful when the testing post is from a topic similar to those in the training set, but would hurt the detection otherwise.
Recently, graph convolutional networks (GCNs) have achieved great success in many areas (Marcheggiani et al., 2018; Ying et al., 2018; Yao et al., 2019; Li and Goldwasser, 2019) due to their ability to encode both local graph structure and node features (Kipf and Welling, 2017). To overcome the first two drawbacks of existing works, we propose a Topic-Post-Comment Graph Convolutional Network (TPC-GCN) (see Figure 2a) that integrates the information from the graph structure and content of topics, posts, and comments for post-level controversy detection. First, we create a TPC graph to describe the relationships among topics, posts, and comments. To preserve the reply-structure information, we connect each comment node with the post/comment node it replies to. To include the information from related posts, we connect each post node with its topic node. Then, a GCN model is applied to learn node representations with content and reply-structure information fused. Finally, the updated vectors of a post and its comments are fused to predict the controversy.
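For concreteness, the sketch below is an illustrative PyTorch re-implementation of the generic graph-convolution layer of Kipf and Welling (2017), which TPC-GCN builds on; it is not the authors' released code, and all class and function names are ours:

```python
import torch
import torch.nn as nn

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize an adjacency matrix: D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # a_hat: (N, N) normalized adjacency; h: (N, in_dim) node representations
        return torch.relu(a_hat @ self.linear(h))
```

Stacking two such layers over the TPC graph lets a post node aggregate information both from its comments (via reply edges) and from related posts (via the topic hub node) within its receptive field.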
TPC-GCN is mainly intended for detection in intra-topic mode, i.e., the topics of testing posts appear in the training set, for it cannot overcome the third drawback. We thus extend it to a two-branch version named Disentangled TPC-GCN (DTPC-GCN) (see Figure 2b) for inter-topic mode (no testing posts are from the topics in the training set). We use a TPC-GCN in each branch, but add an auxiliary task, topic classification. The goals of the two branches for the auxiliary task are opposite, so as to disentangle the topic-related and topic-unrelated features. The disentangled features can be dynamically fused according to the content of test samples with an attention mechanism for the final decision. Extensive experiments demonstrate that our models outperform existing methods and can exploit features dynamically and effectively. The main contributions of this paper are as follows: 1. We propose two novel GCN-based models, TPC-GCN and DTPC-GCN, for post-level controversy detection. The models can integrate the information from the structure and content of topics, posts, and comments, especially the information from the related posts and the reply tree. Specifically, DTPC-GCN can further disentangle the topic-related features and topic-unrelated features for inter-topic detection.
2. We build a Chinese dataset for controversy detection, consisting of 5,676 posts collected from the Chinese microblogging platform Weibo, each of which is manually labeled as controversial or non-controversial. To the best of our knowledge, this is the first released Chinese dataset for controversy detection.
3. Experiments on two real-world datasets demonstrate that the proposed models effectively identify controversial posts and outperform existing methods in both performance and generalization.
Related Work
Controversy detection on the Internet has been studied on both web pages and social media. Existing works detecting controversy on web pages mostly aim at identifying controversial articles.
Unlike web pages, social media contains more diverse topics and fiercer discussion among users, which makes controversy detection on social media more challenging. Early studies assume that a topic has an intrinsic level of controversy and focus on topic-level controversy detection. Popescu and Pennacchiotti (2010) detect controversial snapshots (consisting of many tweets referring to a topic) based on Twitter-based and external-knowledge features. Garimella et al. (2018) build graphs for a Twitter topic, such as the retweet graph and the following graph, and then apply graph partitioning to measure the extent of controversy. However, topic-level detection is coarse, because non-controversial posts exist within a controversial topic and vice versa. Recent works focus on post-level controversy detection by leveraging language features, such as emotional and topic-related phrases (Rethmeier et al., 2018) and emphatic and Twitter-specific features (Addawood et al., 2017). Other graph-based methods exploit features from the following graph and the comment tree (Coletto et al., 2017; Hessel and Lee, 2019). The limitations of current post-level works are that they do not effectively integrate the information from content and reply structure, and they ignore the role of posts in the same topic. Moreover, the difference between intra-topic and inter-topic modes is not recognized. Only Hessel and Lee (2019) deal with topic transfer, but they train on each topic and test on the others to explore transferability, which is not practical in deployment.
Methodology
In this section, we introduce the Topic-Post-Comment Graph Convolutional Network (TPC-GCN) and its extension Disentangled TPC-GCN (DTPC-GCN), as shown in Figure 2. We first introduce the TPC graph construction and then detail the two models.
TPC Graph Construction
To model the paths of message passing among topics, posts, and comments, we first construct a topic-post-comment graph G = (V, E) for the target posts, where V and E denote the sets of nodes and edges respectively. First, to preserve the post-comment and inter-comment relationships, we incorporate the comment tree, in which each comment node is connected to the post/comment node it replies to. Then, to let posts capture information from related posts in the same topic, which Section 1 showed to be helpful, we connect each post with its topic. The topic node can be regarded as a hub that integrates and exchanges information. An alternative would be to connect the post nodes within a topic pairwise, but the complexity would be high. Note that the topic here is not necessarily provided by the platform (such as the subreddit on Reddit or the hashtag (#) on Weibo); when topics are not provided, text-based clustering algorithms can be used to group related posts into a topic (Nematzadeh et al., 2019).
In G, each node represents a topic, a post, or a comment, and each edge represents a topic-post, post-comment, or comment-comment connection. We initially represent each node v with an embedding vector x of its text obtained from a pre-trained language model.
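The graph construction can be summarized with a small sketch. The code below is illustrative only (it is not the authors' implementation) and uses hypothetical identifiers: it builds the undirected TPC adjacency from a topic assignment for each post and a parent pointer for each comment.

```python
# Illustrative sketch of TPC-graph construction as an adjacency list.
from collections import defaultdict

def build_tpc_graph(posts, comments):
    """posts: {post_id: topic_id}; comments: {comment_id: parent_id},
    where a parent is the post or comment being replied to."""
    edges = defaultdict(set)

    def connect(a, b):              # undirected edge
        edges[a].add(b)
        edges[b].add(a)

    # topic-post edges: the topic node acts as a hub for related posts
    for post_id, topic_id in posts.items():
        connect(topic_id, post_id)
    # post-comment and comment-comment edges preserve the reply structure
    for comment_id, parent_id in comments.items():
        connect(parent_id, comment_id)
    return edges

# Example: one topic, two posts, and a two-level comment tree under post "p1".
g = build_tpc_graph({"p1": "t1", "p2": "t1"},
                    {"c1": "p1", "c2": "p1", "c2-1": "c2"})
```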
TPC-GCN
In this subsection, we detail the TPC-GCN, by first introducing the generic GCN and then our TPC-GCN model.
The GCN has been shown to be an efficient neural network that operates on a graph to encode both local graph structure and node features (Kipf and Welling, 2017). This characteristic matches our goal of integrating semantic and structural information. In a GCN, each node is updated according to the aggregated information of its neighbor nodes and itself, so the learned representation includes information from both content and structure. For a node $v_i \in V$, the update rule in the message-passing process is

$$h_i^{(l+1)} = \sigma\Big(\sum_{j \in N_i} g\big(h_j^{(l)}\big) + b^{(l)}\Big), \qquad (1)$$

where $h_i^{(l)}$ is the hidden state of node $v_i$ in the $l$-th layer of the GCN and $N_i$ is the neighbor set of node $v_i$ with itself included. Incoming messages from $N_i$ are transformed by the function $g$ and then passed through the activation function $\sigma$ (such as ReLU) to output the new representation of each node; $b^{(l)}$ is the bias term. Following Kipf and Welling (2017), we use a linear transform function $g(h_j^{(l)}) = W^{(l)} h_j^{(l)}$, where $W^{(l)}$ is a learnable weight matrix. Based on the node-wise Equation 1, the layer-wise propagation rule can be written as

$$H^{(l+1)} = \sigma\big(\hat{A} H^{(l)} W^{(l)} + B^{(l)}\big), \qquad (2)$$

where $H^{(l)}$ contains all node vectors in the $l$-th layer and $\hat{A}$ is the normalized adjacency matrix with inserted self-loops; $W^{(l)}$ is the weight matrix and $B^{(l)}$ is the broadcast bias term.
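As an illustration of Equation 2, the following minimal numpy sketch (not the authors' code) normalizes an adjacency matrix with self-loops and applies one propagation step; the toy graph, random weights, and symmetric normalization are assumptions consistent with Kipf and Welling (2017).

```python
import numpy as np

def normalize_adjacency(A):
    A_tilde = A + np.eye(A.shape[0])               # insert self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt        # A_hat

def gcn_layer(A_hat, H, W, B, activation=lambda x: np.maximum(x, 0.0)):
    # H_next = ReLU(A_hat @ H @ W + B), i.e. Equation 2 for one layer
    return activation(A_hat @ H @ W + B)

# toy example: 4 nodes, 8-dimensional input features, 3-dimensional output
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H0 = np.random.randn(4, 8)
W0, B0 = np.random.randn(8, 3), np.zeros((1, 3))
H1 = gcn_layer(normalize_adjacency(A), H0, W0, B0)
```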
In TPC-GCN (see Figure 2a), we input the matrix of N d-dimensional embedding vectors, $H^{(0)} = X \in \mathbb{R}^{N \times d}$, to a two-layer GCN to obtain the representation after message passing, $H^{(2)}$. Next, the vector of each post node i and the vectors of its attached comment nodes are averaged to form the fusion vector $f_i$ of the post. Finally, we apply a softmax function to the fusion vectors to obtain the controversy probability of each post. Cross entropy is used as the loss function:

$$L_c = -\frac{1}{N}\sum_{i=1}^{N} \big[y_i^{c}\log p_i^{c} + (1 - y_i^{c})\log(1 - p_i^{c})\big], \qquad (3)$$

where $y_i^c$ is a label with 1 representing controversial and 0 representing non-controversial, $p_i^c$ is the predicted probability that the i-th post is controversial, and N is the size of the training set. The limitation of TPC-GCN is that its representation tends to be topic-related, as discussed in Section 1. This limited generalizability makes TPC-GCN more suitable for intra-topic detection than for inter-topic detection.
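A minimal sketch of this readout, assuming hypothetical node indices: the post vector is averaged with its comment vectors, a softmax gives the controversy probability, and Equation 3 is evaluated for a single example.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def post_fusion_vector(H_final, post_idx, comment_idxs):
    nodes = [post_idx] + list(comment_idxs)
    return H_final[nodes].mean(axis=0)              # fusion vector f_i

def cross_entropy(p_controversial, y):
    # Equation 3 for one post; average over the training set in practice
    return -(y * np.log(p_controversial) + (1 - y) * np.log(1 - p_controversial))

H2 = np.random.randn(6, 2)                          # toy final-layer output
f = post_fusion_vector(H2, post_idx=1, comment_idxs=[2, 3, 4])
p = softmax(f)[1]                                   # P(controversial)
loss = cross_entropy(p, y=1)
```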
Disentangled TPC-GCN
Intuitively, topic-unrelated features are more effective when testing on posts from unknown topics (inter-topic detection). However, topic-related features can help when the unknown topics are similar to topics in the training set. Therefore, both topic-related and topic-unrelated features are useful, but their weights vary from sample to sample. This indicates that the two kinds of features should be disentangled and then dynamically fused. Based on this analysis, we propose the extension of TPC-GCN, the Disentangled TPC-GCN (see Figure 2b), for inter-topic detection. DTPC-GCN consists of two parts: a two-branch multi-task architecture for disentanglement, and an attention mechanism for dynamic fusion. Two-branch Multi-task Architecture. To obtain topic-related and topic-unrelated features at the same time, we use two branches of TPC-GCN in a multi-task architecture, denoted R for the topic-related branch and U for the topic-unrelated one. In both R and U, an auxiliary task, topic classification, is introduced to guide the learning of topic-oriented representations.
For each branch, we first train the first GCN layer with the topic classification task. The input to the topic classifier is the fusion vectors from $H^{(1)}$, obtained by the same process as $f_i$ in TPC-GCN. Cross entropy is used as the loss function:

$$L_t = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k} y_{ik}^{t}\log p_{ik}^{t},$$

where $y_{ik}^t$ is a label with 1 representing the ground-truth topic and 0 representing an incorrect topic class, $p_{ik}^t$ is the predicted probability of the i-th post belonging to the k-th topic, and N is the size of the training set. The difference between R and U is that we minimize $L_t$ in Branch R to obtain topic-distinctive features, but maximize $L_t$ in Branch U to obtain topic-confusing features.
Then we add the second GCN layer and train each branch individually on two tasks, i.e., topic and controversy classification. Branches U and R are expected to evaluate controversy effectively using features that differ in their relationship to the topics. Attention Mechanism. After this individual training, Branches U and R are expected to capture the topic-unrelated and topic-related features, respectively. We then fuse the features from the two branches dynamically. Specifically, we freeze the parameters of U and R and train the dynamic fusion component. For the weighted combination of the fusion vectors $f_U$ and $f_R$ from the two branches, we use an attention mechanism: a scoring function $F(\cdot)$, parameterized by a weight matrix $W_F$, a bias term $b_F$, and a transposed weight vector $v^{\top}$, outputs a score for each fusion vector. The scores of the features from Branches U and R are normalized via a softmax function to give the branch weights, and the weighted sum u of the two fusion vectors is finally used for controversy classification. The loss function is the same as Equation 3.
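A sketch of the dynamic fusion step is shown below. It is illustrative only; in particular, the tanh nonlinearity inside the scoring function is an assumption, since the text specifies only W_F, b_F, v, and a softmax over the two branch scores.

```python
import numpy as np

def score(f, W_F, b_F, v):
    # assumed additive-attention form: F(f) = v^T tanh(W_F f + b_F)
    return v @ np.tanh(W_F @ f + b_F)

def fuse(f_U, f_R, W_F, b_F, v):
    s = np.array([score(f_U, W_F, b_F, v), score(f_R, W_F, b_F, v)])
    alpha = np.exp(s - s.max())
    alpha /= alpha.sum()                            # softmax branch weights
    u = alpha[0] * f_U + alpha[1] * f_R             # fused vector for the classifier
    return u, alpha

d = 16
f_U, f_R = np.random.randn(d), np.random.randn(d)
W_F, b_F, v = np.random.randn(d, d), np.zeros(d), np.random.randn(d)
u, alpha = fuse(f_U, f_R, W_F, b_F, v)
```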
Experiment
In this section, we conduct experiments to compare our proposed models and other baseline models.
Specifically, we mainly answer the following evaluation questions. EQ1: Are TPC-GCN and DTPC-GCN able to improve the performance of controversy detection? EQ2: How effective are the different sources of information in TPC-GCN, including the content of topics, posts, and comments as well as the topic-post-comment structure? EQ3: Can DTPC-GCN learn disentangled features and dynamically fuse them for controversy detection?
Dataset
We perform our experiments on two real-world datasets in different languages. Table 1 shows the statistics of the two datasets. The details are as follows. Reddit Dataset. The Reddit dataset released by Hessel and Lee (2019) and Jason Baumgartner of pushshift.io is the only accessible English dataset for controversy detection of social-media posts. This dataset covers six subreddits (which can be regarded as overarching topics): AskMen, AskWomen, Fitness, LifeProTips, personalfinance, and relationships. Each post belongs to a subreddit and is guaranteed to have more than 30 attached comments; the tree structure of the comments is also preserved. We use the comments posted within the first hour after a post is published.
Weibo Dataset. We built a Chinese dataset for controversy detection on Weibo in this work. We first manually selected 49 widely discussed, multi-domain topics from July 2017 to August 2019 (see Appendix A). Then, we crawled the posts on those topics and kept those with at least two comments. We rebuilt the comment tree from the comment times and usernames because no official structure is provided. Finally, annotators were asked to read each post and annotate it based on both the post content and the user stances in the comments/replies. Each post was labeled by two annotators (Cohen's Kappa coefficient = 0.71).
When the annotators disagreed, the authors discussed the case and determined the label. In total, the dataset contains 1,992 controversial and 3,684 non-controversial posts, which reflects the class imbalance found in real-world scenarios. As far as we know, this is the first released dataset for controversy detection on Chinese social media. We use at most 15 comments per post owing to computational limits.
In the intra-topic experiments, for the Weibo dataset we randomly split the posts within each topic into training, validation, and test sets with a ratio of 4:1:1 and then merged the splits across all topics; for the Reddit dataset we applied the data partition provided by the authors, which has a ratio of 3:1:1. In the inter-topic experiments, for both the Weibo and Reddit datasets we again split with a ratio of 4:1:1, but at the topic level.
Implementation Details
In the (D)TPC-GCN models, each node is initialized with its textual content using pre-trained BERT (BERT-Base Chinese for Weibo and BERT-Base Uncased for Reddit), with a padding size of 45. We fine-tune only the last layer (layer 11) of BERT for simplicity, and then apply a dense layer with a ReLU activation to reduce the dimensionality of the representation from 768 to 300. In TPC-GCN, the hidden-state sizes of the two GCN layers are 100 and 2, respectively, with ReLU after the first GCN layer. To avoid overfitting, a dropout layer with a rate of 0.35 is added between the two layers. We apply a softmax function to the fusion vector to obtain the controversy probability. In DTPC-GCN, the hidden-state sizes of the first and second GCN layers in each branch are 32 and 16, and the dropout rate between the two GCN layers in each branch is 0.4. The batch size is 1 (one TPC graph) in the (D)TPC-GCN models and 128 (posts with attached replies) in the PC-GCN model and the baselines. The optimizer is BertAdam in all BERT-based models and Adam (Kingma and Ba, 2014) in the other semantic models. The learning rate is 1e-4 and the number of epochs is 100. We report the best model according to performance on the validation set. For the semantic models not based on BERT, we use two publicly available large-scale word-embedding files to obtain the model input: sgns.weibo.bigram-char for Weibo and glove.42B.300d for Reddit.
Baselines
To validate the effectiveness of our methods, we implemented several representative methods including content-based, structure-based and fusion methods as baselines.
Content-based Methods
We implement mainstream text-classification models, including TextCNN (Kim, 2014); BiLSTM-Att, a bi-directional LSTM (Graves and Schmidhuber, 2005) with attention (Bahdanau et al., 2015); BiGRU-Att, a bi-directional GRU (Cho et al., 2014) with attention; and BERT (Devlin et al., 2019), for which only the last layer is fine-tuned for simplicity. For a fair comparison, we concatenate the post and its attached comments as the input, instead of feeding the post only.
Structure-based Methods
Because structure-based features of the post and its comment tree are sparse and unsystematic in previous works, we integrate the plausible features from Coletto et al. (2017) and Hessel and Lee (2019). Following the latter, we feed them into a series of classifiers and choose the best model for classification; we name this method SFC. For a post-comment graph, the feature set contains the average depth (average length of root-to-leaf paths), the maximum relative degree (the largest node degree divided by the degree of the root), C-RATE features (the logged reply time between the post and comments, or between pairs of comments), and C-TREE features (statistics of the comment tree, such as the maximum depth/total comment ratio).
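As an illustration, the sketch below (not the SFC implementation) computes two of the listed features, the average depth and the maximum relative degree, from a comment tree stored as child-to-parent pointers.

```python
from collections import defaultdict

def tree_features(parents, root):
    """parents: {child_id: parent_id}; root: the post id."""
    children = defaultdict(list)
    for child, parent in parents.items():
        children[parent].append(child)

    def leaf_depths(node, depth=0):
        if not children[node]:
            return [depth]
        out = []
        for c in children[node]:
            out.extend(leaf_depths(c, depth + 1))
        return out

    depths = leaf_depths(root)
    avg_depth = sum(depths) / len(depths)           # mean root-to-leaf path length

    nodes = set(parents) | set(parents.values()) | {root}
    degree = {n: len(children[n]) + (0 if n == root else 1) for n in nodes}
    max_rel_degree = max(degree.values()) / max(degree[root], 1)
    return avg_depth, max_rel_degree

print(tree_features({"c1": "p", "c2": "p", "c2-1": "c2"}, root="p"))
```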
Fusion Method
The compared fusion method from Hessel and Lee (2019) aims to identify controversial posts using semantic and structural information. It extracts text features of topics, posts, and comments with BERT, and structural features including the C-RATE and C-TREE features mentioned above; publication-time features are also exploited.
Performance Comparison
To answer EQ1, we compare the performance of the proposed (D)TPC-GCN with the baselines above on the two datasets. The evaluation metrics are macro average precision (Avg. P), macro average recall (Avg. R), macro average F1 score (Avg. F1), and accuracy (Acc.). Tables 2 and 3 show the performance of all compared methods for intra-topic and inter-topic detection, respectively.
In the intra-topic experiments, we can see that: 1) TPC-GCN outperforms all compared methods on both datasets, indicating that our model detects controversy effectively and generalizes across datasets. 2) The structure-based model, SFC, yields low scores on both datasets, indicating that statistical structural information alone is insufficient for identifying controversy in a timely manner. 3) The fusion models outperform or are comparable to the other baselines, which shows that fusing content and structure information is necessary to improve performance.
In the inter-topic experiments, we can see that: 1) DTPC-GCN outperforms all baselines by at least 6.4% in F1 score, which validates that DTPC-GCN can detect controversy on unseen or dissimilar topics. 2) DTPC-GCN outperforms TPC-GCN by 3.74% on Weibo and 4.00% on Reddit, indicating that feature disentanglement and dynamic fusion significantly improve inter-topic controversy detection.
Ablation Study
To answer EQ2 and part of EQ3, we also evaluate several internal models, i.e., simplified variants of (D)TPC-GCN obtained by removing components or masking representations. With this ablation study, we aim to investigate the impact of content and structural information in TPC-GCN, and of topic-related and topic-unrelated information in DTPC-GCN.
Ablation Study of TPC-GCN
We delete certain types of nodes (and the edges connected to them) to investigate their overall impact, and we mask content by randomizing the initial representations to investigate the impact of content. Specifically, we investigate the following simplified models of TPC-GCN: PC-GCN / TP-GCN: discard the topic / comment nodes.
(RT)PC-GCN / T(RP)C-GCN / TP(RC)-GCN: randomly initialize the representations of topic / post / comment nodes. From Table 4, we make the following observations: 1) TPC-GCN outperforms all simplified models, indicating the necessity of both structure and content from all types of nodes. 2) Although PC-GCN uses no extra information (i.e., information from other posts in the same topic), its performance is still better than the baselines (Tables 2 and 4), showing the effectiveness of our methods. 3) The models that delete comment information, i.e., TP-GCN and TP(RC)-GCN, suffer a dramatic drop in performance, which shows that the comment information is the most important. 4) The effect of structural information varies with the situation: without content, the comment structure can work on its own (TP(RC)-GCN > TP-GCN), whereas for topics the structure has to act together with content ((RT)PC-GCN < PC-GCN on the Weibo dataset).
Ablation Study of DTPC-GCN
We focus on the roles of the U (topic-unrelated) branch and the R (topic-related) branch. U branch only: only the U branch is trained, to capture topic-unrelated features. R branch only: only the R branch is trained, to capture topic-related features.
Table 5 shows that both branches can identify controversial posts well, but their performance is worse than that of the fusion model. Specifically, the U branch performs slightly better than R, indicating that topic-unrelated features are more suitable for inter-topic detection. We infer that the two branches learn good but different representations under the guidance of the auxiliary task.
Case Study
We conduct a case study to further answer EQ3 from the perspective of individual samples. We compare the attention weights of the U and R branches in DTPC-GCN and show examples where the final decision leans on one of the two branches.
Figure 3 shows two examples from the test set of the Weibo dataset. DTPC-GCN relies more on the topic-unrelated features from Branch U when classifying Post 1 (0.874 > 0.126), and more on the topic-related features from Branch R when classifying Post 2 (0.217 < 0.783). The topic of Post 1, Cancel the Driving License, is only weakly related to topics in the training set, and the comments mostly use topic-unspecific expressions such as simple support or "good proposal"; thus the topic-unrelated features are more useful for the decision. In contrast, Post 2 discusses the death penalty for traffickers of women and children, which is related to one of the training-set topics, Improve Sentencing Standards for Sexual Assault on Children. Further, both topics are full of comments about the death penalty, so exploiting more of the topic-related features is reasonable for the final decision.
Error Analysis
By analyzing 186 misclassified samples in the Weibo dataset, we find three main types of errors: 1) 22.6% of the misclassified samples contain too much noise in the comments, including unrelated and neutral comments. 2) 16.1% have a very deep tree structure; this kind of structure is helpful for controversy detection (Hessel and Lee, 2019), but the ability of the GCN to extract information from it is limited. 3) 10.2% involve obscure and complex statements. These cases indicate that better handling of noisy data, learning deeper structural features, and mining the semantics more thoroughly have the potential to improve performance.
Conclusion
In this paper, we propose a novel method, TPC-GCN, that integrates information from the graph structure and the content of topics, posts, and comments for post-level controversy detection on social media. Unlike existing works, we exploit the information from related posts in the same topic and from the reply structure for more effective detection. To improve performance on inter-topic detection, we propose an extension of TPC-GCN named DTPC-GCN, which disentangles topic-related and topic-unrelated features and then dynamically fuses them. Extensive experiments on two datasets demonstrate that the proposed models outperform the compared methods and show that they integrate both semantic and structural information with significant generalizability.
Figure 2: Architecture of (a) the Topic-Post-Comment Graph Convolutional Network (TPC-GCN) and (b) the Disentangled TPC-GCN (DTPC-GCN). The upper post in the TPC graph is taken as an example to illustrate the methods. $H^{(l)}_B$ is the representation matrix containing all node vectors in the l-th layer of Branch B, and X is the initial representation. $L_c$ and $L_t$ refer to the controversy classification loss and the topic classification loss, respectively; FC denotes a fully connected layer.
Figure 3: Examples of controversial posts that rely more on one of the two branches. The attention weights of the two posts are shown on the horizontal bars (left: the U branch; right: the R branch). Post 1 relies more on U (0.874 > 0.126), while Post 2 relies more on R (0.217 < 0.783).
Target Post 1 (Topic: Cancel the Driving License): "Cancelling the physical driving license can bring many benefits: no punishment for forgetting to carry the license; reduced administrative costs; an end to the use of fake licenses…" Comments attached to Post 1: (Support) "Yes! Just use the citizen's ID card for replacement." (Support) "Good proposal! Support!" (Refute) "I don't support it." (Refute) "Don't think the cost can be reduced. The costs of new electronic devices and a larger data system are not small."
Figure 1 example. Target Post P (Topic: a microblogger implies that Xiaomi's Mimoji copies Apple's Memoji). Comments: (Support) C1: "A rational fan appeared finally. Support you." (Support) C2: "What you said is persuasive." (Refute) C3: …
Table 1: Statistics of the two datasets.
Target Post 2 (Topic: Suggest Death Penalty for Woman- and Child-traffickers): "Human traffickers are hateful. People's Congress deputy Baoyan Zhang thinks that woman- and child-trafficking cases should be sentenced to death and that the present sentence of five to ten years in prison is not heavy enough." Comments attached to Post 2: (Refute) "Drug smugglers are sentenced to death, but so many people still do it. If we use the death penalty on traffickers, they may take crazier actions. Should think more carefully." (Support) "Directly sentence to death. Execute immediately!" (Support) "Those harboring traffickers also need the death penalty!" (Support) "Support! All the child traffickers should be sentenced to the death penalty!"
 | 2020-05-19T01:01:07.622Z | 2020-05-16T00:00:00.000 | {
"year": 2020,
"sha1": "7e82015c386726f4b8f6f686b6e6bb7d1e7564bb",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/2020.acl-main.49.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "7e82015c386726f4b8f6f686b6e6bb7d1e7564bb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
4785699 | pes2o/s2orc | v3-fos-license | Response of Pacific-sector Antarctic ice shelves to the El Niño/Southern Oscillation
Satellite observations over the past two decades have revealed increasing loss of grounded ice in West Antarctica, associated with floating ice shelves that have been thinning. Thinning reduces an ice-shelf’s ability to restrain grounded-ice discharge, yet our understanding of the climate processes that drive mass changes is limited. Here, we use ice-shelf height data from four satellite altimeter missions (1994–2017) to show a direct link between ice-shelf-height variability in the Antarctic Pacific sector and changes in regional atmospheric circulation driven by the El Niño-Southern Oscillation. This link is strongest from Dotson to Ross ice shelves and weaker elsewhere. During intense El Niño years, height increase by accumulation exceeds the height decrease by basal melting, but net ice-shelf mass declines as basal ice loss exceeds lower-density snow gain. Our results demonstrate a substantial response of Amundsen Sea ice shelves to global and regional climate variability, with rates of change in height and mass on interannual timescales that can be comparable to the longer-term trend, and with mass changes from surface accumulation offsetting a significant fraction of the changes in basal melting. This implies that ice-shelf height and mass variability will increase as interannual atmospheric variability increases in a warming climate.
glaciers and ice streams respond dynamically to internal instability mechanisms 5,7 . The predicted ice loss on timescales of decades to centuries from WAIS is about 1 m of global sea-level equivalent, with full ice-sheet loss within a few millennia 5,6 .
The acceleration of grounded ice loss in the Amundsen Sea (AS) sector of WAIS has been attributed to reduced backstress as the fringing ice shelves thin and their grounding lines retreat 5,8 . The rapid, sustained thinning of the AS ice shelves 9,10 appears to be caused by increasing wind-driven flow of warm Circumpolar Deep Water (CDW) into the ocean cavities beneath ice shelves, enhancing basal melting 10,11 . Models 12,13 and limited observations 14,15 suggest that ice shelves might respond to changes in CDW circulation on interannual timescales. However, a paucity of time series of ocean observations on the continental shelf offshore of AS ice shelves and in the sub-ice cavities limits our ability to confirm this hypothesis, leading us to seek indirect measures of the sensitivity of ice-shelf mass change to large-scale climate variability.
Climate variability in the Antarctic Pacific sector
The El Niño-Southern Oscillation (ENSO), the Southern Annular Mode (SAM, or Antarctic Oscillation), and variability of the Amundsen Sea Low (ASL) are well-known climate drivers of interannual changes in the Antarctic Pacific sector [16][17][18][19] . ENSO is the leading mode of ocean-atmosphere variability on timescales of 2-7 years in the tropical Pacific, and is the strongest interannual climate fluctuation at the global scale 20 . ENSO causes much of the observed variability of the atmosphere, ocean, and sea ice in the Amundsen-Bellingshausen Sea sector 12,13 , which exhibits the largest climate fluctuations around Antarctica 17,21 . Observed regional responses to ENSO include changes in snowfall [22][23][24] , surface air temperature 25 , sea ice extent 16,26,27 , upwelling of CDW near the front of Pine Island Glacier's ice shelf 14 , and variation in basal melting under Getz Ice Shelf 15 .
The SAM is a major driver of climate variability in the Southern Hemisphere, strongly influencing precipitation and temperature patterns from the subtropics to Antarctica 28,29 . SAM is usually strongest in austral spring and summer 30 . The phase of SAM influences the effect of ENSO in Antarctica, with the strongest Pacific sector response to ENSO when SAM is weak or in opposite phase 31 (i.e. with the combinations La Niña/SAM+ and El Niño/SAM− strengthening the atmospheric circulation anomalies in the mid-to-high latitudes).
The ASL is a persistent atmospheric low-pressure system located within the Amundsen and Bellingshausen seas, and plays the dominant role in determining the regional-scale pattern of atmospheric circulation across West Antarctica [17][18][19] . Through changes in strength (i.e. central pressure) and position, ASL determines the wind anomalies in all seasons, strongly influencing snowfall, temperature distribution and sea ice conditions near AS ice shelves 19 . Variations in the ASL position and strength are driven by tropical Pacific ocean-atmosphere variability (ENSO) and fluctuations in southern hemisphere pressure [17][18][19] . ASL central pressure tends to be lower during positive SAM conditions and La Niña years, and higher during El Niño years 17,31,32 .
In this study, we show that ice-shelf height and mass changes in the Pacific sector, in particular the AS sector ( Fig. 1), are correlated with interannual variability in regional atmospheric and oceanic circulation driven by ENSO.
Interannual changes in ice-shelf height
We merged data from four European Space Agency satellite radar-altimetry missions (ERS-1, ERS-2, Envisat, and CryoSat-2) to create individual records of ice-shelf height in the Pacific sector for the period 1994 to 2017. Each 23-year record samples every three months and represents an area of about 30 km × 30 km 9,33 . We detrended each height record using locally weighted regression (LOWESS) 34 . We then calculated interannual anomalies in ice-shelf height, δh(t), using 12-month running means.
We used 12-month running integrals of atmospheric and climate-index anomaly records (Methods) to represent climate forcing driving changes in δh. Correlating time-integrated forcing variables with δh(t) is analogous to correlating the climate drivers with dh/dt anomalies, while avoiding the high-frequency noise introduced by explicit differentiation of δh(t). We assessed the statistical significance of correlations from the 95% confidence intervals of 2000 bootstrap samples, taking into account autocorrelation in the time series 35,36 (Methods).
We evaluated an averaged δh(t) from 77 height records covering the AS sector ( Fig. 1), and correlated this time series with time-integrated climate indices (Table 1). AS-averaged δh(t) is moderately-to-strongly correlated (r = 0.47 to 0.61) with the Oceanic Niño Index (ONI) 37 , a measure of ENSO that tracks sea-surface temperatures in the east-central tropical Pacific. We also found a weak-to-moderate negative correlation between δh(t) and the ASL relative central pressure (r = −0.25 to −0.39). We found no significant correlation between δh(t) and SAM. The best fit between δh(t) and a linear combination of ONI and ASL indices (Fig. 1a) showed correlation values up to r = 0.67. Combining the time-integrated ONI and ASL central pressure into a single index lagged by 4-6 months with respect to δh(t) (Fig. 1a) slightly increased correlation values and their significance (Table 1); that is, some large changes in ice-shelf height that are unexplained by ONI (such as the height increase circa 2006) can be explained by the contribution from the ASL.
Correlation patterns between AS-averaged ice-shelf height anomaly and the time-integrated zonal (westerly) and meridional (southerly) components of the wind from ERA-Interim reanalysis 38 lagged by 4-6 months (Fig. 2), reveal a strong link between ice-shelf height variability and local environmental controls. Zonal wind anomalies modulate ice-shelf basal melting through upwelling and downwelling of coastal waters 13,14 , and meridional wind anomalies modulate the transport of warm moist air (and associated precipitation) from the ocean to the AS ice shelves. The same fields (zonal and meridional wind, plus sea-ice concentration) showed approximately the same spatial pattern and correlation values when correlated with ONI (Extended Data Fig. 2).
Ice-shelf height and mass changes during 1997-2000
The most rapid changes in δh(t) generally coincided with large changes in ONI (Fig. 1b), with some modulation of the response to ENSO by changes in the ASL (Fig. 1a). The largest change in ice-shelf-height anomalies in the AS sector occurred between 1998 and 2001 (Fig. 1a), with more than 0.5 m of averaged surface lowering. This period encompassed one of the two strongest El Niños on record since 1950 (1997-1998), followed by a strong and long-lasting La Niña period between 1999 and 2001 (Fig. 1b).
ERA-Interim reanalysis revealed large differences in environmental conditions between the strong El Niño of 1997-1998 (EN1997) and the subsequent strong La Niña of 1999-2000 (LN1999) (Fig. 3). During EN1997 the westerly wind was stronger than normal (defined here as the 1979-2017 climatological mean), increasing upwelling of ocean waters on the continental shelf (Fig. 3a). The regionally-averaged air temperature was also warmer than normal, although with a cold anomaly present along the coast from Getz Ice Shelf to the western side of Pine Island Bay (Fig. 3b). Precipitation was higher than normal during EN1997, with the largest anomaly centered on Getz (Fig. 3c). There was an overall reduction in sea ice during EN1997 relative to LN1999, although its spatial structure was complex (Fig. 3d). Concurrently, AS ice-shelf height increased overall during EN1997 and decreased during LN1999, with respect to the longer-term trend (Fig. 3d).
We assessed the relative contributions of anomalies in atmosphere-driven accumulation and ocean-driven melt by partitioning the observed (tide-corrected) ice-shelf height change (Δh_OBS) between two epochs into the sum of height changes due to atmospheric pressure (the inverse barometer effect, Δh_IBE) 39, buoyancy-compensated surface accumulation (Δh_SMB), basal melting (Δh_BMB) and ice dynamics (Δh_DYN), such that

$$\Delta h_{OBS} = \Delta h_{IBE} + \Delta h_{SMB} + \Delta h_{BMB} + \Delta h_{DYN}. \qquad (1)$$

We used Eq. (1) to estimate the relative magnitudes of accumulation- and melt-driven changes between EN1997 and LN1999 averaged over the AS ice-shelf area. We assume that Δh_DYN is zero: within this short time interval (annual averages centered at 1998.2 and 1999.5), Δh_DYN is about one to two orders of magnitude smaller than Δh_OBS (see Methods), and regional averaging smooths out non-coherent changes in ice dynamics across different ice shelves. We estimated Δh_IBE from the ERA-Interim pressure field and Δh_SMB from modeled surface mass balance (SMB) 40,41 and surface (firn) density. We obtained SMB from the regional climate model RACMO2.3 41 (which is forced by ERA-Interim), and from the ERA-Interim precipitation field directly; estimates of SMB from the two approaches are consistent (Extended Data Fig. 3). We estimated Δh_SMB for a range of firn densities (a Normal distribution with μ = 495 and σ = 25 kg/m3; Methods) that captures the likely densities of the top one-meter firn layer on AS ice shelves (Extended Data Fig. 4). We then estimated Δh_BMB as the residual term in Eq. (1), incorporating uncertainties for processes such as compaction rate and ice dynamics through Monte Carlo methods (Methods). The spatial variability of these terms is shown in Fig. 4.
We calculated the surface-mass anomaly from RACMO2.3, integrated over the AS ice-shelf area for the 1.3-year EN1997-LN1999 period, to be −42 ± 11 Gt. For the range of assumed surface densities, the basal mass-balance anomaly required to explain Δh OBS in Eq. (1) is about 207 ± 111 Gt. This positive basal mass balance (i.e. less mass during EN1997 relative to LN1999) is consistent with previous reports of increased melt rate during El Niño and decreases during La Niña [12][13][14] . The surface-mass anomaly (accumulation) is about 20% of the basal mass anomaly (melting) in this case. Mass-anomaly change rates during EN1997-LN1999 of −32 ± 8 Gt/yr for SMB and 159 ± 85 Gt/yr for BMB are comparable in magnitude to the reported 42,43 longer-term changes in ice-shelf mass balance for the AS sector, with a mean SMB of ~+55 Gt/yr and a total mass-loss trend of about 156-173 Gt/yr (Extended Data Table 1).
Coherence and extent of ENSO influence across the region
We assessed the spatial coherence of height changes in the AS by accounting for the covariation between the different locations, decomposing the set of 77 individual time series across the AS sector into principal modes of variability (i.e. empirical orthogonal functions; EOFs) using multivariate Singular Spectrum Analysis (SSA) (Methods) 44,45 . This spectral decomposition allowed us to detect small, coherent signals (with time-variable amplitude and phase) in short and noisy records while testing the significance of extracted modes 46 . This is a standard procedure for identifying the dominant frequencies of ENSO in time series of climate indices 44 .
The spectrum of changes in ice-shelf height (Extended Data Fig. 5) identified two statistically significant (at the 95% level) spectral regions: a band with characteristic periodicity of 4-5 years (1st and 2nd EOFs), and the less energetic annual cycle (3rd and 4th EOFs). We obtained the same dominant interannual mode (~4-5-year timescales) when we applied univariate SSA to the full ONI record (1950 to 2017), also represented by the leading pair of EOFs 44 (not shown). This agreement between the dominant timescales of ONI and Δh_OBS supports a causal link between ENSO and the variability of the AS-sector ice shelves.
To quantify the extent of the ENSO influence across the region, we derived a "similarity index" (i.e. a distance metric) between the variance-normalized time-integrated ONI and δh(t) averaged over each ice shelf in the Pacific sector (Methods). Dotson and Getz ice shelves in the AS sector, and Nickerson, Sulzberger and Ross, have the greatest similarity with ENSO (Fig. 5). Getz Ice Shelf dominates the AS sector-averaged variability because it accounts for about 60% of the total AS ice-shelf area, is centered on the ENSO precipitation anomaly ( Fig. 3c and Fig. 4c), and has a large inferred ENSO-driven basal melt-rate anomaly ( Fig. 4d and Extended Data Table 1). These results suggest that Getz Ice Shelf is the largest contributor to ENSO-driven variability of freshwater flux to the AS.
Global and regional environmental controls on ice-shelf change
Besides ENSO and ASL, other regional modes of interannual variability (e.g. the Zonal Wave Three 47 ) may also contribute to observed changes in the Pacific sector ice shelves, even though their correlations with δh(t) might be negligible; e.g. SAM (Table 1). These other modes interact with ENSO and with each other to drive complex patterns of ice-shelf height change that cannot be solely explained by a linear combination of climate modes. Further, studies 17,19,27,32 correlating ENSO tropical forcing with Pacific sector climate indicators, such as the ASL strength, sea-ice extent and Antarctic Peninsula temperature, have found that correlations with ENSO are significant for some seasons while not for others, with reversals of the sign of the correlation from season to season in some cases. Changes in the ASL strength, location and extent, which all exhibit large interannual variability (Extended Data Fig. 8), means that each distinct ENSO event yields different responses in individual ice shelves, depending on their location. Our ice-shelf height and mass assessments for the AS sector ( Fig. 4; Extended Data Table 1) reflect the unique environmental conditions of the strong 1997-2000 ENSO period, during which the change in ASL between El Niño and La Niña cycles was exceptionally large. In contrast, the strong 2015-2016 El Niño, characterized by remarkably large changes in tropical sea-surface temperature, did not impact AS ice shelves in the same way (Fig 1a and Fig. 5). This was likely a result of El Niño interaction with a positive SAM during the development phase, a northward displacement of the ASL relative to previous strong El Niños 48 (Extended Data Fig. 8), and absence of a following strong La Niña.
The dominant effect of El Niño on AS ice-shelf mass is likely the increased basal melting associated with onshore flow of CDW and coastal upwelling as westerly wind stress intensifies (Fig. 3a) [12][13][14] . On interannual timescales, this basal mass loss anomaly, relative to the longer-term mass loss trend 9 , is partially offset by increased snowfall. This precipitation increase is consistent with the northerly wind anomaly during El Niño events (Fig. 3a), possibly including increased local moisture uptake from the coastal ocean due to a reduction in regional sea-ice concentration (Fig. 3d).
The reversed pattern of meridional-wind anomalies between the AS and Bellingshausen Sea-Antarctic Peninsula (BS/AP) sector (Extended Data Fig. 2) is consistent with modeled surface mass balance for WAIS (Extended Data Fig. 6) and with satellite-gravity measurements over the AS and AP sectors 22 . Precipitation increases during El Niño in the AS sector while it decreases in the BS/AP region, and vice versa during La Niña. The principal mechanism affecting local winds is the response of the ASL pressure system to ENSO and SAM 17,19,31 , with ASL consistently changing its intensity within El Niño and La Niña years (Extended Data Fig. 8). The rapidly thinning Pine Island Glacier ice shelf showed a weaker response to ENSO (Fig. 5), consistent with the weaker, and in some cases reversed, atmospheric and sea-ice anomalies in Pine Island Bay relative to the broader AS ( Fig. 2 and 3).
We have shown how modes of tropical Pacific variability affect the height and mass of ice shelves in the AS sector of WAIS. The response in height is the combined effect of two opposing processes, both of which intensify during El Niño events: surface snow accumulation and ocean-driven basal melting. The result is an overall height increase but a net mass loss, since the ice lost from the base has a higher density than the fresh snow gained at the surface. The ice-shelf response to ENSO variability is strongest between Dotson and Ross ice shelves, with a weak response in Pine Island Bay, the Bellingshausen Sea, and west of the Ross Sea. Given expected increases in total precipitation 49 and in the frequency of extreme ENSO events 50 as Earth's atmosphere warms, our results imply that interannual variability of ice-shelf height and mass will also increase, stressing the need to quantify surface accumulation relative to basal melting to project future changes in Antarctic ice shelves.

Methods

Ice-shelf height

The processing steps for the conventional altimeters (ERS-1, ERS-2 and Envisat) are fully described by Paolo et al. 9,33. We performed the following processing steps for CryoSat-2, applied to ESA's SARIn L2 Baseline C product over the Antarctic ice shelves: we corrected a 60-meter range offset for data with surface types 'land' or 'closed sea'; removed points with anomalous backscatter (>30 dB) 51; estimated heights with a surface-fit approach modified from McMillan et al. 52, with a variable rather than constant search radius to account for CryoSat-2's heterogeneous spatial sampling; removed height estimates < 2 m above the Eigen-6C4 geoid 53 to account for ice-shelf mask imperfections near the calving front; and applied all the standard corrections to altimeter data over ice shelves (e.g. removed gross outliers and residual heights (with respect to mean topography) > 15 m, ran an iterative three-sigma filter, and minimized the effect of variations in backscatter 9,33).
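As an illustration of the outlier screening mentioned above, the sketch below (not the actual processing code) applies an iterative three-sigma filter to height residuals.

```python
import numpy as np

def iterative_sigma_filter(residuals, n_sigma=3.0, max_iter=10):
    """Return a boolean mask of retained points after iterative sigma editing."""
    keep = np.isfinite(residuals)
    for _ in range(max_iter):
        r = residuals[keep]
        mu, sigma = r.mean(), r.std()
        new_keep = keep & (np.abs(residuals - mu) <= n_sigma * sigma)
        if new_keep.sum() == keep.sum():     # converged: nothing more removed
            break
        keep = new_keep
    return keep

res = np.random.randn(1000)
res[::100] += 20.0                           # inject gross outliers
mask = iterative_sigma_filter(res)
```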
When merging the records, we used only grid cells that were sampled by all four satellites, only accepted time series that overlapped by at least 0.75 years to ensure proper crosscalibration, and removed all merged time series with standard deviation of residuals (with respect to the trend) >1 m. This removed records with, for example: satellite mispointing, anomalous backscatter fluctuations, grounded-ice contamination, high surface slopes and geolocation errors.
Sea-ice concentration
Daily time series of sea-ice concentration, averaged to a 25-km polar stereographic grid, are from the National Snow and Ice Data Center's Daily Polar Gridded Sea Ice Concentrations Version 1 (Near-Real-Time Defense Meteorology Satellite Program Special Sensor Microwave Imager/Sounder) 54 , processed with the NASA Team Sea Ice Algorithm for sea-ice concentration retrieval 55 . We averaged the daily records to monthly anomaly time series, i.e., the anomaly with respect to the climatological average (1979-2017) for each month.
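The monthly-anomaly computation used here (and for the ERA-Interim fields below) can be sketched as follows; the data in the example are stand-ins.

```python
import numpy as np
import pandas as pd

idx = pd.date_range("1979-01-01", "2017-12-01", freq="MS")
monthly = pd.Series(np.random.rand(len(idx)), index=idx)     # stand-in series

# climatological mean for each calendar month (1979-2017), then subtract it
climatology = monthly.groupby(monthly.index.month).mean()
anomaly = monthly - monthly.index.month.map(climatology).values
```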
Climate indices
The Oceanic Niño Index (ONI), which tracks the running three-month mean sea-surface temperatures in the east-central tropical Pacific with limits 5°N-5°S and 120°W-170°W (the Niño 3.4 region), is our primary metric of ENSO. This standardized series is derived from the Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4) 37,56 . We also reproduced the analysis using an alternative metric of ENSO, the Southern Oscillation Index (SOI), a standardized time series of observed sea-level pressure differences between Tahiti and Darwin, Australia. The Southern Annular Mode (SAM) is the station-based index from Marshall 28 , which tracks the zonally-averaged meridional pressure difference between 40°S and 65°S. We obtained the Amundsen Sea Low (ASL) relative central pressure from Hosking et al. 57 . Our 'Best Fit' index is the best fit (in the least-squares sense) of ONI plus ASL 12-month running integrals (lagged by 4-6 months) to the AS-averaged ice-shelf height record. The running integral (or running sum) on both indices ensures consistency between the climate records.
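A minimal sketch of how such a 'Best Fit' index could be computed is given below, assuming monthly pandas Series and a fixed 5-month lag; the actual fit in the paper may differ in detail.

```python
import numpy as np
import pandas as pd

def best_fit_index(dh, oni, asl, lag_months=5):
    """Least-squares fit of lagged 12-month running integrals of ONI and ASL
    central pressure to the ice-shelf height anomaly record dh."""
    oni_int = oni.rolling(12).sum().shift(lag_months)
    asl_int = asl.rolling(12).sum().shift(lag_months)
    df = pd.concat({"dh": dh, "oni": oni_int, "asl": asl_int}, axis=1).dropna()
    X = np.column_stack([df["oni"], df["asl"], np.ones(len(df))])
    coef, *_ = np.linalg.lstsq(X, df["dh"].values, rcond=None)
    return pd.Series(X @ coef, index=df.index, name="best_fit"), coef
```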
Atmospheric variables
Surface air pressure, wind fields, precipitation and temperature are from the ERA-Interim global atmospheric reanalysis 38 for 1979-2017 by the European Centre for Medium-Range Weather Forecasts (ECMWF). These products are provided on a grid with spacing of ~80 km. We converted the monthly averaged records to anomalies with respect to the climatological average (1979-2017) for each month. The ERA-Interim model estimates of ENSO precipitation variability agree well with ice-core records and in-situ radar from the catchments of Thwaites and Pine Island glaciers 24 . It also agrees well on a regional scale with satellite gravimetry observations from the GRACE mission 22 .
Ice-shelf surface mass balance and firn density modeling
Time series of surface mass balance (SMB) over the ice shelves are from a simulation based on output of a regional atmospheric climate model (RACMO2.3 41 ), which is forced by the ERA-Interim reanalysis, has a horizontal grid spacing of 27 km, and accounts for processes such as surface meltwater retention due to refreezing, evaporation, wind drift, and sublimation.
The modeled SMB, surface temperatures, and 10-m wind speeds have been used to force a transient run of a one-dimensional time dependent Firn Densification Model (IMAU-FDM) that models the vertical profile of snow and firn density at each location 58 . The product was evaluated by comparing the temporal mean of firn density against 750 in-situ measurements continent-wide (r = 0.88 40 ), and comparisons with radar-derived accumulation 24 .
Correlation analysis
We identified correlations between ice-shelf height changes and climate forcing by removing the long-term trend from each height record using locally weighted regression (LOWESS) 34 , with a smoothing span equal to 1/3 of the record (~7.7 years, a compromise between retaining decadal fluctuations in the trend and excluding the shorter ENSO time scales), and then computing 12-month running means (hereafter referred to as 'height anomalies', δh(t)).
For the wind components and climate indices we computed 12-month running integrals (the sum of the preceding 12 months). We correlated δh(t) with the time integrals of forcing anomalies represented by ERA-Interim wind fields and climate indices. This approach is analogous to correlating dh/dt with time-dependent anomalies and indices, but avoids the high-frequency noise amplification due to explicit differentiation of δh(t). We tested correlations with and without lags between the ice-shelf height anomalies and the climate records. We constructed confidence intervals for each correlation coefficient, taking into account temporal autocorrelation effects, using the nonparametric stationary bootstrap 35 with an average block length proportional to the maximum estimated persistence time (τ) of the pair of time series being analyzed 36,59, where τ is estimated from the autocorrelation coefficient ρ and the time step dt.
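The steps above can be sketched as follows. The code is illustrative only: it assumes quarterly height samples, an AR(1) e-folding time as the persistence estimate, and a basic stationary bootstrap; it is not the analysis code used for the paper.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def anomalies(t, h, frac=1/3, window=4):           # window = 4 quarters ~ 12 months
    trend = lowess(h, t, frac=frac, return_sorted=False)
    resid = h - trend
    return np.convolve(resid, np.ones(window) / window, mode="same")

def persistence_time(x, dt):
    rho = np.corrcoef(x[:-1], x[1:])[0, 1]
    return -dt / np.log(max(rho, 1e-6))             # assumed AR(1) e-folding time

def stationary_bootstrap_ci(x, y, dt, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    mean_block = max(persistence_time(x, dt), persistence_time(y, dt)) / dt
    p = 1.0 / max(mean_block, 1.0)                  # geometric block lengths
    rs = np.empty(n_boot)
    for b in range(n_boot):
        idx, i = [], rng.integers(n)
        while len(idx) < n:
            idx.append(i)
            i = rng.integers(n) if rng.random() < p else (i + 1) % n
        idx = np.array(idx)
        rs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return np.quantile(rs, [alpha / 2, 1 - alpha / 2])
```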
Spectral analysis
We identified common oscillatory behavior in the AS sector by decomposing the set of 77 height-anomaly time series from the AS (Fig. 1) into orthogonal components, using multivariate Singular Spectrum Analysis (SSA) 44,45,60,61 . In SSA, pairs of eigenvectors (EOFs) with similar periodicity represent a phase-and-amplitude modulated temporal oscillation (e.g. EOFs 1 and 2 in Extended Data Fig. 5). Our spectral decomposition by SSA also included statistical tests to discriminate between potential oscillations and "white" (independent and identically distributed) noise through Monte Carlo simulations 46 . Free software for implementation of SSA, among other spectral techniques, is provided by the SSA-MTM Toolkit at http://www.atmos.ucla.edu/tcd/ssa.
We removed the long-term trend from each ice-shelf time series (without smoothing) and normalized each record to unit variance prior to applying SSA 44,62. Reported results refer to an SSA window length of 6 years (window lengths up to 9 years gave similar results). The SSA identified two significant modes of variability in ice-shelf height: 4-5-year time scales and the annual cycle (Extended Data Fig. 5).
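A univariate SSA step can be sketched as below (illustrative only); the multivariate SSA used for the 77 records stacks the series but follows the same embed, decompose, and reconstruct logic.

```python
import numpy as np

def ssa_reconstruct(x, M, components):
    """Reconstruct the part of series x associated with the chosen EOFs."""
    N = x.size
    K = N - M + 1
    X = np.column_stack([x[i:i + M] for i in range(K)])   # M x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xc = sum(s[k] * np.outer(U[:, k], Vt[k]) for k in components)
    rec = np.zeros(N)                                     # diagonal averaging
    counts = np.zeros(N)
    for i in range(M):
        for j in range(K):
            rec[i + j] += Xc[i, j]
            counts[i + j] += 1
    return rec / counts

t = np.arange(92)                                         # ~23 years, quarterly
x = np.sin(2 * np.pi * t / 18) + 0.3 * np.random.randn(92)  # ~4.5-yr cycle + noise
osc = ssa_reconstruct(x, M=24, components=[0, 1])           # 6-yr window, leading EOF pair
```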
Similarity index
We examined the spatial pattern of ENSO influence on the Antarctic Pacific-sector ice shelves by evaluating the similarity of each ice-shelf-averaged height anomaly record to the ONI. We define the similarity index (analogous to a correlation) as a distance metric between x_i, the ice-shelf height anomaly record, and y_i, the ONI, with both series normalized to unit variance. For graphical clarity, we normalized the SI values to a range of 0 to 1.
Equivalent height and mass calculations
For mass calculations, we estimated changes in ice-shelf height due to: variations in sea-level atmospheric pressure (Δh_IBE) from ERA-Interim pressure fields, assuming a surface-height change of −0.01 m per +1 hPa change in pressure 39; variations in surface mass balance (Δh_SMB) from the RACMO2.3 model output 41 (Fig. 4) and assumed surface densities (see next paragraph); and variations in snowfall from ERA-Interim precipitation directly (Extended Data Fig. 3), which is natively provided in the product as meters of liquid-water equivalent per day. For each model, we converted surface mass-balance and precipitation values (anomalies with respect to the 1979-2015 and 1979-2017 means, respectively) to buoyancy-compensated snow/firn height, Δh_SMB, using surface snow/firn density values randomly sampled from a Normal distribution with mean and sigma equal to 495 and 25 kg/m3, respectively (see 'Uncertainty assessment' below), and an ocean-water density of 1028 kg/m3.
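The height-budget bookkeeping described here reduces to a few lines; the sketch below uses the stated densities and inverse-barometer coefficient, with all input values hypothetical.

```python
RHO_WATER = 1028.0            # ocean water density, kg m^-3
IBE = -0.01                   # m of height change per +1 hPa of pressure

def dh_smb(smb_anomaly_kg_m2, rho_firn=495.0):
    """Buoyancy-compensated freeboard change from an SMB anomaly (kg m^-2)."""
    thickness = smb_anomaly_kg_m2 / rho_firn            # firn thickness added
    return thickness * (1.0 - rho_firn / RHO_WATER)     # freeboard change (m)

def dh_ibe(pressure_anomaly_hpa):
    return IBE * pressure_anomaly_hpa

def dh_bmb_residual(dh_obs, pressure_anomaly_hpa, smb_anomaly_kg_m2, rho_firn=495.0):
    # Eq. (1) solved for the basal term, with dh_dyn assumed zero
    return dh_obs - dh_ibe(pressure_anomaly_hpa) - dh_smb(smb_anomaly_kg_m2, rho_firn)

# e.g. a 200 kg m^-2 accumulation anomaly at 495 kg m^-3 raises the freeboard
# by about 0.21 m; the remainder of an observed height change is attributed to
# basal mass balance (plus the error terms discussed below).
```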
We chose the range of surface density values based on the limited information available from these ice shelves, for which direct surface measurements are scarce. Results from a semi-empirical FDM suggest that surface densities along the WAIS coast are higher than in the interior; some of the highest values in Antarctica are found along the AS coast ( Fig. 7 in Ligtenberg et al.) 58 . These high densities are due to a combination of relatively high temperatures, strong winds and elevated precipitation rates. Extended Data Fig. 4 shows density profiles from 0 to 1 m depth over Pine Island Glacier as represented by the FDM. Modeled density ranges from ~480 kg/m 3 at zero depth to ~510 kg/m 3 around 1-m depth. Our range of likely surface (firn) densities (495 ± 25 kg/m 3 ) for the AS ice shelves, extends ± 10 kg/m 3 beyond the model-derived range.
Further justification for our choice of densities comes from the fact that radar altimeters are relatively insensitive to fluctuations in low-density surface mass balance. Part of the radar signal penetrates through fresh snow, and changes in surface density (due, for example, to seasonal variations in radiation and air temperature) can affect the radar backscatter and derived height measurement 33 . We found no correlation between changes in height and changes in backscatter at interannual time scales (12-month running means) for the AS ice shelves, suggesting that interannual estimates of surface height are not affected by changes in surface properties. This is not the case at seasonal time scales where correlations were significant 33 . The layer being tracked by the RA as the "surface" must be sufficiently compacted to limit radar penetration and backscatter-height correlated changes. This result implies that height changes driven by surface accumulation may be higher than our RA measurements show, which would lead to a more important role of SMB relative to BMB in the height variability.
Phase lag between height changes and climate drivers
Height changes lag the climate drivers by about 4-6 months, based on correlation estimates for a range of lags (with maximum correlations at 4 to 6 months). We suggest three potential causes of this phase lag. First, the oceanic ENSO signal in BMB will be affected by the lagged and filtered dynamic response of the coastal ocean to the atmospheric forcing, caused by advection time scales over the continental shelf, ocean mixing, and the response to anomalies in surface-buoyancy forcing as sea-ice volume changes. A typical advection time scale is given by T = L/U, where L is a characteristic length scale for the continental shelf (~500 km) and U is a typical mean advective velocity of CDW (~5 cm/s) 63, giving T ~ 4 months. Second, the RA measurement of ice-shelf height is an integration of backscatter within the surface snow and firn layer, with a time lag induced by the radar signal's sensitivity to firn compaction (the development of a strong subsurface reflector) that takes place over some interval after the precipitation signal. Third, the peak signals of interannual climate modes including ENSO and SAM tend to be short-lived (roughly one year for large El Niño events; Fig. 1), so that the effect of each event on an ice shelf depends on its phasing relative to the strong annual cycles of atmospheric, oceanic and sea-ice conditions. We accounted for the measured phase delays by calculating the mass budget with Δh_OBS lagged by 6 months relative to Δh_SMB in Eq. (1).
Estimates of mass change between EN1997 and LN1999 for individual ice shelves are presented in Extended Data Table 1. The estimates for basal mass change are sensitive to the choice of surface density. Our estimated change in melt rate between El Niño and La Niña for Getz Ice Shelf is between 3.1 and 3.5 m/yr (Extended Data Table 1). These values are consistent with the magnitude of observed variability in Getz's melt rate, with interannual changes on the order of 3 m/yr reported by Jacobs et al. 15 .
Uncertainty assessment
We assessed the sensitivity of our results to measurement uncertainty using a Monte Carlo (MC) approach. In each realization, we scaled each spatial field used in Eq. (1) by a factor drawn from a Normal distribution with sigma equal to the field's prescribed error, and we drew surface density values from a Normal distribution with mean and sigma equal to 495 and 25 kg/m3, respectively, as justified above. Our reported values with uncertainties are the mean and standard deviation of 2000 MC realizations. This approach directly quantifies the impact of the prescribed errors in the input fields on the results, propagating these errors according to the relationships between the variables.
On interannual timescales, the main components of the uncertainty in Δh_OBS are: the statistical error (e.g. instrument precision, imperfect geophysical corrections, and improper range estimation from the return radar waveform); the surface compaction rate (changes in firn-air content); and the neglect of the dynamic term (Δh_DYN) in Eq. (1).
Since the spatial field used for Δh OBS in Eq. (1) is the result of averaging tens-to-hundreds of crossover values (height differences) per ~30×30 km grid cell 9,33 , and we further average height-change estimates over 12-month time intervals, the statistical error on each grid cell is small (<10%) 33 .
Ku-band (~13.5 GHz) radar altimeters mostly penetrate the top layer of fresh snow. Radar measurements are, therefore, less sensitive to compaction rates than laser-derived measurements, which track the air-firn interface. Using laser-altimeter observations, Moholdt et al. 64 estimated compaction rates across Ross and Filchner-Ronne ice shelves on the order of −9 ± 10 mm/yr and 23 ± 10 mm/yr, respectively. If we assume that all of the compaction rate (an upper bound for uncertainty due to compaction) translates to the radar-measured height change, the largest of these values is less than 8% of the EN1997-LN1999 height change.
Accurate estimation of Δh DYN in Eq. (1), as applied to the transition from EN1997 to LN1999, would require annual velocity changes for the 1997-1999 period over the entire AS ice-shelf area; such data do not exist. We can, however, estimate the order of magnitude of the height change that would result if all the measured ice-flux increase in the AS 65 went into dynamic thickening (i.e., an extreme upper bound), using: Δh DYN = (ΔQ/Δt) · δt · (1 − ρ i /ρ w ) / (A · ρ i ), where ΔQ/Δt is the change in flux, A is ice-shelf area, ρ i and ρ w are the densities of solid ice (917 kg/m 3 ) and sea water (1028 kg/m 3 ), respectively, and δt is the time interval in question (1.3 years). The only estimate of ice-flux anomalies (Q) around the EN1997-LN1999 period comes from Mouginot et al. 65 , which provides Q for the glaciers that feed Pine Island (PIG), Thwaites, Crosson, and Dotson ice shelves at 1996.1 and 2001.0. Using values for PIG (+8 Gt over 4.9 years, and an ice-shelf area of ~6000 km 2 ), we obtain Δh DYN ≈ 0.042 m, which is ~13.5% of the EN1997-LN1999 measured height change if we assume that all ice shelves in the AS sector have accelerated coherently at the same rate as PIG (which is clearly not the case 65 ). Using PIG, Thwaites, Crosson and Dotson combined (+10 Gt over 4.9 years, and an area of ~18,700 km 2 ), we obtain Δh DYN ≈ 0.017 m, which is ~5% of the EN1997-LN1999 measured height change.
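The short calculation below (Python) reproduces the two order-of-magnitude estimates quoted above from the stated flux changes, areas and densities; it is only a check of the arithmetic, not part of the original analysis.

```python
RHO_I, RHO_W = 917.0, 1028.0   # densities of solid ice and sea water (kg/m^3)
DT = 1.3                        # time interval between EN1997 and LN1999 (years)

def dh_dyn(flux_change_gt, years, area_km2):
    """Upper-bound dynamic height change (m) if all the flux increase thickens the shelf."""
    dq_dt = flux_change_gt * 1e12 / years                  # change in flux, kg per year
    thickening = dq_dt * DT / (area_km2 * 1e6 * RHO_I)     # ice-thickness change (m)
    return thickening * (1.0 - RHO_I / RHO_W)              # freeboard (height) change (m)

print(f"PIG alone:        {dh_dyn(8.0, 4.9, 6_000):.3f} m")    # ~0.042 m
print(f"PIG+Thw+Cro+Dot:  {dh_dyn(10.0, 4.9, 18_700):.3f} m")  # ~0.017 m
```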
While variability in precipitation is well represented in reanalysis products and surface-mass-balance models, its magnitude has substantial uncertainty 24,41 , especially at the low elevations of ice shelves where surface mass fluxes are much larger than at higher altitudes of the ice-sheet interior. For the AS Embayment, previous studies 66, 67 have attributed a 15% uncertainty to the modelled surface mass balance (based on field measurements 68 ). To ensure that we captured all these uncertainties in our error assessment, we assumed large (upper bound) errors, and derived the (one-sigma) ranges used in our MC calculation as: Δh OBS ± 30%, Δh SMB ± 25%, Δh IBE ± 15%, and ρ firn ≈ 495 ± 25 kg/m 3 .
We note that if accumulation values were higher, or if surface density was higher than the values we used based on a range of modeled densities, then precipitation-driven change in mass would be more important relative to melt-driven change than we report here. If we assumed a wider range of surface densities (for the "compacted" firn layer tracked by the RA), our estimated range for ENSO-driven change in basal melt would be wider. However, for all reasonable ranges that we have tested (range of firn densities up to 420-550 kg/m 3 ), the fundamental conclusions of this study remain the same.
Complexity of oceanic and atmospheric forcing interaction
The changes in ice-shelf height and mass at interannual time scales arise from the combination of direct atmospheric terms, predominantly precipitation, and oceanic forcing affecting basal melt rates. These terms can be closely coupled through the ocean circulation's response to wind stress and buoyancy forcing, both of which are also correlated with precipitation. However, ocean response may be delayed and filtered through advection (e.g. the time taken for CDW at the continental shelf edge to reach an ice shelf) and mixing. While our results specifically for ENSO indicate a clear and dynamically consistent relationship between SMB and BMB variability, this relationship will not necessarily hold for oscillations associated with other climate phenomena such as the Southern Annular Mode (SAM) and the Zonal Wave Three mode (ZW3). Our results from a relatively short (compared to typical ice-sheet and climate timescales of variability) 23-year observational window, only apply to time periods in which ENSO dominates.
Data availability
The red line is the best fit between the height record and a combination of (12-month running integral) Oceanic Niño Index (ONI) and Amundsen Sea Low relative central pressure (ASL), both lagged by 4-6 months (preceding height); b, ONI; colored areas denote moderate-to-very-strong El Niños (red) and La Niñas (blue) as defined by NOAA (http://ggweather.com/enso/oni.htm).
Figure 2. Spatial pattern of correlation between wind and ice-shelf height anomalies
Correlation between 12-month running means of ice-shelf height anomalies in the AS (ice shelves in dark red) for the period 1994-2017 and 12-month running integrals of wind anomalies from ERA-Interim reanalysis for: a, the zonal (westerly) component; b, the meridional (southerly) component. Wind anomalies (preceding ice-shelf height changes) were lagged by 4-6 months. The arrows indicate the direction of the wind anomaly that correlates with increase in ice-shelf height (e.g. in panel b, negative correlation between meridional wind and ice-shelf height means that onshore wind correlates with increase in height).
Annual-average anomalies in local oceanic, atmospheric and ice-shelf conditions for 1997-1998 (left) and 1999-2000 (right). From top to bottom: a, wind velocity vectors, and vertical transport of coastal waters (upwelling/downwelling) derived from the wind-stress curl; b, air temperature; c, precipitation rate; d, observed sea-ice concentration, and ice-shelf height anomaly (six months after peak ENSO activity; see Methods). Fields in panels a-c are from ERA-Interim reanalysis. Surface mass balance and sea-level pressure during these conditions are shown in Extended Data Figs. 6, 7 and 8.
a, Altimetry-derived ice-shelf height changes (La Niña minus El Niño); b, height change due to atmospheric pressure, derived from ERA-Interim (pressure decrease: height increase); c, surface mass balance from RACMO2.3; for visualization purposes, surface mass balance was converted to buoyancy-compensated height-equivalent using surface density of 490 kg/m 3 (accumulation decrease: height decrease); d, basal mass change inferred from the previous three: d = a - b - c (melting decrease: height increase).
Correlation between Amundsen Sea ice shelves and climate indices
Correlation of interannual anomalies in ice-shelf height (Fig. 1) with the Oceanic Niño Index (ONI), Southern Annular Mode (SAM), Amundsen Sea Low relative central pressure (ASL), and a linear combination of ONI and ASL: δh fit = a ONI + b ASL ('Best Fit'; red curve in Fig. 1). ONI, SAM and ASL are 12-month running integrals (Methods). Values in parentheses are correlations with ONI and ASL lagged by 4-6 months (preceding ice-shelf height). | 2018-04-03T05:30:12.599Z | 2017-11-27T00:00:00.000 | {
"year": 2017,
"sha1": "15ec3adc8ae6ae326b1be5e3e41ea232714fc2e0",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc5758867?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ebe35b9a47493f21d40745970c77eccfbaa7c26d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology",
"Medicine"
]
} |
136450479 | pes2o/s2orc | v3-fos-license | Use of Polyurethane Insulated Panel for Heat Infiltration in Refrigerated Vehicles
This study investigates the overall heat transfer coefficient of a reinforced composite panel intended for use as the internal and external cover sheets of an insulated panel. The results indicate that the composite reinforced with 10 wt.% of fibre at 0° orientation (G10E) offers the lowest overall heat transfer coefficient, U, of 0.386950 W/m 2 K and 0.196680 W/m 2 K for 50 mm and 100 mm insulation thicknesses, respectively. This further shows its potential for use as the internal and external cover of insulation materials, which may prolong the ageing period of the insulation and also the shelf life of fresh raw fruits. © JASEM
Climate change and the emerging environmental conditions have raised global emissions to an unprecedented scenario, considering their contribution to environmental problems: acid precipitation, ozone depletion, and the greenhouse gas effect (Çomaklı and Yüksel, 2004). The effect of climate change seems to have permeated across continents, and this has raised a lot of concerns as the earth's temperature continues to rise with no specific solution in sight. These effects have resulted in the spreading of deserts, a rise in sea level and, most importantly, biodiversity loss. Fossil fuel energy has been identified (Bakos, 2000) as the most significant contributing factor to ozone depletion, and the demand for this conventional energy is predicted to rise as modern-day technology still depends largely on fossil fuel. A significant quantity of fossil fuel is used to maintain and transport fresh raw fruits in refrigerated vehicles and, as countries develop their cold chains, this will result in more energy demand (James and James, 2010). The refrigerated vehicle is important as it serves to prolong the safety and quality of harvested raw fruits, enabling food to be conveyed to increasingly urbanised cities (Tassou et al., 2009). Recently, there have been quite a number of literature reports (Browne et al., 2005, Luè et al., 2014, Balat, 2006) on the role of climate change in "access to food" and "food security", and key concerns have dwelled largely on increasing fossil fuel consumption and its impact on the environment. The depletion of the ozone layer has raised the earth's surface temperature by about 0.6 °C over the last two decades, with the prediction of attaining 2 °C as affirmed in the last Copenhagen conference on climate change, and, as a result, the sea level is reported to have risen by almost 20 cm in the last decades (García-Álvarez et al., 2013).
List of Symbols
L1 = L3: thickness of the cover sheets
K: thermal conductivity, W/(m K)
L2: thickness of the insulation
S: mean surface area (the geometric mean of the external and internal surfaces), m 2
T: temperature
Q: heat flux, W
U: overall heat transfer coefficient, W/m 2 K
h1 = h2: individual heat transfer coefficients, W/m 2 K
CO 2 emission from refrigerated vehicles, for instance, is released into the environment causing further changes in the atmosphere, with carbon reacting with oxygen resulting in greenhouse gas, depletion of natural forest and biodiversity loss. Emission of CO 2 has increased greatly since the industrial revolution from 1950 and has risen at a rate of 0.4% per year in the last two decades, with a projected lifetime of about 100 years in the atmosphere. This volume of CO 2 remains in the atmosphere depleting nature, enhancing the intensity of acid rain and thereby causing environmental disaster (Burniaux and Chateau, 2014). The majority of refrigerated vehicles are constructed with semi-trailer insulated rigid boxes, and presently there are about 4 million refrigerated vehicles worldwide, with the forecast that by 2030 the global road refrigerated vehicle fleet will have grown by nearly 2.5% per year (Glouannec et al., 2014). In order to minimise energy demand in refrigerated vehicles and its resulting environmental impact, it would be of great interest to improve the insulation design (Gvozdenac, 2015). Insulated panels of vehicles transporting perishable raw fruits are normally defined by the value of the overall coefficient of heat transfer (k-value) and, by extension, body porosity. This overall coefficient of heat transfer predicts the rate of transfer of heat relative to the unit surface of the external walls, when the panel is subjected to steady conditions and a unit difference of temperature exists between the internal and external atmospheres (Gvozdenac, 2015). Thus, the overall heat transfer coefficient is defined by equation 1, as contained in the mathematical expression developed by Dragano et al. (2009): U = Q / (S·ΔT) (1). The solar irradiation through the body wall, with polyurethane used as the insulating material, will degrade the optimum performance of the refrigerated chamber due to the high thermal load on the refrigerating unit. In view of the low thermal conductivity value (k) of polyurethane, the interaction with the body wall as a result of temperature rise causes infrared radiation, through conduction, into the polyurethane, thereby reducing its overall performance efficiency. In this study, attempts were made to replace the external metallic wall of the insulated panel with composite materials, adopting the literature values of an already developed composite sheet in order to ascertain the effectiveness of a lightweight composite polymer material as a potential substitute for a metallic reinforced sheet.
MATERIAL AND METHOD
In this study, lightweight composite materials were fabricated from epoxy resin and E-glass fibre, and the thrust of the research is to adapt these composite materials as the external wall of refrigerated vehicles. Part of the concept of this study is to analyse the overall heat transfer coefficient of each of the composite materials developed for thermal insulation. Five (5) specimen composite panels were developed, and the manufacturing parameters were based on fibre enhancement and orientation. Successive fibres were oriented with respect to the base fibre at 30° and 60° orientations, basically to study the impact of orientation on overall heat transfer reduction. One of the key issues that remain unresolved in this manufacturing process is the percentage of fibre content that is predicted to reach an optimum thermal conductivity value. Measurement of the thermal conductivity of these composite specimens was done with a Leinsis TB 100 instrument, as illustrated in the graph presented in Figure 1, and this was later followed by the estimation of the overall heat transfer coefficient of these composite panels. It is also important to note that the insulated body of refrigerated vehicles is appraised, as per heat transfer through the external wall, by the overall coefficient of heat transfer (U-value).
RESULTS AND DISCUSSION
The computation of the overall heat transfer coefficient is quite important, especially for the lightweight composite materials, in order to analyse their potential to limit heat infiltration when compared with aluminium sheet and iron metal. The thickness of the insulation is also examined in this study as regards its effectiveness in maintaining an optimal cold chain. Insulation and low-thermal-conductivity composite materials have been widely reported to reduce air infiltration and to maintain cooling in refrigerated chambers, especially in severe climates. In line with this concept, Table 1 provides the literature data that were adopted in the analysis of the overall heat transfer coefficient, as exemplified in equation 2, which has also been used in many studies (Adekomaya et al., 2017, Mohsen and Akash, 2001, Mahlia et al., 2007) to estimate the overall heat transfer coefficient. Figure 2 shows the plots of the coefficient of heat transfer of all the composite sheets at various insulation thicknesses in the range of 50 mm to 100 mm, and it can be seen from this plot that no significant difference is noticed among the panels regardless of fibre loading and orientation. The implication of this may inform the use of greater insulation thickness to reduce heat infiltration through the external wall. It can also be noticed that G10E appears to have the lowest heat transfer coefficient, as depicted in the plot; further interpretation of this result indicates that successive fibre loading at any degree of orientation may not necessarily reduce heat transfer through the external wall. The plot shown in Figure 3 has further given credence to past works, as it is clear that using aluminium material as the cover sheet for a heavily insulated panel may not necessarily lower the overall heat transfer coefficient, as the percentage of heat infiltration between these regions is insignificant. The new composite materials developed exhibited a promising outlook in line with the low thermal conductivity estimate. It is also important to note that the heat transfer coefficient of the new materials slightly improves when compared to iron and metallic sheets. A significant difference in the heat transfer coefficient is not noticed in these new materials when compared with the metallic sheet, but the overall heat rejection using these new materials as the external wall will culminate in energy savings in refrigerated vehicles. Further analysis of this result showed that with 50 mm as the insulating thickness and using G10E as the external cover sheet, there is likely to be a 0.6% heat reduction over aluminium sheet, which, to a large extent, would save significant energy in refrigerated vehicles.
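A minimal sketch of this type of U-value calculation is given below (Python). It assumes the standard series thermal-resistance form for a three-layer panel (two cover sheets plus insulation core) with inner and outer surface heat transfer coefficients; the specific thermal conductivities, thicknesses and surface coefficients used here are illustrative assumptions, not the measured values from Table 1.

```python
def overall_u(k_sheet, t_sheet, k_ins, t_ins, h_in=8.0, h_out=25.0):
    """Overall heat transfer coefficient (W/m^2 K) of a sheet/insulation/sheet panel,
    assuming one-dimensional series thermal resistances."""
    resistance = (1.0 / h_in            # inner surface film resistance
                  + t_sheet / k_sheet   # inner cover sheet
                  + t_ins / k_ins       # insulation core
                  + t_sheet / k_sheet   # outer cover sheet
                  + 1.0 / h_out)        # outer surface film resistance
    return 1.0 / resistance

# Illustrative values: composite cover sheet vs aluminium, polyurethane core
k_composite, k_aluminium = 0.35, 205.0   # W/(m K), assumed
k_pu, t_sheet = 0.025, 0.003             # polyurethane core; 3 mm cover sheets (assumed)

for t_ins in (0.050, 0.100):             # 50 mm and 100 mm insulation
    u_comp = overall_u(k_composite, t_sheet, k_pu, t_ins)
    u_alu = overall_u(k_aluminium, t_sheet, k_pu, t_ins)
    print(f"t_ins={t_ins*1000:.0f} mm: U(composite)={u_comp:.3f}, U(aluminium)={u_alu:.3f} W/m^2K")
```

As in the paper, the insulation term dominates the total resistance, which is why the choice of cover sheet changes U only marginally.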
Conclusion: This experimental study has clearly shown the magnitude of the overall heat transfer coefficients of composite panels developed on the basis of fibre loading and orientation. It is also important to note that while the difference between the metallic reinforced panel and the composite panels developed seems small, as demonstrated in the plots, further study may be required to achieve lower overall heat transfer coefficients. Although the application of nano-particles in composite formation appears to have improved mechanical properties, as reported in many studies, this study suggests the future introduction of nano-materials as reinforcement in these composites for even lower overall heat transfer coefficient values. | 2019-04-29T13:17:48.758Z | 2017-08-18T00:00:00.000 | {
"year": 2017,
"sha1": "0625cffed3e7be86e33206aca955c0df8f991fb0",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/jasem/article/download/160204/149787",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "948694f5e73982d949d4ea995dbdbe555812c194",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
211090023 | pes2o/s2orc | v3-fos-license | Hydroelastic Response of an End Wall Interacting with a Vibrating Stamp via a Viscous Liquid Layer
The mathematical modelling of the interaction of a vibrating stamp with a flexibly restrained end wall (bellows) of a narrow channel via a viscous incompressible fluid is carried out. The narrow channel formed by two parallel walls and filled with a viscous liquid is investigated. The liquid motion in the channel is considered to be laminar. The bottom channel wall is absolutely rigid, and the top one is the absolutely rigid vibrating stamp. At the right edge of the channel, the end wall with flexible restraint (bellows) is installed, and at the left one, there is a free liquid flow. The problem of longitudinal hydroelastic vibrations of the channel end wall is formulated and solved analytically. The distribution laws of velocities and pressure in the fluid layer along the channel are found. The motion law of the channel end wall is determined. The amplitude-frequency and phase-frequency responses of the end wall for steady-state harmonic oscillations are constructed. The mathematical modelling has shown the possibility of damping the channel end wall (bellows) vibrations by changing the distance between the channel walls or by changing the liquid viscosity in the channel.
Introduction
Nowadays, investigation of the dynamic interaction of a liquid with flexible structural components is a topical problem for modern mechanical engineering, technology and production processes [1]. One of the first works in this field is [2], in which the free oscillations of a circular plate interacting with an ideal liquid were considered. A hydroelasticity model of a beam filled with an ideal fluid for the study of pipeline transverse vibration was considered in [3]. In reference [4], a hydroelastic oscillations model of a cylinder liner of an internal combustion engine with water cooling was proposed. The nonlinear behaviour of an elastic pipe conveying fluid and supported at both ends was investigated in [5]. In reference [6], the analogous problem was studied for the case when the pipe has flexibly supported ends. The dynamic behaviour of an annular channel conveying ideal fluid and formed by two cylindrical shells, for the case when it is subjected to external gas flow, was considered in [7]. In reference [8], the hydroelastic vibrations of the walls of an annular channel filled with pulsating viscous incompressible fluid were considered. The study of the outer wall oscillations of an annular channel filled with a viscous liquid under inner channel wall vibration, for the case when this channel is surrounded by an elastic medium, was carried out in [9]. Oscillations of a circular plate surrounded by an ideal fluid located in a rigid cylinder were studied in [10]. The dynamics and stability of a plate, which is part of the wall separating two viscous liquids, were considered in [11]. The vibration damping of a beam lying on a viscous liquid layer was studied in [12]. The bending vibrations problem of a beam interacting with a viscous liquid was considered in [13]. A similar problem for a piezoelectric beam in a viscous incompressible flow was considered in [14]. In reference [15], a mathematical model was developed for the study of plate streamwise vibrations in a parallel-walled channel conveying a viscous fluid due to forced transverse vibrations of this plate. Longitudinal and transverse vibrations of a flexibly restrained wall of a narrow tapered channel with a viscous fluid were studied in [16,17]. The influence of the presence of an end seal and of the end-discharge features of the viscous fluid on the disturbing moments in a float gyroscope was simulated in [18]. However, a parallel-walled channel possessing a flexibly restrained end wall (bellows) was not considered in the above-mentioned papers.
Statement of the Problem
Let us consider a narrow channel formed by two rigid parallel walls in accordance with the scheme shown in Fig. 1. We associate the center of the Cartesian coordinate system with the bottom motionless wall of the channel. The upper channel wall is a stamp oscillating along the z-axis. The sizes in the plan view of the bottom and upper channel walls are 2ℓ×b; let us assume that b >> 2ℓ and consider the plane problem. The distance between the channel walls is δ 0 and 2ℓ >> δ 0 . The channel is filled with a viscous incompressible fluid. The oscillation amplitude of the stamp is z m << δ 0 . At the channel left end, the viscous fluid discharges into the same fluid possessing constant pressure p 0 . At the right one, there is an end wall with flexible restraint (bellows), i.e. this end wall can move along the x-axis with the amplitude of x m . Further, we will take into account that the pressure in the cross section at the channel left end coincides with the constant one p 0 . At the channel right end, the volume flow rate coincides with the one due to the end wall vibration (volume flow rate into the bellows). According to [19], for the plane problem, the motion equations of a viscous fluid are ∂V_x/∂t + V_x ∂V_x/∂x + V_z ∂V_x/∂z = −(1/ρ) ∂p/∂x + ν(∂²V_x/∂x² + ∂²V_x/∂z²), ∂V_z/∂t + V_x ∂V_z/∂x + V_z ∂V_z/∂z = −(1/ρ) ∂p/∂z + ν(∂²V_z/∂x² + ∂²V_z/∂z²), ∂V_x/∂x + ∂V_z/∂z = 0, (1) where V_x , V_z are the liquid velocity components along the coordinate axes, p is the pressure, ρ is the liquid density, ν is the kinematic viscosity coefficient of the liquid, and t is the time.
The fluid motion equations (1) are complemented by boundary conditions, i.e. no-slip conditions at the bottom and upper channel walls. In addition, we formulate the conditions at the ends of the channel. These conditions represent the coincidence of the pressure at the channel left end with the pressure of the surrounding liquid, as well as the equality of the volumetric flow rates at the channel right end and in the right cavity, i.e. the equality of the volumetric flow rates at the channel end and in the bellows.
Determining the End Wall Response
Let us introduce dimensionless variables and small parameters of the problem , , , where m is the end wall mass, n is the stiffness coefficient of the end wall flexible restraint. Solving (5), (6) using the iteration method we find that According to (8) the pressure in the channel cross section at the right end is Taking into account (9) Eq. (7) we write as Note that equation (10) is true both for an arbitrary law of stamp vibration and for a harmonic one. Next, we consider the harmonic law of stamp vibration, i.e. Using this solution, we construct the amplitude and phase responses of the end wall, i.e. we write (11) in the form
Calculation results
We carried out calculations of the channel end wall amplitude responses using the developed mathematical model. The following data were used in the simulation: ℓ = 0.1 m, δ 0 /ℓ = 1/10, b/ℓ = 5, m = 0.5 kg, n = 300 kg/sec 2 . Two liquids with different physical properties are considered. The first one is hydraulic oil (AMG-10) with the following parameters: ρ = 840 kg/m 3 , ν = 2·10 -5 m 2 /sec, and the second one is water, for which ρ = 1000 kg/m 3 , ν = 10 -6 m 2 /sec. In the course of modelling, the dimensionless frequency η was introduced as the ratio of the current frequency ω to the eigenfrequency of the system without damping. Thus, the dimensionless amplitude responses A(η) were considered. The calculation results of A(η) for hydraulic oil (AMG-10) with decreasing dimensionless channel width δ 0 /ℓ are shown in Fig. 2. Similar calculations of A(η) for water are shown in Fig. 3.
The calculation results showed the occurrence of significant vibration amplitudes of the channel end wall near the eigenfrequencies of the system without damping. In addition, the possibility of damping these oscillations due to decreasing the liquid layer thickness or increasing the kinematic viscosity is shown.
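As a rough illustration of how such amplitude-frequency curves behave, the sketch below (Python) evaluates the normalized response of a single-degree-of-freedom oscillator with viscous damping. This is only a generic stand-in for the paper's response function, not the derived hydroelastic model; the damping ratios are assumed values meant to mimic the effect of decreasing the gap δ 0 or increasing the viscosity.

```python
import numpy as np

def amplitude_response(eta, zeta):
    """Normalized amplitude of a damped single-degree-of-freedom oscillator."""
    return 1.0 / np.sqrt((1.0 - eta**2) ** 2 + (2.0 * zeta * eta) ** 2)

eta = np.linspace(0.0, 2.0, 201)
# assumed damping ratios: stronger damping plays the role of a thinner gap / higher viscosity
for zeta in (0.05, 0.15, 0.40):
    peak = amplitude_response(eta, zeta).max()
    print(f"zeta = {zeta:.2f}: peak amplitude ~ {peak:.1f} near eta ~ 1")
```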
Summary and Conclusion
Thus, the mathematical model has been developed for the study of longitudinal vibrations of the end wall with flexible restraint (bellows) due to vibrations of the stamp (the channel upper wall). The elaborated mathematical model can be used both for the harmonic law of stamp vibration and for arbitrary one. In the latter case, if the stamp motion law is given non-linearly, the motion equation of the end wall with flexible restraint must be solved by appropriate methods, for example, numerically. The obtained results can be used in practice to predict the resonant frequencies of bellows vibrations and calculate the amplitudes of bellows vibrations at resonance, as well as to study the fluid pressure distribution along the channel in lubrication systems, hydraulic drive, cooling, etc. | 2020-01-16T09:09:44.591Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "d02c32ae26c763edc10d4c9e7c0c23d7bb7d2641",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1441/1/012108",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fd09c98bc05f463e6addbe461d8f6566223f5640",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
103147019 | pes2o/s2orc | v3-fos-license | Quantitative analysis by UV-Vis absorption spectroscopy of amino groups attached to the surface of carbon-based nanoparticles
Carbon-based nanoparticles must be modified due to their wide array of applications, especially when they are used as biomaterials. After modification, quantitative analysis of the functional groups is essential to evaluate the number of available functional groups for further functionalization. In this study, we modified carbon-based nanoparticles with amino groups using submerged arc discharge in different liquids. The attached amino groups were then characterised and quantified by UV-Vis spectroscopy. This amino group functionalization was also confirmed by Fourier transform infrared (FTIR) spectra. The FTIR spectra of amine-modified nanoparticles show the characteristic absorption peaks of N—H amine, C—H, C=O, C—N and Fe—O at 3418.97; 3000–2850; 1700–1600; 1400–1100; and 480-550 cm-1, respectively. The amine-modified and unmodified nanoparticles show clearly different signals. The FTIR results were correlated with the UV-Vis absorption spectroscopy method using acidic methyl orange. The UV-Vis absorption spectroscopy shows that the absorbance of methyl orange, representing the number of amino groups, was 1.3 times higher for the amine-modified nanoparticles than for the unmodified ones. The absorbance intensity was then used to estimate the quantity of amine groups attached.
Introduction
Fe3O4 materials have been studied in multiple reports due to their several phases such as ferromagnetic magnetite (Fe3O4), maghemite (γ-Fe2O3) and hematite (α-Fe2O3) [1,2]. Furthermore, iron magnetic nanoparticles can be synthesised by various methods, such as electron beam irradiation methods, coprecipitation methods and emulsion precipitation methods [3][4][5]. The iron nanoparticles are also coated using other carbon materials to increase the absorption ability of the nanoparticles. Moreover, carbon coating can be used as an adhesion point for functional groups.
The magnetic-based nanocarbon material can also be synthesised by the arc discharge method. In the conventional arc discharge method, the vacuum pump and gas flow can be replaced by a liquid medium, making it a simple, easy and economical arc discharge method [6]. According to Sano et al. (2010), arc discharge methods in liquid produce nanomaterial products with high yield and lower impurities [7,8]. We previously reported the changing of the hydrophobic carbon surface of TiO2/C photocatalysts to hydrophilic during submerged arc discharge by using ethanol liquid with ethylenediamine or urea added [9,10]. One of the successful surface modifications of TiO2/C with NH2 groups was performed in the medium of ethanol:ethylenediamine (1:1 v/v) and qualitatively analysed by the N─H and C─N peaks shown in its Fourier transform infrared (FTIR) spectrum [10]. However, a quantitative study has not yet been performed. Therefore, this study merged FTIR characterisation and chemical derivatization analysis using UV spectroscopy techniques to qualitatively and quantitatively examine the successful surface modification with amine groups of the Fe3O4/carbon nanoparticles prepared by submerged arc discharge using the ethanol/ethylenediamine medium.
Fabrication of amine-modified nanoparticles
Amine-modified nanoparticles were fabricated by the arc discharge method using carbon electrodes. The anode used was a carbon electrode filled with a mixture of carbon, Fe3O4 and fructose binder with a weight ratio C:Fe3O4 of 3:1. Before filling, the mixture was sonicated for 480 s. After filling, the filled carbon electrodes were heated in a furnace at a temperature of 180ºC for 6 h. The cathode was a sharp pencil-like graphite electrode, and the cathode and anode were positioned at a very close distance (less than 2 mm separation) to produce stepping electrons. Both the cathode and anode were submerged in a 600 mL glass beaker containing 300 mL ethanol 50%:ethylenediamine 50% (1:1 v/v). A current of 10 A (~30 V) then was passed through the electrodes, which causes arcing to occur. During this process, the nanomaterial was formed as a black powder dispersed in the liquid. For comparison, the arc discharge also was performed in a solution of only ethanol.
Characterizations of amine-modified nanoparticles
Black powder obtained from the previous section was characterised using FTIR instruments to qualitatively analyse the amine groups attached to the surface of the nanoparticles. The other analysis was performed by UV-Vis spectroscopy that followed the procedure in [11]. The amine-modified nanoparticles (0.005 g) were treated with 10 mL acidic methyl orange solution (0.05% methyl orange in a 0.1 m solution of sodium dihydrogen phosphate in water) for 2 minutes in a vortex mixer and then centrifuged at 8000 rpm for 5 minutes. After removing the excess dye by rinsing, the attached methyl orange was resolved by 5 mL of a 0.1 M potassium carbonate solution. The filtrate separated by centrifugation was then tested using UV-Vis absorption spectroscopy to quantitatively analyse the amine groups attached on the nanoparticle surfaces at a wavelength of 464 nm. The molar extinction coefficient of the methyl orange was ε = 21600 L/(mol cm).
Results and discussion
The modified Fe3O4 nanocomposite with carbon was successfully fabricated by using the arc discharge method in a liquid medium of ethanol/ethylenediamine. Thus, the material product comprised amine-modified nanoparticles.
During arc discharging, a section of the tip of the filled carbon electrode submerged in liquid was destroyed and then evaporated in the arc area. The carbon vapour that moved away from the arc area then condensed on the Fe3O4 particles, resulting in carbon-encapsulated iron oxide (Fe3O4/C) nanoparticles, which are illustrated in Figure 1. During the carbon layer condensation, the amine group from the ethylenediamine liquid was attached, covalently linked to sp3 carbon atoms. The ethyl group possibly helped the hexagonal-based carbon structure to further form as graphite layers. The amine functional groups attached on the surface were responsible for the dispersion quality of Fe3O4/C in water, and the modified nanoparticle surface with the amine group is illustrated in Figure 2. To characterise the presence of nitrogen-containing functional groups, FTIR characterisation was performed to identify the successful attachment of amine groups on the nanoparticle surface. The FTIR characterisation shown in the spectra in Figure 3 confirmed that the studied method synthesised amine-modified nanoparticles. The presence of the N-H amine indicates the successful surface modification of Fe3O4/C nanoparticles by amine groups provided from the breakdown of the liquid medium of ethanol and ethylenediamine during the arc discharge process. The changes of the surface characteristics of the particle due to the addition of a functional group derived from the liquid medium when synthesised in plasma arc discharge are illustrated in Figure 2. The amine groups were attached on the carbon layer via covalent bonds, and the covalent bonds were linked with carbon atoms in the hexagonal structure of graphite with the nitrogen atom of NH2.
To quantitatively analyse the amine groups on the nanoparticle surfaces, the chemical derivatization of amine group was performed by a UV-Vis spectrometer using an acidic methyl orange (MO) solution. MO covalently bonds to the amine groups in the lower pH, and the bound MOs were released when the solution was in higher pH controlled by the potassium carbonate addition. The released MOs were easily detected by UV-Vis spectroscopy. Figure 4 shows the spectra of the MO released from the modified and unmodified nanoparticles. The spectra in the figure show that the absorption peak of MO was significantly different between two of those samples. The MO absorbance of amine-modified nanoparticles was 1.3 times higher than that of unmodified nanoparticles. Consequently, the amine group concentration presented on Fe3O4/C-ED was equal to 0.022 x 10 -3 mol NH2 per one gram nanoparticles or 0.13 x 10 20 NH2 groups per one gram nanoparticles, if the initial value is multiplied by Avogadro's number.
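A minimal sketch of the Beer-Lambert back-calculation behind this estimate is shown below (Python). The absorbance value and cuvette path length are assumed for illustration (chosen so the result lands near the reported 0.022 x 10 -3 mol NH2 per gram); the extinction coefficient, eluate volume and sample mass are those stated in the Methods, and a 1:1 methyl orange to NH2 binding is assumed.

```python
AVOGADRO = 6.022e23

def amine_loading(absorbance, epsilon=21600.0, path_cm=1.0,
                  eluate_volume_l=0.005, sample_mass_g=0.005):
    """Moles of NH2 per gram of nanoparticles from the released methyl orange."""
    concentration = absorbance / (epsilon * path_cm)   # mol/L (Beer-Lambert law)
    moles_mo = concentration * eluate_volume_l          # mol of methyl orange released
    return moles_mo / sample_mass_g                     # assume 1:1 MO : NH2 binding

loading = amine_loading(absorbance=0.48)                # assumed absorbance at 464 nm
print(f"{loading:.3e} mol NH2/g  ({loading * AVOGADRO:.2e} NH2 groups per gram)")
```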
Conclusion
The amine group being attached on the surface of the nanoparticles was successfully confirmed by FTIR analysis and UV-Vis spectroscopy techniques. The FTIR spectra of Fe3O4/C synthesised in the liquid medium ethanol/ethylenediamine revealed transmission peaks representing N─H amine, C─H, C═O, C─N and Fe─O at 3418.97; 3000-2850; 1700-1600; 1400-1100; and 480-550 cm -1 , respectively. The UV-Vis absorption spectroscopy technique shows that the peak of Fe3O4/C produced in the arc discharge in ethanol/ethylenediamine was 1.3 times higher than the peak of Fe3O4/C produced in only ethanol. | 2019-04-09T13:09:05.619Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "3189ca29d929b6c9b6fc4541129c7a4e7b985b20",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/333/1/012027",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b1af103e24dfa3ce19fa9b0bc069630e6fd6a501",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
220431429 | pes2o/s2orc | v3-fos-license | Sunlight Exposure and Vitamin D Levels in Older People-An Intervention Study in Swedish Nursing Homes
Older people are recommended to take oral vitamin D supplements, but the main source of vitamin D is sunlight. Our aim was to explore whether active encouragement to spend time outdoors could increase the levels of serum 25-hydroxyvitamin D (25(OH)D) and increase the mental well-being of nursing home residents. A cluster randomized intervention trial. Nursing homes in southern Sweden. In total 40 people >65 years. The intervention group was encouraged to go outside for 20–30 minutes between 11 a.m. and 3 p.m. every day for two months during the summer of 2018. We analyzed serum 25(OH)D before and after the summer. Data from SF-36 questionnaires measuring vitality and mental health were used for the analyses. In the intervention group, the baseline median (interquartile range (IQR)) of serum 25(OH)D was 42.5 (23.0) nmol/l and in the control group it was 52.0 (36.0) nmol/l. In the intervention group, the 25(OH)D levels increased significantly during the summer (p=0.011). In the control group, there was no significant change. The intervention group reported better self-perceived mental health after the summer compared to before the summer (p=0.015). In the control group, there was no difference in mental health. Active encouragement to spend time outdoors during summertime improved the levels of serum 25(OH)D and self-perceived mental health significantly in older people in nursing homes and could complement or replace oral vitamin D supplementation in the summer.
Introduction
Vitamin D deficiency is common in older people living in nursing homes in the Scandinavian countries (1)(2)(3)(4). The Swedish Food Agency recommends that all people over 75 years take an oral daily vitamin D supplement of 20 micrograms to ensure an adequate intake (5). A Dutch study with community-dwelling older adults showed that sun exposure, vitamin D intake and vitamin D-related genes substantially contributed to the variation in serum 25-hydroxyvitamin D (25(OH)D), and of those three factors, sun exposure was the most important (6). Another study of older nursing home residents showed that irradiation from an artificial UVB source was as effective as oral vitamin D supplementation for increasing the levels of serum 25(OH) D (7), but artificial UV radiation is not a standard care in nursing homes. Previous intervention studies about the effect of increased natural sunlight exposure on the 25(OH)D levels in older people in residential care have shown inconsistent results. Reid et al. studied the effect of 0, 15 and 30 minutes of daily sun exposure over four weeks in 15 older people in residential care in Auckland, New Zealand (8). They found increased vitamin D concentrations and positive effects on the calcium metabolism in the group with 30 minutes sun exposure, suggesting this as a method to prevent osteomalacia. Sambrook et al. studied the effect on vitamin D status and fall incidence of an intervention with sunlight exposure for 30-40 minutes, five days/week over 12 months and calcium supplementation in older people in residential aged care facilities in Sydney, Australia. The adherence to the intervention was low, and overall, showed no significant increase in serum 25(OH)D or reduction in fall risk (9). In a sub study, the UV doses were estimated. The UV dose had a small, but significant positive effect on the vitamin D levels after six months only if the sunlight sessions started in spring or autumn, but still the 25(OH)D levels were below 50 nmol/l after the summer (10).
To our knowledge, there have been no previous intervention studies confined to older nursing home residents at northern latitudes on the effect of increased natural sunlight exposure on vitamin D levels. Therefore, our primary aim was to explore whether an intervention with active encouragement to spend time outdoors during summertime could increase the levels of vitamin D in people living in Swedish nursing homes. Furthermore, we aimed to investigate whether the intervention could increase the mental well-being of the study population.
Methods
This was a cluster randomized intervention trial confined to nursing home residents in Jönköping in southern Sweden. To be included in the study the residents had to live in the selected nursing homes, be expected to stay in the nursing homes over the summer and be > 65 years. Exclusion criteria for participants were: planned to have a short-term stay, undergoing palliative care/ with short expected survival, other reasons to believe that the resident was not expected to go outdoors, and inability to give informed consent to the study. Residents with severe dementia were not asked to participate.
We obtained written informed consent from all participants.
The study was approved by the Regional Ethical Review Board, Linköping, Sweden and complied with the principles in the Declaration of Helsinki.
Measurements before the summer
The intervention group consisted of individuals living in two selected small nursing homes and the control group were individuals living in another selected large nursing home. The study participants were consecutively included between April and May, 2018. Age, gender and smoking habits were registered. The participants' height was measured to the nearest centimetre and the weight to the nearest kilogram. The waist circumference was measured between the lowest rib and the hip bone to the nearest centimetre. The blood pressure was measured in a sitting position after five minutes of rest. The functional level was estimated with the second subscale Physical activity in the modified Norton scale (11). Data about medical history and current medication were collected from the medical records.
SF-36
The participants answered the nine questions regarding vitality and mental health in the short form health survey SF-36, Table 1, which is a standardized questionnaire measuring selfreported health (12). The answers were compiled according to a standardized scoring protocol (13). The SF-36 has been evaluated in Sweden and found to be reliable (14).
Laboratory methods
A venous blood sample was taken in the non-fasting state between April and May 2018, before the start of the intervention and was used for routine laboratory analyses at Ryhov County Hospital, Jönköping, Sweden. Serum 25(OH) D was analyzed with a 1-step delayed chemiluminescent microparticle immunoassay technique with an Architect i2000SR immunoassay analyzer, Abbott Laboratories, Abbott Park, Illinois, USA.
The intervention
The participants in the intervention group were encouraged, orally and in writing, to go outside for 20-30 minutes between 11 a.m. and 3 p.m. every day for two consecutive months, starting on any day between May 15 and July 1, 2018. The staff were asked to continuously remind the intervention group to go outside on a daily basis regardless of the weather. The intervention group received written advice from the Swedish Radiation Safety Authority about sun protection. The control group lived as usual. All participants were instructed to record the days with time spent outdoors in a diary.
Measurements after the summer
In September/ October 2018, a venous blood sample for follow-up on serum 25(OH)D was drawn and the participants answered the SF-36 questionnaire regarding vitality and mental health again.
Food
The number of times fish/ seafood was on the menu in the nursing homes during the intervention period was recorded. In one of the nursing homes for the intervention group, fish/ seafood dishes were served 56 times, in the other nursing home for the intervention group 52 times, and in the nursing home for the control group 58 times.
Weather
We collected data on sunlight time from the Swedish Meteorological and Hydrological Institute, measured in Växjö, Sweden, located approximately 106 km from the nursing homes. Sunlight time was defined as the number of hours with direct solar radiation above 120 W/m2 measured with a contrast sensor. The summer of 2018 was sunny in southern Sweden. The number of sunlight hours was 352 in May, 308 in June, 357 in July and 188 in August (15).
Statistics
We used IBM SPSS Statistics 25 (International Business Machines Corporation, New York, USA) for the statistical analyses. Non-parametric tests were used as the variables were not normally distributed and the population was small. A Mann-Whitney U test was used to compare median levels of continuous variables between groups, and a related-samples Wilcoxon signed-rank test was used to compare levels before and after the summer within groups. Regarding the results of SF-36, t-tests were used to compare mean levels of continuous variables between groups. We used the Chi-square test to investigate associations between categorical data. Fisher's exact test was used when more than 20% of the cells had an expected frequency below 5. Statistical significance was defined as p<0.05.
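For readers who want to reproduce this kind of analysis outside SPSS, the sketch below (Python, scipy) shows the corresponding non-parametric tests on made-up 25(OH)D values; the numbers are purely illustrative and are not the study data.

```python
from scipy import stats

# Illustrative (made-up) serum 25(OH)D values, nmol/l
intervention_before = [35, 42, 44, 39, 51, 47, 40, 55, 43, 38]
intervention_after  = [48, 60, 59, 52, 66, 61, 50, 72, 58, 49]
control_before      = [50, 55, 47, 61, 52, 49, 58, 45, 63, 51]

# Between-group comparison at baseline (Mann-Whitney U test)
u_stat, p_between = stats.mannwhitneyu(intervention_before, control_before,
                                       alternative="two-sided")

# Within-group before/after comparison (related-samples Wilcoxon signed-rank test)
w_stat, p_within = stats.wilcoxon(intervention_before, intervention_after)

print(f"Mann-Whitney U between groups: p = {p_between:.3f}")
print(f"Wilcoxon signed-rank within intervention group: p = {p_within:.3f}")
```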
Results
We included 20 people in the intervention group and 22 people in the control group, as shown in Figure 1. Eighteen people in the intervention group and all of the 22 participants in the control group remained for analyses before the summer. The baseline characteristics of the study population are described in Table 2. There were no differences in age, sex, BMI, waist circumference, blood pressure, smoking habits or physical activity levels between the groups. The participants in the control group had lower plasma glucose than the intervention group. No differences were seen in levels of parathyroid hormone, haemoglobin, albumin-corrected calcium, creatinine or estimated glomerular filtration rate between the groups. There was extensive co-morbidity in the population but no differences between the groups regarding diagnoses or medication.
The health concept of vitality was measured using questions 1, 5, 7 and 9, and mental health was measured using questions 2, 3, 4, 6 and 8; The items were assessed from 1 to 6: 1= all of the time, 2= most of the time, 3= a good bit of the time, 4= some of the time, 5= a little of the time and 6= none of the time.
Serum 25(OH)D before the summer
For all participants, the median (interquartile range (IQR)) of serum 25(OH)D was 45.0 (28.0) nmol/l before summer. In the intervention group, the median (IQR) of serum 25(OH)D was 42.5 (23.0) nmol/l. In the control group, the median (IQR) of serum 25(OH)D was 52.0 (36.0) nmol/l. No difference in serum 25(OH)D was seen between the groups. serum 25(OH)D was seen between the intervention group and the control group after the summer.
SF-36
The results of the SF-36 are shown in Table 3. The participants in the intervention group reported better self-perceived mental health after the summer compared to before the summer (p=0.015). No difference was seen in vitality scores. In the control group, there were no differences in vitality or mental health scores. Data are presented as means and standard deviations. The independent samples t-test was used to compare mean levels in scores between the intervention group and the control group. The paired samples t-test was used to compare mean levels in scores before and after the summer. A p-value of <0.05 was considered significant (*).
Discussion
In this intervention study confined to older people living in Swedish nursing homes, we found that active encouragement to spend time outdoors during summertime improved their serum 25(OH)D levels significantly. The serum 25(OH)D levels increased both in the intervention group and in the control group during the summer, but in the control group the difference was not significant. The intervention group reported improved self-perceived mental health after the summer compared to before the summer. However, there was no change in mental well-being in the control group. Compared to the Swedish population in the age group >65 years (mean score for vitality 66.4 and for mental health 80.3 (13)), our study participants had significantly lower vitality scores both before the summer (p<0.001) and after the summer (p=0.015). Regarding mental health, our study participants had significantly lower scores before the summer (p=0.007), but not after the summer (p>0.05) compared to the older Swedish population.
We know of two previous studies on the effect of natural sunlight exposure on vitamin D levels in older people in residential care, but both studies are from the southern hemisphere, one from Auckland, New Zealand, latitude 37°S (8) and the other from Sydney, Australia, 34°S (9). Our study was conducted in Jönköping, Sweden, latitude 57°N, where the climate is very different. Our results are in line with the results of the study of Reid et al. (8) but Reid´s study was very small and of short duration, which is why the generalizability may be unclear. Our findings are in contrast to the Australian study, which showed no significant increase in serum 25(OH) D after a 12-month intervention period with increased sun exposure (9). However, a sub study showed that the UV dose had a small, but significant positive effect on the vitamin D levels after six months for those who started their sunlight sessions in spring or autumn (10). In Australia, dermal vitamin D production is possible all year while in Sweden it only occurs during the summer months, so a 12-month intervention period in Sweden would not be clinically relevant. The adherence to the intervention in the Australian study was poor and declined during the study. The duration of our study was two months, which might have helped to sustain the motivation to adhere to the intervention.
The strengths of this study are the randomized design and that older people living in nursing homes were included. This is an important, but difficult patient category to study, not least due to the high co-morbidity in this population, which was evident in our study. Many residents in the nursing homes were not eligible for participation due to severe dementia, and many residents declined participation, which significantly reduced the number of study participants. Despite this, we still found that the intervention with active encouragement to spend time outdoors was feasible and thus clinically relevant in this population. Ultraviolet radiation increases the risk of skin cancer, but the time in the sun needed for the desirable cutaneous synthesis of 25(OH)D is short. In this study, the time for the outdoor stay was set to 20-30 minutes per day and the participants received information about sun protection.
Although larger than several others in the field, a limitation of our study is its small size. A further limitation is the open label design, i.e. that the study participants and the staff were not blinded to the intervention.
What does this study add to the knowledge in the field? Firstly, to the best of our knowledge, this is the first study of frail older people showing that an intervention with encouragement to spend time outdoors resulted in both increased 25(OH)D levels and improved self-reported mental health. Secondly, the intervention is feasible, inexpensive and thus clinically relevant. However, from this study we cannot identify the reason for the improved mental health in the intervention group, nor tell if there is any association between 25(OH)D and mental health. Possible explanations for the improved mental health after the summer, apart from increased 25(OH)D level, may include increased social interaction and more physical activities when spending time outdoors.
Both the intervention group and the control group reached serum 25(OH)D levels above 50 nmol/l immediately after the summer, but it is doubtful if the achieved values would be high enough to maintain adequate levels throughout the consecutive dark season without oral vitamin D supplements. Though, during the summer months active encouragement to increased time outdoors, leading to increased natural sunlight exposure and increased vitamin D levels, could be a complement to or a replacement of oral vitamin D supplementation. This is appealing as polypharmacy is a common problem for many older people (16). Other possible effects of increased time outdoors such as improved mental well-being and more social interaction could follow. If encouragement to spend time outdoors becomes a practical measure to increase the vitamin D levels in older people, it is important to have balanced information about the benefits and risks of sunlight exposure.
In conclusion, active encouragement to spend time outdoors during the summer improved serum 25(OH)D and selfperceived mental health significantly in older people in nursing homes and could be a complement to or a replacement of oral vitamin D supplementation in the summer. These results should be taken into consideration when issuing updated recommendations about the activities and staff numbers in nursing homes for older people. | 2020-07-10T15:20:52.169Z | 2020-07-10T00:00:00.000 | {
"year": 2020,
"sha1": "a105b8231159de2ed44bc00c62646f86f713ede9",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12603-020-1435-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "a105b8231159de2ed44bc00c62646f86f713ede9",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16093159 | pes2o/s2orc | v3-fos-license | On Kato's method for Navier--Stokes Equations
We investigate Kato's method for parabolic equations with a quadratic non-linearity in an abstract form. We extract several properties known from linear systems theory which turn out to be the essential ingredients for the method. We give necessary and sufficient conditions for these conditions and provide new and more general proofs, based on real interpolation. In application to the Navier-Stokes equations, our approach unifies several results known in the literature, partly with different proofs. Moreover, we establish new existence and uniqueness results for rough initial data on arbitrary domains in ${\mathbb R}^3$ and irregular domains in ${\mathbb R}^n$.
Introduction
Let Ω ⊂ R n be a domain, i.e. an open and connected subset. In this paper we study the Navier-Stokes equation in the form (NSE) ∂ t u − ∆u + (u·∇)u + ∇p = f and ∇·u = 0 in (0, ∞) × Ω, u = 0 on (0, ∞) × ∂Ω, u(0) = v 0 in Ω. The equation (NSE) describes the motion of an incompressible fluid filling the region Ω under "no slip" boundary conditions, where u = u(t, x) ∈ R n denotes the unknown velocity vector at time t and point x, p = p(t, x) ∈ R denotes the unknown pressure, and v 0 denotes the initial velocity field which is also assumed to be divergence-free, i.e. ∇ · v 0 = 0. Of course, the boundary condition is not present in case Ω = R n . Observe already that ∇·u = 0 allows to rewrite (u·∇)u = ∇·(u⊗u). Initiated perhaps by Cannone's work ( [8]) there has been a lot of interest in the last decade in mild solutions of (NSE) (see e.g. [3,26,27,29,33,40]) for initial data in so-called critical spaces. All these results rely on variations of Kato's method ( [15]) which allows to obtain global solutions if the initial data are small by a fixed point argument (which is based on Banach's fixed point principle or, equivalently, on a direct fixed point iteration).
The fixed point equation is obtained from (NSE) by first applying the Helmholtz projection P to get rid of the pressure term: (1) ∂ t u − P∆u + P∇ · (u ⊗ u) = Pf, u(0) = v 0 . The operator −P∆ with Dirichlet boundary conditions is, basically, the Stokes operator A which -hopefully -is the negative generator of a bounded analytic semigroup T (·), the Stokes semigroup, in the divergence-free function space X under consideration. Then the solution to (1) is formally given by the variation-of-constants formula (2) u = T (·)v 0 − T (·) * P∇ · (u ⊗ u) + T (·) * Pf.
If one can give sense to the Helmholtz projection P, the Stokes operator A and the Stokes semigroup T (·), this is a fixed point equation for u. A mild solution to (NSE) is a solution to (2). The non-linearity is quadratic and may be rewritten using the bilinear map F (u, v) := P∇ · (u ⊗ v). The natural space for a fixed point argument yielding global solutions would be C([0, ∞), X), but this rarely works for critical spaces. The idea of Kato's method for the critical space X = L 3 on Ω = R 3 is to use an auxiliary space Z = L q with q ∈ (3, 6] and a weighted sup-norm with a polynomial weight t α , and to carry out the iteration scheme in a suitable function space equipped with such a weighted norm. In our paper we denote by L p α ((0, τ ), X) the space of all X-valued measurable functions f such that ‖f‖ L p α ((0,τ ),X) := ‖t ↦ t α f (t)‖ L p ((0,τ ),X) < ∞. As Cannone observed ( [8], see also [27]), Kato's approach leads to Besov spaces in a natural way. On suitable domains Ω ⊆ R n , Amann's work ( [3]) underlined the fundamental role of real interpolation and of abstract extrapolation and interpolation scales. The present paper takes up this point of view.
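For orientation, the abstract fixed-point lemma usually invoked in this context can be stated as follows; this is a common textbook formulation, not a quotation from the paper.

```latex
% Standard bilinear fixed-point lemma behind Kato-type iteration schemes
% (a common textbook formulation; constants and wording are not taken from the present paper).
\begin{lemma}
  Let $E$ be a Banach space and let $B\colon E \times E \to E$ be bilinear with
  $\|B(u,v)\|_E \le \eta \,\|u\|_E \,\|v\|_E$ for all $u, v \in E$.
  If $a \in E$ satisfies $4\eta\|a\|_E < 1$, then the iteration
  $u_0 := a$, $u_{n+1} := a + B(u_n, u_n)$ converges in $E$ to the unique
  solution $u$ of $u = a + B(u, u)$ satisfying $\|u\|_E \le 2\|a\|_E$.
\end{lemma}
```

In the application to (2), one takes a = T(·)v 0 + T(·) * Pf and B(u, v) = −T(·) * F(u, v) in a weighted space of the kind just described, so that smallness of the initial data yields a global mild solution.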
We start our main results with an abstract version of Kato's method for parabolic equations with quadratic non-linearity (Theorem 3.1), which clearly isolates the properties one has to check for in order to obtain local solutions for arbitrary data or global solutions for small initial data. These properties [A1], [A2], and [A3] only concern linear problems.
In the literature, there is an abstract version of Kato's method due to Weissler [49], formulated for parabolic equations with quadratic non-linearity. The approach, however, is different already for the bilinear term (see Remark 3.4), and, extending Weissler's result, we do not only consider weighted sup-norms for functions with values in an auxiliary space, but also weighted L^p-spaces with polynomial weights t^α for p ∈ [2, ∞] (the restriction p ≥ 2 is due to the quadratic nature of the nonlinearity). Moreover, in our second main result (Theorem 3.6) we give necessary and sufficient conditions for the properties [A1], [A2], and [A3]. We were led to these results by our previous work on linear systems of the form

(3)  x′(t) + Ax(t) = Bu(t), t > 0;  y(t) = Cx(t), t > 0;  x(0) = x_0.

Theorem 3.1 is actually a result on a quadratic feedback law u(t) = F(y(t), y(t)) for (3). In (3), C and B are unbounded linear operators (in the application to (NSE) they are the identity on suitable spaces, see below), and [A1] and [A2] simply mean that they are admissible in the sense of linear systems theory for the corresponding weighted Bochner spaces. The conditions in Theorem 3.6 (a) and (b) are generalisations of our results in [18] to the case of not necessarily densely defined operators A. Moreover, we give here new and very transparent proofs based on real interpolation (see Section 5), whereas the proofs in [18] relied on H^∞_0-functional calculus arguments.
In Section 4 we apply our abstract results to obtain mild solutions to (NSE). On R n we reobtain Cannone's result ( [8]) on initial values in Besov spaces (see Subsection 4.1). In Subsection 4.2 we show that, in close analogy to Subsection 4.1, one may likewise use weak Lebesgue spaces as auxiliary spaces Z which leads to mild solutions for initial values in Besov type spaces that are based on weak Lebesgue spaces. This result is new. In Subsection 4.3 we use Morrey spaces as auxiliary spaces and obtain results similar to those in Kozono and Yamazaki [27,Theorem 3] on initial values in Besov type spaces based on Morrey spaces. Our approach allows to reproduce their result even under weaker conditions for the initial value. In Subsection 4.4 we give a variant of a result due to Sawada ([40]) on time-local solutions for initial values in Besov spaces B −1+ǫ ∞,p with p ∈ (n, ∞), but with a different proof. For the quadratic term we simply use the product inequality for Hölder continuous functions whereas the keystone of the proof in [40] was a Hölder type inequality for functions in general Besov spaces. Subsection 4.5 studies mild solutions for arbitrary domains Ω ⊂ R 3 , and we improve results due to Sohr ([43,Theorem V.4.2.2]) and Monniaux ([34,Theorem 3.5]). Moreover, our approach allows to compare both results. In Subsection 4.6 we assume that Helmholtz projection and Stokes semigroup act in a scale of L q -spaces, q ∈ [q ′ 0 , q 0 ], and investigate how the value of q 0 > 2 affects existence of mild solutions for certain initial values. It turns out that, already under these relatively weak assumptions, a larger q 0 allows for more initial values, where the case q 0 > max(4, n) needs an additional gradient estimate for the Stokes semigroup. In any case, these new results make very clear which properties one has to check for the Stokes semigroup in order to obtain mild solutions for "rough" initial values, i.e. for initial values in suitable extrapolation spaces. We mention that there are other approaches to the Navier-Stokes equations for rough initial data or on general domains (see e.g., [25,27,26,14]), and we shall comment on them at the end of each subsection in Section 4.
The paper is organised as follows. In Section 2 we collect basic facts on the Helmholtz decomposition and the Stokes semigroup for arbitrary domains. Those are the basis for applications of the abstract results to (NSE) in Section 4. In Section 3 we present our abstract results, a part of the proofs is relegated to Section 5.
In an appendix we have gathered facts on Besov spaces based on weak Lebesgue spaces that are needed in Subsection 4.2 and facts on Morrey spaces that are needed in Subsection 4.3.
Acknowledgement:
The authors thank the anonymous referee for several suggestions which helped to improve the article, in particular for drawing our attention to [27] and to interpolation spaces of Morrey spaces.
Preliminaries
Let n ≥ 2 and let Ω ⊆ R^n be an arbitrary open and connected subset. We start with basics on the Helmholtz decomposition in L^q(Ω)^n where q ∈ (1, ∞). To this end we define Ẇ^1_q(Ω) := {[u] = u + C : u ∈ L^q_loc(Ω) and ∇u ∈ L^q(Ω)^n} with norm ‖u‖_{Ẇ^1_q(Ω)} := ‖∇u‖_{L^q(Ω)^n}. Although the Navier-Stokes equations involve real valued functions, we consider complex function spaces here, since our abstract arguments below shall deal with complex Banach spaces.
Remark 2.4. This was first noticed by Lions [30, p.67] who resorted to a result due to de Rham [11,Theorem 17',p.114]. We refer to [42] for more details and an elementary proof. Now we are able to prove the following representation of the space L q σ (Ω) which is often taken as the definition. The argument in the proof is the same as in [30, p.67].
Proof. If N q is bijective then its inverse N −1 q is bounded by the open mapping theorem, the projection P q has the desired properties and we obtain L q (Ω) n = L q σ (Ω) ⊕ G q (Ω).
In [41], the operator −N q is interpreted as a weak version of the Neumann-Laplacian on Ω. For q = 2, N 2 is always bijective, and P 2 is the orthogonal projection from (L 2 (Ω)) n onto L 2 σ (Ω). This follows from Remark 2.2 or from Theorem 2.6 via Lax-Milgram.
Function spaces. For q ∈ (1, ∞), we use the usual notation and write W 1 q,0 (Ω) = D(Ω) φ is continuous for ∇ · q ′ } with the natural operator norm. ThenV q (Ω) is a Banach space and V q (Ω) is a Banach space for the norm of W 1 q (Ω) n and a dense subset ofV q (Ω). Lemma 2.8. The set D σ (Ω) is dense in (V q (Ω), · W 1 q ) and in (V q (Ω), ∇ · q ). Proof. By definition it suffices to consider V q (Ω). It is clear that D σ (Ω) ⊂ V q (Ω). Now take φ ∈ (W 1 q,0 (Ω) n ) ′ such that φ vanishes on D σ (Ω). Notice that φ is a distribution on Ω. By Theorem 2.3 there exists h ∈ D ′ (Ω) satisfying φ = ∇h and h is unique up to a constant. Since φ can be represented as a sum of partial derivatives of L q ′ -functions, we conclude that we can assume h ∈ L q ′ (Ω).
For u ∈ V q we choose a sequence (u k ) in D(Ω) n such that u k → u in W 1 q,0 (Ω) n , and we finally obtain Remark 2.7). This ends the proof.
Coming back to the Navier-Stokes equation we notice that, for u ∈ L^q(Ω)^n, we have u ⊗ u ∈ L^{q/2}(Ω)^{n×n} and ∇ · (u ⊗ u) ∈ Ẇ^{−1}_{q/2}(Ω)^n. Applying the Helmholtz projection to get rid of the pressure term ∇p in (NSE) thus needs extensions of the Helmholtz projection P_q to Ẇ^{−1}_q(Ω)^n, q ∈ (1, ∞). Those are defined by restriction (as in, e.g., [43], [34]): Observe that this is meaningful since V̂_{q′}(Ω) ⊂ Ẇ^1_{q′,0}(Ω)^n. Moreover, the extended P_q is linear and continuous. We show that the extension and P_q are consistent.
Proof. It suffices to check equality on The Stokes operator. We define the Stokes operator in L 2 σ (Ω) by the form method. To this end we let V := V 2 = L 2 σ (Ω) ∩ (W 1 2,0 (Ω)) n and define the closed sesquilinear form The operator A associated with a is the Stokes operator on Ω (with Dirichlet boundary conditions). It is well-known that D(A 1 /2 ) = V with equivalent norms (see [24]; for the definition of fractional domain spaces see Section 3). HenceV := V 2 = (V, ∇ · 2 ) ∼ equals the homogeneous spaceḊ(A 1 /2 ) and the dual spacė Observe that, by Lax-Milgram, a suitable extension A of the operator A acts as an isomorphismV 2 →V −1 2 . The operator −A generates the bounded analytic semigroup (T (t)) = (e −tA ) in L 2 σ (Ω), the Stokes semigroup. L q -theory. If there is q 0 ∈ (2, ∞) such that the Helmholtz projection P q0 is bounded in L q0 (Ω) n and there is a bounded analytic semigroup T q0 (·) in L q0 which is consistent with the Stokes semigroup in the sense that (Ω), then T q0 (·) is called Stokes semigroup in L q0 σ (Ω) or simply in L q0 and its negative generator A q0 is called the Stokes operator in L q0 . Observe that by interpolation and self-duality of the Stokes semigroup we then obtain for any q ∈ [q ′ 0 , q 0 ] that the Helmholtz projection is L q -bounded and that the Stokes semigroup extends to a bounded analytic semigroup in L q σ (Ω).
Abstract Kato method
Sectorial operators. For 0 < ω ≤ π we denote by S(ω) := {z = re^{iφ} : r > 0, |φ| < ω} the open sector of angle 2ω in the complex plane, symmetric about the positive real axis. In addition we define S(0) := (0, ∞). Let A be a linear operator on a Banach space X. The resolvent set of A is denoted by ρ(A) and its spectrum by σ(A). The operator A is called sectorial of type ω if σ(A) ⊆ S(ω) and if for all ν ∈ (ω, π) there is a constant M with ‖λ(λ + A)^{−1}‖ ≤ M for all λ ∈ S(π−ν). The infimum of all such angles ω is referred to as the sectoriality angle of A.
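As a quick numerical illustration of this definition (our own toy check, not part of the paper; the matrix, the test angle and the sampling grid are arbitrary choices), one can sample the resolvent bound ‖λ(λ + A)^{-1}‖ over a sector for a small matrix whose spectrum lies in S(0):

import numpy as np

# Toy check of the sectoriality bound  ||lambda (lambda + A)^{-1}|| <= M
# for a sample 2x2 matrix with spectrum {2, 3} contained in S(0); lambda is
# sampled on rays lambda = r e^{i phi} with |phi| < pi - nu.  Illustrative
# values only.

A = np.array([[2.0, 1.0],
              [0.0, 3.0]], dtype=complex)
nu = 0.3
I2 = np.eye(2)

M = 0.0
for phi in np.linspace(-(np.pi - nu), np.pi - nu, 181):
    for r in np.logspace(-3, 3, 61):
        lam = r * np.exp(1j * phi)
        M = max(M, np.linalg.norm(lam * np.linalg.inv(lam * I2 + A), 2))

print("sampled sup of ||lambda (lambda + A)^{-1}|| ~", round(M, 3))

The finite supremum found numerically reflects the standard fact that −A generates a bounded analytic semigroup precisely when A is sectorial of type ω < π/2, which is the setting used throughout Section 3.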
Inter- and Extrapolation spaces. Given a sectorial operator A on a Banach space X, for each n ∈ N, the space X_n := (D(A^n), ‖(I + A)^n · ‖_X) is a Banach space.
There are other scales of inter-and extrapolation spaces. We give the definitions we need in the sequel, resorting to a construction in [17]: Let A be an injective sectorial operator on X. As above, endow D(A k ) with the norm (I + A) k · and R(A k ) with the corresponding norm (I + A −1 ) k · . Let L := A(I + A) −2 . Then L(X) = D(A) ∩ R(A) and the sum norm on D(A) ∩ R(A) is equivalent to the norm L −1 · = (2 + A + A −1 ) · . Endowing X 1 := D(A) ∩ R(A) with this norm and letting X 0 := X makes L : X 0 → X 1 an isometric isomorphism. Hence, by abstract nonsense we can construct a Banach space X −1 and an embedding ι : X 0 → X −1 together with an isometric isomorphism L −1 : X −1 → X 0 making the diagram commute. Identifying X 0 and ιX 0 we regard L as a restriction of L −1 . The operator A −1 := L −1 −1 AL −1 is an extension of A and again injective and sectorial of the same type in X −1 .
We define recursively for k ∈ N spaces X_{−k} and injective sectorial operators A_{−k} in X_{−k}, and obtain isometric isomorphisms L_{−k} : X_{−k} → X_{−k+1}, k ≥ 1. In this framework, we now define homogeneous inter- and extrapolation spaces for k ∈ N: where, for each n ∈ Z, a suitable restriction of A^n acts as an isometric isomorphism Ẋ_{n+1} → Ẋ_n. For each k ∈ N, we also let X_{−k} := (I + A_{−k})^k(X) with natural norm and denote by A_{−k} the part of A_{−k} in X_{−k}. This gives rise to a scale . . . ↪ X_n ↪ . . . ↪ X_1 ↪ X_0 := X ↪ X_{−1} ↪ . . . ↪ X_{−n} ↪ . . .
where, for each n ∈ Z, the operator I + A n acts as an isometric isomorphism X n+1 → X n .
Notice that X −k = X +Ẋ −k and X k = X ∩Ẋ k , k ∈ N. Moreover,Ẋ k + X −k = X −k andẊ k ∩Ẋ −k = D(A k ) ∩ R(A k ) =: X k , k ∈ N (see [17,19] for more details). We remark thatẊ k = X k for all k ∈ Z if 0 ∈ ̺(A). In any case we have . Finally we mention that, if A is densely defined with dense range, we can define the spaces above by completion, i.e.
for each k ∈ N, (see [22,23,28]). In this case, we shall also use the notationḊ(A) in place ofẊ 1 to make clear with respect to which operator the homogeneous domain space is taken.
Abstract Kato method. Let X, Z, W be Banach spaces and let τ ∈ (0, ∞]. Let −A generate a (not necessarily strongly continuous) bounded analytic semigroup T (·) on X.
Let B ∈ B(W, X_{−1}) and let C : X → Z be a closed linear operator that is bounded X_1 → Z. Finally let F : Z × Z → W be a bilinear map satisfying ‖F(y, y)‖_W ≤ K ‖y‖_Z ‖y‖_Z for some K > 0. We consider the abstract problem

(4)  x′(t) + Ax(t) = BF(Cx(t), Cx(t)), t ∈ (0, τ), x(0) = x_0.

We look for mild solutions x(·) in the space C([0, τ), X), i.e. for functions x satisfying the variation-of-constants equation

(5)  x = T(·)x_0 + T(·) * BF(Cx, Cx).

We shall use the notation L^p_α((0, τ), X) := {f measurable : t^α f(t) ∈ L^p((0, τ), X)}. When X = C we also write L^p_α(0, τ).
We assume [A1], [A2], and [A3]. Then, under the above assumptions on the operators B, C and F, for any initial value x_0 ∈ X^♭ := D(A) (the closure being taken in X) there exists η ∈ (0, τ] such that the abstract problem (4) has a unique local mild solution x in C([0, η), X^♭) satisfying Cx ∈ L^p_α((0, η), Z). Moreover, if ‖x_0‖_X is sufficiently small, then the solution exists globally.
An essential ingredient for the proof is the following lemma, which takes care of the non-linearity. The lemma is shown by resorting to Banach's fixed-point theorem on a small ball within E.
becomes small if η > 0 is small enough. In the case that p = ∞, notice that by x 0 is bounded near the origin and α > 0. Thus, for all p ∈ [2, ∞] and x 0 ∈ X ♭ , we can make y E arbitrarily small choosing η > 0 small enough. If, on the other hand, τ = ∞ and x 0 is small enough, assumption [A1] allows to take η = ∞.
In any case Lemma 3.2 applies and shows existence of a solution z ∈ E satisfying z = y + B(z, z). Now put Then, by [A2] x ∈ L ∞ ((0, η), X). By definition of x and the fixed-point equation satisfied by z, z(t) = Cx(t) (recall that C is closed as operator X → Z). Thus, x(·) is a mild solution of the abstract problem (3.1) as claimed.
(Continuity) To see that x is continuous with values in X ♭ we go again through the fixed-point argument. We employ the following ad-hoc notation: for a Banach space Y let , Y ) and y(0) = 0}, endowed with the supremum norm, and let The norm estimate is clear by [A1]. Strong continuity of the semigroup implies that for x 0 ∈ D(A ♭ ), the trajectory T (t)x 0 is bounded and continuous within [D(A ♭ )] and so t α T (t)x 0 defines an element of C 0 ([0, min(τ, r)), [D(A ♭ )]) for all r > 0. Since C ∈ B(X 1 , Z) the first claim follows by letting r → ∞ and using the density Since continuity is a local property we may assume τ < ∞ without loss of generality.
for all t > r, and so strong continuity of the semigroup on X ♭ shows left continuity. A similar argument yields right continuity and so the first claim follows. The second is shown similarly: by [18, Theorem 1.7, /p+α−1 for t > 0 which implies that the convolution is an absolutely convergent integral within Z. Moreover, for small ǫ > 0 , T (·)B * u is absolutely convergent within W 1−ǫ and so Z ≤ Ct α+ 1 /p−1−ǫ is integrable at the origin, left continuity follows. A similar argument for right continuity shows the second claim. We conclude by the fixed point equation (5) that the solution x is continuous in X ♭ .
Remark 3.3. Notice that in a setting of linear systems theory, assumptions [A1]
and [A2] of the theorem mean weighted admissibility conditions for the observation operator C and the control operator B. We refer to [18] for more details.
Remark 3.4. In the applications to (NSE), the operators C and B are suitable identity operators. Weissler's result [49] assumes continuity of the bilinearity Z × X → W .
Observe that, for general operators B and C, this leads to a different setting, whereas we are working entirely with the space Z for the fixed point argument. In applications to (NSE) this has the advantage that (tensor) products need only be defined for elements of Z, and that we can allow for spaces X with very rough initial data (see Section 4).
Corollary 3.5. In addition to the situation in Theorem 3.1, let Banach spaces W^{(1)}, . . . , W^{(m)} and operators B_j ∈ B(W^{(j)}, X_{−1}) be given, and consider the corresponding abstract problem, assuming that the associated operators are bounded for all j = 1, . . . , m. Then time-local mild solutions always exist in case p < ∞. In case p = ∞, or in order to obtain global solutions, a smallness condition on the norms of the functions f_j has to be imposed (j = 1, . . . , m).
Proof. As for Theorem 3.1 but with
Before coming to applications in Section 4 we sum up necessary and sufficient conditions for the assumptions [A1] -[A3] in Theorem 3.1. Notice that [A j 2] and [A2] are of the same type whereas [A3] is a special case of [A j 3]. Throughout the rest of this article, for any interpolation couple (E, F ) of Banach spaces we denote by (E, F ) σ,p the real interpolation space between E and F . For references and details on the real interpolation method see e.g. [5,31,47].
The converse is true in case α > 0 or in case A proof of the theorem and some additional results will be provided in Section 5.
As the proof will actually show, the restrictions p > 2 (instead of p > 1) and α + 1/p < 1/2 (instead of < 1) in the above formulation are only due to the bilinear structure, which forces us to consider the parameters p/2 and 2α in part (b). We mention that in part (a) (and in part (b) in case α = 0) of the theorem the embedding assumption on the space X is optimal. This follows by choosing C = A^{α+1/p} and B = A^{1−2/p}; see also the discussion in [18, Section 1]. Remark 3.7. As mentioned above, condition [A^j3] contains condition [A3] as a special case, letting p_j = p/2 and β_j = 2α. Here, and for one direction of part (b) in case α > 0, the proof is based on a classical one-dimensional convolution estimate due to Hardy and Littlewood, see Lemma 5.7.
The rôle of maximal regularity for recovering the pressure terms. From now on we shall always use C = Id_Z and B = Id_W, i.e. we suppose X_1 ↪ Z and W ↪ X_{−1}. Consequently, the semigroup T(t) acts (pointwise as a bounded operator) X → Z and W → X. In this setting, the abstract Cauchy problem (4) takes the form given above. Proposition 3.9. The embedding assumptions in Theorem 3.6 (a) and (b) concerning the spaces Z and W are equivalent to pointwise growth estimates for the semigroup. Indeed, for σ, θ ∈ (0, 1) one has Proof. Since the semigroup is bounded and analytic, one has the elementary estimates for n ∈ Z. Therefore, the embedding properties for Z and W imply the growth estimates of the semigroup acting X → Z and W → X by interpolation. Conversely, the estimate which finishes the proof.
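To make the interpolation argument explicit, here is a sketch of one direction (our reformulation, under the illustrative assumption (X, Ẋ_1)_{σ,∞} ↪ Z; the statements involving W and a general exponent θ are handled analogously):

\[
\|T(t)x\|_X \le M\,\|x\|_X, \qquad
\|T(t)x\|_{\dot X_1} = \|A\,T(t)x\|_X \le \frac{M_1}{t}\,\|x\|_X ,
\]
so that, by the interpolation inequality for the real method,
\[
\|T(t)x\|_Z \;\lesssim\; \|T(t)x\|_{(X,\dot X_1)_{\sigma,\infty}}
\;\le\; C\,\|T(t)x\|_X^{1-\sigma}\,\|T(t)x\|_{\dot X_1}^{\sigma}
\;\le\; C'\,t^{-\sigma}\,\|x\|_X .
\]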
Given an abstract Cauchy problem of the form We refer to [2,10,12,19,28,48] for this relation, the problem of maximal regularity, characterisation results, and further references on the subject.
Lemma 3.11. Let the injective operator −A generate a (not necessarily densely defined) bounded analytic semigroup on W and let U be a Banach space, such that Remark 3.12. By Theorem 3.10 one can give a sense to the differential equation in (8) for a.e. t > 0 in the time interval under consideration. In applications to the Navier-Stokes equations (NSE) this means that for a.e. t > 0. Interpreting A as −P ∆ and the operator P as restriction P : [34,43]) we are led to (Ω) at a fixed time t > 0. Now the pressure term ∇p can be recovered by Theorem 2.3 which passes from (1) back to (NSE).
The equality A = −P ∆ is no problem in case Ω = R n since then ∆ commutes with P . If Ω ⊆ R n is bounded or an exterior domain with ∂Ω ∈ C 1,1 , equality where ∆ D denotes the Dirichlet Laplacian on Ω. On arbitrary domains Ω ⊆ R 3 , equality A = −P ∆ holds on V = V 2 , see [34].
Application to the Navier-Stokes equations
In this section we apply the abstract result to the Navier-Stokes equations (NSE), where (4) corresponds to (1) and (5) corresponds to (2). In these applications we always have C = Id Z and B = Id W which means that the necessary conditions in Theorem 3.6 (a) and (b) boil down to continuous embeddings or via Proposition 3.9 to decay estimates for the semigroup.
It turns out that the choice of the "auxiliary space" Z is most significant. The structure of the map F then determines the space W, and one can calculate the exponent γ for which ‖T(t)‖_{W→Z} ≤ C t^{−γ} holds. Depending on the context, this may hold on (0, ∞) or on bounded time intervals (0, τ) where τ < ∞. Observe that an application of Theorem 3.6 (c) (and Remark 3.7) requires γ ∈ (1/2, 1) and restricts α and p to α + 1/p ≤ 1 − γ for local solutions and to α + 1/p = 1 − γ for global solutions. Nevertheless, we have some freedom for the choice of α and p. Once α and p are fixed, Theorem 3.6 (a) and (b) allow us to adjust the space X for initial values appropriately.
In the sequel we discuss various choices of Z on R n and on domains. The common approach covers some known results, provides new proofs for other known results, but it also yields new results on R n and on domains.
4.1. Lebesgue spaces on R n . Here and in the following subsections we consider the Navier-Stokes equations (NSE) on R n , n ≥ 2. For simplicity we shall omit R n and superscripts n or n × n in notation. On R n the Helmholtz projection commutes with the Laplacian ∆ and is bounded on L q for 1 < q < ∞.
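Before stating the theorem, the following sketch (our condensed version of the standard computation; constants are suppressed, and the Stokes semigroup on R^n is identified with the heat semigroup composed with the bounded projection P) records how the exponent γ of the previous section is obtained for Z = L^q:

\[
\bigl\| T(t)\,P\,\nabla\!\cdot(u\otimes v) \bigr\|_{L^q}
\;\lesssim\; t^{-\frac{1}{2}}\; t^{-\frac{n}{2}\left(\frac{2}{q}-\frac{1}{q}\right)}\,
\|u\otimes v\|_{L^{q/2}}
\;\lesssim\; t^{-\left(\frac{1}{2}+\frac{n}{2q}\right)}\,\|u\|_{L^q}\,\|v\|_{L^q},
\]
\[
\text{so } \gamma = \tfrac{1}{2}+\tfrac{n}{2q}; \quad
\gamma<1 \iff q>n, \qquad
\alpha+\tfrac{1}{p} = 1-\gamma = \tfrac{1}{2}-\tfrac{n}{2q}\ \text{ for global solutions.}
\]

For n = 3 and p = ∞ this reduces to α = 1/2 − 3/(2q), the exponent appearing in Remark 4.4 (a) below.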
We sum up the above considerations in the following theorem.
Remark 4.4.
(a) In case n = 3, q > 3, and p = ∞ we have α = 1/2 − 3/(2q) and reobtain a result similar to Cannone [8, Theorem 3.3.4]. There smallness is measured in X but the initial value u_0 is taken in L^3 and the solution is required to belong to C_b([0, τ), L^3). Since the action T(t) : W = Ḣ^{−1}_{q/2} → L^3 is needed, this leads to the restriction q < 6 (see [8]). (b) The general case n ≥ 2, q > n and p = ∞, α = 1/2 − n/(2q) is due to Amann [3], whose proof is similar to taking W = Ḣ^{−1−n/q}_q for Z = L^q. However, [3] also covers the case of (sufficiently smooth) domains Ω ⊂ R^n; we shall come back to this in Section 4.6 below. Other results on R^n with somewhat different approaches are due to Kato and Ponce [25], who used commutator estimates for the bilinear term to achieve existence and regularity results for initial values in Bessel potential spaces, and to Koch and Tataru [26], who proved an existence result for initial values in BMO^{−1} and where the structure of the proof is more involved than via our Theorem 3.1. Here q ∈ (n, ∞], p ∈ [1, ∞] and ε ∈ (0, 1] (thus X = B^0_{∞,∞} is included). Observe that Theorem 4.1 yields, for q ∈ (n, ∞) and p ∈ [2q/(q−n), ∞], local solutions for divergence-free initial values in the space X = Ḃ^{−1+n/q}_{q,p} (i.e. for ε = 0) and global solutions for small initial data (which is not covered by the result in [40]). Moreover, the proof in [40] relied on a Hölder type inequality for products of Besov space functions whereas our proof simply uses the Hölder inequality for the product of two L^q-functions. We also remark that we can obtain time-local solutions for the inhomogeneous space X = B^{−1+n/q}_{q,p} by considering τ < ∞ and W = H^{−1}_{q/2}. We shall discuss the case q = ∞ of Sawada's result in Subsection 4.4 below.
4.2. Weak Lebesgue spaces on R^n. In this section we consider as space Z the weak Lebesgue space L^{q,∞} for a fixed q ∈ (n, ∞). For the definition of weak Lebesgue spaces and subsequently used embedding and interpolation results, see the Appendix in Section 6. Concerning the Helmholtz projection we remark that, since we are on R^n, it commutes with the Laplacian ∆ and that it is bounded on weak Lebesgue spaces by real interpolation. Similarly, using [16, Corollary 6.7.2] and the interpolation results of the appendix, the Helmholtz projection is bounded on spaces Ḃ^s_{(q,∞),p}(R^n). The analysis now follows the lines of Section 4.1. For u, v ∈ L^{q,∞}, clearly u ⊗ v ∈ L^{q/2,∞}, and therefore W : 2 by bounded analyticity of the semigroup. By (42) in the proof of Lemma 6.2 we have the embedding Ḣ^{n/q}_{q/2,∞} ↪ L^{q,∞}. Thus δ = n/q yields the estimate ‖T(t)‖_{W→Z} ≤ c t^{−γ}, t > 0, required in Theorem 3.6 (c) and Remark 3.7, with γ = 1/2 + n/(2q). Choosing α and p such that By the embedding property (41) (see Appendix, page 31) the first space embeds into Z = L^{q,∞} for s = 0, which holds by (13). Consequently [A1] is satisfied. Similarly, for verification of assumption [A2], we observe that W embeds into the second space by Lemma 6.2 provided that α + 1/p = 1/2 − n/(2q), which again holds by (13).
4.3. Morrey spaces on R^n. In this section we consider as space Z the Morrey space M^{q,λ}(R^n) for fixed q ∈ (n, ∞) and λ ∈ (0, n/q). For the definition and some basic properties of Morrey spaces, see the Appendix in Section 6.
4.4. Hölder spaces on R^n. We look for time-local solutions in this case. In view of Remark 3.8, we can assume A to be boundedly invertible, which simplifies the calculation of inter- and extrapolation spaces.
Observe that, in a canonical way, ∇ · (Ċ^ε)^{n×n} equals (∇ · (Ċ^ε)^n)^n, and that W is a space of distributions although Ċ^ε is not. Since Riesz transforms are bounded on Ċ^ε (see, e.g., [16, Corollary 6.7.2]), they are bounded on W, and therefore the Helmholtz projection is bounded on W (the basic idea is that the origin, in which the symbol ξ_k/|ξ| is not differentiable, plays no rôle when considering homogeneous Besov spaces).
Theorem 4.10. Let n ≥ 2, p ∈ (2, ∞], and ε ∈ (0, 1). Let α > 0 be such that α + 1/p < 1/2. Then the Navier-Stokes equation (NSE) admits a time-local mild solution. Remark 4.11. The result by Sawada [40] also covers time-local solutions for initial values in spaces B^{−1+ε}_{∞,p} for p up to ∞. However, the space for uniqueness does not involve L^p_α(C^ε) but L^∞_β-spaces with values in certain Besov spaces. This is due to the fact that the keystone in [40] is a Hölder type inequality for products in (inhomogeneous) Besov spaces, which is proved there by means of Littlewood-Paley decomposition and paraproducts. Our proof uses the simple product inequality in C^ε instead, and we obtain the second index p in X by taking L^p in time. So, in our proof, the improvement comes from a better understanding of the linear ingredients for the problem, whereas in [40] it comes from a new insight for the non-linearity. We remark that [40] includes the case ε = 1. Our results allow us to discuss both approaches, to compare them, and to improve them.
Let Ω ⊆ R 3 be an arbitrary domain. Since there is no regularity assumed for ∂Ω, existence of the Stokes semigroup (T (t)) = (e −tA ) is only guaranteed in L 2 σ (Ω) or in interpolation and extrapolation spaces that are associated to L 2 σ (Ω) and the Stokes operator A.
Thus, we obtain
there exists a unique mild solution u to the Navier-Stokes equation (NSE) satisfying
We have τ = ∞ if these norms are sufficiently small.
We want to discuss the result in [34] a bit further. The approach there corresponds to taking Z = V = D(A 1 /2 ). For u, v ∈ Z, one has u · ∇v ∈ L 3 /2 (Ω) 3
there is a unique mild solution to the Navier-Stokes equation (NSE) satisfying
, and we have τ = ∞ if these norms are sufficiently small. Remark 4.19. Concerning the relation of q and q 0 we remark that a calculation . Hence we have q = q 0 for q 0 ∈ {2, 4}, and the special cases q = 12 / 5 for q 0 = 3 and q = 3 for q 0 = 18 / 5 . We did not use any further properties besides boundedness of the Helmholtz projection in L q0 and bounded analyticity of the Stokes semigroup in L q0 σ . Once one has 2q ,p actually grow with q. Since A − 1 /2 is an isometry, it is sufficient to show that (L q σ (Ω),Ḋ(A q )) 3 2q ,p grows with q. To see this, we let q 1 ∈ (q, q 0 ], write out the norms and use the semigroup property in the last step, which in turn follows by interpolation of the action T (t) : L q1 σ → L q1 and T (t) : L 2 σ → L q1 , recall that q 1 ≤ 4 < 6. It is clear that, besides bounded analytic action of the Stokes semigroup in L q σ (Ω) and L q1 σ (Ω), the estimate (24) is all that is needed to prove the desired inclusion in more general cases.
An approach to unbounded domains Ω of uniform C^{1,1}-type based on L̃^q(Ω)-spaces is due to Farwig, Kozono, and Sohr [14]. Here, L̃^q = L^q + L^2 for q ≤ 2 and L̃^q = L^q ∩ L^2 for q > 2. Resorting to these spaces, difficulties that arise for an L^q-theory from the behaviour of Ω at infinity could be overcome. We refer to [14] for details.
Proof of Theorem 3.6
For a sectorial operator in a Banach space X, real interpolation spaces between X and inhomogeneous spaces X k are well-studied, see [5,31,47]. In this section we provide the results on real interpolation of homogeneous spaces needed for the proof of Theorem 3.6. The following result is an analogue of [47, Theorem 1.14.2].
Proof. Let E m := X +Ẋ m . Clearly, E m = (I + A) m (Ẋ m ) and e Em = A m (I + A) −m e X . We borrow a decomposition technique inspired by [28,Proposition 15.26]: let a j be defined by (all operators are bounded). Call the first term in brackets V 0 (λ)x and the second one V 1 (λ)x. A direct calculation shows quasi-linearisability. Hence, Notice that For the estimate "≤", notice that where the first expression is bounded by sectoriality of A.
Finally, let f(λ) := λA^m(1 + λ^{1/m}A)^{−m}. Then for x ∈ E_m and x = y + z with y ∈ X and z ∈ Ẋ_m, Taking the infimum over all such decompositions yields ‖f(λ)x‖_X ≤ c K(λ, x, X, Ẋ_m) and the proof is finished.
The following result corresponds to [47,Theorem 1.14.5] and gives another equivalent norm on (X,Ẋ m ) θ,p in the case of analytic semigroups. We omit the proof since it is identical to the non-homogeneous case.
Proposition 5.2. Let T (·) be a bounded and analytic semigroup on a Banach space X and −A its generator. If A is injective, then for p ∈ [1, ∞] and θ ∈ (0, 1), an equivalent norm on (X,Ẋ m ) θ,p is given by The following result is an analogue of [47, Theorem 1.14.3 (a)]. Lemma 5.3. Let X be a Banach space and A be an injective, sectorial operator on X. Then, for k, j, m ∈ Z with k < j < m, we have Therefore, by sectoriality of A, The second embedding follows also by sectoriality of A from which is true for all x ∈Ẋ j .
The assertion does hold for arbitrary interpolation indices θ ∈ (0, 1) but we shall not introduce fractional homogeneous spaces (see [17,19]) since the above version is sufficient for our purposes.
Results on assumption [A1]. In this section we discuss boundedness of the map for p ∈ [1, ∞] and α ∈ (−1/p, 1 − 1/p). We start our considerations with a simple necessary condition for boundedness of Ψ_∞: boundedness of the set in B(X, Z). Indeed, writing the resolvent of A as Laplace transform of the semigroup and using Hölder's inequality we have for x ∈ X_1 and λ > 0 where the number K depends only on p and the norm of Ψ_∞. Next we treat the special case p = ∞.
Proposition 5.4. Let A be an injective sectorial operator of type ω < π / 2 on a Banach space X and let C ∈ B(X 1 , Z). For α ∈ (0, 1 − 1 / p ) the following assertions are equivalent: Proof. From the necessary condition (32) it follows directly that (i) implies (ii).
Finally, let (iii) hold. By Proposition 5.2 we then have The second estimate used the fact that, for bounded analytic semigroups, the operators (tA) T (t), t > 0 are uniformly bounded.
Theorem 5.5. Let 1 ≤ p ≤ q ≤ ∞, let A be an injective sectorial operator of type ω < π/2 on a Banach space X and let C ∈ B(X_1, Z). Let α ∈ (−1/p, 1 − 1/p). Then the following assertions hold: and if X ↪ (Ẋ_{−1}, Ẋ_1)_{1/2,p}, then it is also bounded X → L^p_α(Z). Theorem 3.6 (a) is a corollary of this result letting q = ∞. Before giving a proof we point out its main argument, a simple reiteration observation.
Under the assumptions on α, β, γ, p, q all of the above conditions are satisfied.
Thus, the operator K_{α,γ}, with kernel k_{α,γ}(t, s) = 1_{(0,t)}(s) (t−s)^{−γ} s^{−α} for s, t ∈ (0, τ), is bounded L^p(0, τ) → L^∞(0, τ) provided that one of the following conditions holds: Observe that condition (iv) implies α > 0 and γ > 0. In case α > 0 the assertion of Theorem 3.6 (b) now follows immediately. In case α = 0 we follow a different strategy. Instead of analysing boundedness of the convolution operator T(·) ∗ we can study boundedness of the map Φ_τ for p ∈ [1, ∞] and α = 0. Therefore in some sense we are in a dual situation to the discussion of assumption [A1], and indeed similar methods can be employed. We shall discuss boundedness of Φ_τ in (37) for general α and then deduce the remaining step for Theorem 3.6 (b) as a special case.
Proposition 5.8. Let A be an injective sectorial operator of type ω < π / 2 on a Banach space X. Let B ∈ B(W, X −1 ) and let α ∈ (0, 1). Then the following assertions are equivalent: Proof. The implication (i) ⇒ (ii) is similar to Proposition 5.4. Let (ii) hold, and let u ∈ U . By Proposition 5.1, we have Hence (ii) implies (iii). Finally, if (iii) holds, then, by Proposition 5.2, which proves the last implication.
In the sequel we study function spaces on R n constructed on Lorentz spaces. When lifting (38) with (I − ∆) − s /2 one obtains the spaces H s q,r as corresponding real interpolation spaces of H s q -spaces: We have thus x H s q,r ∼ (1 − ∆) On the other hand, homogeneous and inhomogeneous Besov-and Triebel-Lizorkin type spaces on R n may be constructed from Lorentz spaces by replacing the L qnorm in the space variable by the corresponding Lorentz norm · L q,r . We denote these spaces by B s (q,r),p and F s (q,r),p and in the homogeneous case byḂ s To define homogeneous spacesḢ s q,r we employ the same technique as above but lift with (−∆) − s /2 instead of (I − ∆) − s /2 . One obtains the analogue identityḢ s q,r = F s (q,r),2 . The proof is an immediate consequence of a retraction/co-retraction argument that boils down the problem to an interpolation of vector-valued Lebesgue spaces.
Moreover, considering the heat semigroup T(·) in M^{q,λ}(R^n), we have by translation invariance of the space that T(·) is bounded analytic in M^{q,λ}(R^n). Denoting the generator by ∆, we then have D(∆) = M^{q,λ,2}(R^n) and, still by Calderón-Zygmund theory, (M^{q,λ})˙_{s,−∆}(R^n) = Ṁ^{q,λ,2s}(R^n) for any s ∈ R. Riesz potential operator and embeddings. Let I_s be the Riesz potential operator given by the convolution kernel |x|^{s−n}. Then the following Sobolev inequality for Morrey spaces holds. This is used in Subsection 4.3.
"year": 2008,
"sha1": "384be2ce5a3777b8fa4df9b322586efe74472339",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0709.2067",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "55f18b87aa65a0b07313a97ddabe922bceb0e2aa",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Real-time multimode dynamics of terahertz quantum cascade lasers via intracavity self-detection: observation of self mode-locked population pulsations
Mode-locking operation and multimode instabilities in Terahertz (THz) quantum cascade lasers (QCLs) have been intensively investigated during the last decade. These studies have unveiled a rich phenomenology, owing to the unique properties of these lasers, in particular their ultrafast gain medium. Thanks to this, in QCLs a modulation of the intracavity field intensity gives rise to a strong modulation of the population inversion, directly affecting the laser current. In this work we show that this property can be used to study the real-time dynamics of multimode THz QCLs, using a self-detection technique combined with a 60GHz real-time oscilloscope. To demonstrate the potential of this technique we investigate a free-running 4.2THz QCL, and observe a self-starting periodic modulation of the laser current, producing trains of regularly spaced, ~100ps-long pulses. Depending on the drive current we find two regimes of oscillation with dramatically different properties: a first regime at the fundamental repetition rate, characterised by large amplitude and phase noise, with coherence times of a few tens of periods; and a much more regular second-harmonic-comb regime, with typical coherence times of ~10^5 oscillation periods. We interpret these measurements using a set of effective semiconductor Maxwell-Bloch equations that qualitatively reproduce the fundamental features of the laser dynamics, indicating that the observed carrier-density and optical pulses are in antiphase, and appear as a rather shallow modulation on top of a continuous wave background. Thanks to its simplicity and versatility, the demonstrated technique is a powerful tool for the study of ultrafast dynamics in THz QCLs.
As pointed out in a number of works, a key ingredient leading to FM-type mode-locking in QCLs is the fact that the gain recovery dynamics unfolds on a timescale much shorter than the roundtrip time. In this case, intensity oscillations of the intracavity field due to multimode beatings generate a strong modulation of the population inversion [16,17]. As shown in Refs. [13,18], thanks to the fact that QCLs have a finite linewidth-enhancement factor (LEF), i.e. a sizeable phase-amplitude coupling, oscillations of the population inversion result in a strong non-linearity at all orders, including the 3rd (Kerr-like). The latter, critically depending on the value of the group-velocity dispersion, can trigger a self-starting FM comb. If, on the one hand, the condition that the gain recovery time is much shorter than the roundtrip time is clearly satisfied in mid-IR QCLs, with their ~ps-long upper state lifetime, the situation is less clear-cut for THz QCLs. Reported gain recovery times for THz QCLs are rather in the 10-50ps range [19][20][21][22], which appears to favour a more conventional amplitude-modulated (AM), or at least a mixed AM-FM, type of mode-locking [23][24][25].
Regardless of the type of mode-locking mechanism and of the considered QCL, the emission of self-starting QCL combs has been found to always present a sizeable amount of AM, in some cases in the form of ps-long pulses with high contrast [11,13,15,26]. Such modulation is at the basis of so-called beatnote spectroscopy techniques [1], among which shifted wave interference Fourier transform (SWIFT) spectroscopy is the most powerful [26,27]. These techniques rely on the measurement of the difference-frequency beatnote between adjacent longitudinal modes of the laser spectrum to assess their mutual coherence, and therefore determine the level of coherence of the entire comb spectrum.
Measuring the beatnote signal between adjacent modes, or fundamental beatnote, requires a quadratic detection process with a bandwidth (BW) at least of the order of the cavity free spectral range (FSR). Besides the use of a sufficiently fast external detector, such as a quantum well infrared photodetector [28] or a Schottky diode, it is well known that this function can be performed by the QCL itself simply by monitoring its current [29]. Indeed, as pointed out above, the intermodal optical beating generates, through partial gain saturation, a modulation of the population inversion Δn [16,17]. This, in turn, produces a modulation of the laser current, which, to first approximation, is directly proportional to Δn [29,30].
For over two decades, intracavity self-detection of the fundamental beatnote has been routinely used for various applications, including studying the coherence of multimode THz QCLs [5,29,31]. However, it is clear that limiting the measurement to the fundamental oscillation at the FSR somewhat undermines the potential of this technique. Indeed, the fact that the gain dynamics of THz QCLs unfolds on a time scale of a few tens of ps, determined by intersubband relaxation, leads to self-detection BWs in the ~10-100GHz range [29,32]. Hence, by monitoring the QCL current with fast electronic acquisition, one could observe in real time the temporal evolution of the intensity of the intracavity field propagating across the QCL resonator as a result of the coherent superposition of many lasing modes. To this end we note that, although techniques such as SWIFT spectroscopy can indeed be used to reconstruct the time-dependent intensity of the laser field, the resulting time-trace is inevitably averaged over tens of billions of periods.
Typically, the FSR of QCLs lies in the 10GHz range (3-4mm cavity length), which, in terms of detection-system BW requirements, poses serious challenges if one wants to observe the real-time evolution of a QCL determined by cavity modes separated by several FSRs. An attempt to observe in the time domain the field intensity emitted by an actively mode-locked 2.5THz QCL with a ~10GHz FSR was made by using a ~30GHz BW Schottky diode mixer, thus limiting the detection to the 3rd FSR harmonic [29]. To circumvent this limitation, here we have fabricated an exceptionally long THz QCL, based on a 15mm single-plasmon ridge waveguide, and relying on an ultra-low threshold, 4.2THz active-region design [33,34]. The resulting multimode emission spectrum consists of ~40 longitudinal modes separated by an FSR of ~2.4GHz. The free-running QCL was driven in continuous wave (CW), and we monitored the gain-induced ac current modulation with the help of a microwave probe connected to a real-time oscilloscope, yielding an overall measurement-system BW of ~45GHz. This BW allowed us to probe in real time the mutual coherence of ~20 modes.
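As a consistency check (our estimate, not a figure quoted in the text), the measured FSR and the 15mm cavity length imply a group refractive index typical of GaAs-based single-plasmon waveguides at these frequencies:

\[
\mathrm{FSR} = \frac{c}{2 n_g L}
\;\Longrightarrow\;
n_g \approx \frac{c}{2\,L\,\mathrm{FSR}}
= \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \times 0.015\ \mathrm{m} \times 2.43\times 10^{9}\ \mathrm{Hz}}
\approx 4.1 .
\]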
With this real-time acquisition technique, we perform a direct and complete analysis of the laser dynamics vs drive current using the collected time-dependent intensity waveforms (WFs), allowing us to discriminate between amplitude and phase noise. Depending on the drive current, we observe a stable comb regime characterised by two current pulses per roundtrip, alternating with wider current regions characterised by a highly unstable regime, with one pulse per roundtrip and much more pronounced amplitude noise and timing jitter.
On the theoretical side we manage to reproduce a large part of the experimental results by integrating a set of Effective Semiconductor Maxwell-Bloch Equations already able to describe, e.g., the alternation between locked and unlocked multimode regimes in a THz QCL and the spontaneous formation of pulse trains in both the intracavity electric field intensity and the carrier density [18]. Notably, here we find the spontaneous generation of shallow optical and carrier-density pulses on top of a CW background. These pulses, also designated as "pulsations" to distinguish them from traditional, high-contrast, mode-locked laser pulses, are consistent with the master equation approach presented in Ref. [13] and with the analytical results recently derived by D. Burghoff within an approximated mean field theory [14]. We believe that the demonstrated technique and results will pave the way to the study of the real-time dynamics of THz QCLs.
II. QCL dc characteristics and self-detection setup
The active region of the QCL studied in this work is based on 76 repeated periods of an Al0.25Ga0.75As-GaAs bound-to-continuum design with a resonant-phonon extraction stage [32]. The precise layer sequence can be found in the Supplementary material, and is a slight variation of the one reported in Ref. [35]. The heterostructure is grown on top of a semi-insulating GaAs substrate, and the active region is sandwiched between 400nm- and 50nm-thick bottom and top contact layers, n-doped at levels of 3 × 10^18 and 5 × 10^18 cm^-3 respectively. Devices were processed into 15mm-long, 100µm-wide single-plasmon waveguides.
In Fig.1(a) we report the dc Voltage/Current (V/I) and Output Power/Current (P/I) characteristics of the QCL studied here, measured at T=30K. These were obtained by a continuous up-sweep of the drive current from 0A to 2.65A, followed by a down-sweep from 2.65A to 0A. We observe a clear hysteretic behaviour in the V/I characteristic. As discussed in detail in Ref. [35], this is attributed to the presence of a space-charge accumulation layer, progressively moving across the active region as the current is changed, and dividing the latter into a low- and a high-field domain. Thanks to the very low threshold current density (60A/cm^2), the threshold current is of only 1A, despite the extremely long ridge waveguide (THz QCL cavity lengths are typically of the order of only a few mm). The measured output power, of 0.2-0.3mW, is fairly constant, and we observe a very broad lasing current range extending up to 2.5 times the threshold and beyond. A few representative emission spectra, measured with an FTIR spectrometer, are reported in Fig.1(b). Emission covers up to ~200GHz, with the centre of the spectrum blue-shifting from ~4.23THz to ~4.3THz from low to high current. In Fig. 2(a), we report a schematic of the experimental setup used for the self-detection experiment. The QCL under study is mounted on the cold-plate of a cryogenic probe station and kept at a temperature of 30K. A broadband, 67GHz-BW coplanar probe of 125µm pitch is positioned at one end of the QCL ridge cavity, at approximately 300µm from the facet. The coplanar probe is connected to a 45GHz-BW RF amplifier with a ~35dB gain (RF-Lambda RLNA00M45GA), whose output is monitored simultaneously with a 60GHz-BW real-time oscilloscope (Keysight Infiniium) and a 67GHz-BW spectrum analyser. The dc bias is applied to the QCL using 4 standard dc probes: two probes are positioned on the top contact, and two probes on the lateral ground contacts (note that we obtained virtually the same data using a wire-bonded QCL, showing that the observed behaviour is independent of the type of electrical wiring).
To experimentally test the amplifier, we connected it to a fast InGaAs photodiode with a ~30GHz 3dB BW, which we illuminated with a train of ~150fs mode-locked pulses from a commercial fiber laser. The resulting electrical pulse, measured with the real-time oscilloscope, is displayed in Fig.2(b). Its ~28ps duration is consistent with the response time of the photodiode, showing that our detection-system bandwidth is of at least ~35GHz.
In Fig.2(c) we report the S12 parameter of the 15-mm long QCL ridge waveguide, measured with two coplanar probes at both ends of the device and a vector network analyser (the electric field profile of the microwave guided mode, computed with a finite element solver is shown in Supplementary material). The QCL was operated at 30K, with a drive current of 1.5A. From 0 to 40GHz we observe a very strong power attenuation, between 40dB and 50dB, i.e. in the range ~2.5-3.3dB/mm, up to ~3.3-4dB/mm above 40GHz. This is attributed to the combined effect of the device conductance and free carrier absorption in the doped contact layers and metal contacts [36]. As a consequence of such a strong absorption, the propagation of the electrical pulse is heavily damped, preventing microwave resonant effects. In other words, as shown pictorially in the inset of intracavity field. They show that the QCL is operating in a regime of self-mode-locking, with a degree of temporal coherence that can be determined thanks to the real-time capability of our experimental technique.
A first striking feature is that, while WF-FN shows one pulse per roundtrip, i.e. a repetition rate of 2.43GHz, corresponding to the cavity FSR, WF-HM shows two pulses per roundtrip, with a repetition rate of ~4.83GHz, at twice the FSR [23,37]. Red and blue dots in Fig. 3 indicate the points on the QCL I/V curve with WFs showing respectively 1 or 2 pulses per roundtrip. These two regimes were triggered by randomly changing the QCL drive current upward or downward in a continuous way. As shown in the Figure, we observe multiple operating points, with the same drive current and different voltages, lying on top of, and even inside, the hysteresis loop obtained by a full continuous up-sweep followed by a down-sweep (same as Fig.1(a)). In some cases (green arrows in Fig. 3), and within the error of the voltage measurement, the single- and double-pulsing regimes are superimposed, i.e. one regime or the other can be obtained at virtually the same bias and current, depending on the "current history". Thanks to real-time acquisition, in Fig.4, Fig.5 and Fig.6 we provide a complete analysis of the measured WFs, revealing two completely different regimes. Fig.4(b) shows a time-trace of WF-FN recorded in a single shot (no averaging) in the time interval -1µs/+1µs, corresponding to ~5000 periods. We observe a strong quasi-random modulation of the signal amplitude, or amplitude noise. In addition, the shape of the current pulses dramatically changes with time. This is shown in the panels of Fig.6(a), displaying three ~2.5ns-long time windows of the time-trace of Fig.4(b), at different delays from the trigger event (~-1µs, ~0µs or ~+1µs). An alternative view of the extent of this phenomenon can be found in Fig.6(b) and Fig.6(c), showing WF-FN recorded for two different values of the trigger level, with the oscilloscope set in persistence mode. In this way 2ns-long time-traces are recorded in sequence and shown simultaneously on the display: more frequent WFs appear with a brighter colour intensity. As can be seen, for a given trigger level, despite keeping a relatively regular periodicity, the amplitude and shape of the pulses fluctuate wildly. Changing the level of the trigger produces an even larger change (compare Fig.6(b) and Fig.6(c)), revealing the quasi-chaotic nature of the radiation emitted by the QCL (see also Movie 1). We systematically observed this type of behaviour when the QCL was operated in correspondence with the red dots of Fig.3, i.e. when emitting one pulse per roundtrip (see Supplementary material for more traces). The Fourier transform of the time-trace of Fig.4(b) is displayed in Fig.4(d). As expected, we find a comb of lines separated by the cavity FSR, resulting from the coherent beating of the THz modes. From their number, we conclude that modes separated by up to at least ~17 FSRs contribute to the observed current pulse.
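To illustrate how such a pulse train maps onto a comb of RF lines (a synthetic toy example built by us, not the measured data; sampling rate, record length, pulse shape and noise levels are all illustrative assumptions), one can generate a jittery train of ~100ps Gaussian pulses at the FSR and inspect its spectrum:

import numpy as np

# Toy illustration (synthetic data, not the measurement): build a 2.43 GHz
# train of ~100 ps Gaussian pulses with amplitude noise and timing jitter and
# inspect its RF spectrum.  All parameters below are illustrative choices.

fs = 120e9                       # sampling rate, Hz
T = 200e-9                       # record length, s
frep = 2.43e9                    # repetition rate (cavity FSR), Hz
sigma = 100e-12 / 2.355          # Gaussian sigma for ~100 ps FWHM

t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.zeros_like(t)

t_pulse = 0.0
while t_pulse < T:
    amp = 1.0 + 0.3 * rng.standard_normal()      # amplitude noise
    jit = 5e-12 * rng.standard_normal()          # timing jitter
    signal += amp * np.exp(-0.5 * ((t - t_pulse - jit) / sigma) ** 2)
    t_pulse += 1.0 / frep

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

# The RF spectrum is a comb of lines at multiples of frep; their envelope,
# set by the ~100 ps pulse shape, limits how many harmonics carry power.
line = np.argmax(spectrum[1:]) + 1
print(f"strongest RF line at {freqs[line] / 1e9:.2f} GHz")

In such a toy model the ~100ps pulse duration sets the spectral envelope of the comb, and thus the number of harmonics carrying significant power, while amplitude noise and timing jitter broaden the lines into pedestals, mirroring the qualitative difference between the two regimes discussed here.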
In Fig.4(c) we report the time-trace of WF-FN obtained by averaging over 256 time-traces in the interval -1µs/+1µs (we verified that by increasing further the number of averages the time-trace did not change).
Contrary to the single-shot acquisition of Fig.4(b), in this case the amplitude of the signal undergoes a fast decrease with time. This results from the averaging process: as the pulse train loses its coherence, due to the strong amplitude noise and pulse-shape fluctuations, the time-trace averages out. Within ~20ns from the trigger event (~50 periods) the pulse amplitude has decreased by a factor of 2. In the frequency domain such loss of coherence gives rise to a large noise pedestal around the comb lines, of approximately 20MHz FWHM (see Fig.4(d)). The picture changes completely in the case of two pulses per roundtrip (WF-HM and blue dots in Fig.3).
As shown in Fig.5(b) and Fig.5(c), in this case the pulse amplitude is virtually unaffected by the averaging process, and the fast decrease found in Fig.4(c) has disappeared. Amplitude fluctuations, whose magnitude is reduced compared to Fig.4(b), are mostly due to a coherent modulation of unknown origin (clearly visible in the averaged time-trace shown in Fig.5(c)). As shown by the three panels of Fig.6(d) and by the persistence-mode acquisition shown in Fig.6(e), the strong pulse-shape fluctuations found on WF-FN (Fig.6(a)) have completely disappeared (see also Movie 2). We systematically observed this type of behaviour when the QCL was operated in correspondence with the blue dots of Fig.4(e). In the left inset we report a spectrum of the same line acquired with the spectrum analyser (see Fig.3(a)). The linewidth of ~50kHz is limited by the resolution bandwidth (30kHz), indicating a coherence time of the pulse train > 30µs.
To verify that WF-HM is dominated by timing jitter, i.e. that the related comb spectrum is dominated by phase-noise, we have measured the noise at ~130MHz from each comb line in the spectrum obtained by (d)).
An important finding of our measurements is that the duration of the current pulses is always in the range ~90-110ps, regardless of the operating regime (single or double pulse per roundtrip). Based on Fig.2(b), we exclude that this duration is affected by the measurement-system bandwidth. At the same time, the observed pulse length is somewhat longer than what would be expected from the THz emission spectra of Fig. 1 in the case of negligible or low phase dispersion, which points towards an FM-type mode-locking operation (see next Section).
IV. Simulations and discussion
The experimental results reported in the previous Section could be reproduced with good agreement by means of simulations based on a model developed in [18], already successful in describing the multimode dynamics in a THz QCL, based on a set of Effective Semiconductor Maxwell-Bloch Equations. With this approach we properly take into account peculiar features of the radiation-matter interaction in a semiconductor active medium, such as, most importantly, a non-null phase-amplitude coupling provided by the LEF. Its fundamental role in affecting both the stability of the continuous wave solution (single-mode operation), and thus the multimode competition emerging thereof, as well as the self-phase locking leading to comb formation in a QCL, has been very recently demonstrated [39]. Remarkably, in agreement with what was demonstrated in Ref. [18], we show that the unidirectional ring configuration considered by our model is capable of reproducing the fundamental phenomenology of the experimental findings, although it does not encompass the Spatial Hole Burning (SHB) associated with the superposition of forward- and backward-propagating fields in a FP resonator. This allows for a radical numerical simplification and potentially for a simplified analysis of the self-mode-locking phenomenon.
The set of nonlinear partial differential equations that describes, in a semiconductor laser and thus in particular in a QCL, the spatio-temporal evolution of the slowly varying envelopes of the electric field E, of the macroscopic polarization P and of the carrier density D in the rotating wave and low transmissivity approximations can be cast in the form of Eqs. (1)-(3), with the parameter values listed in Table 1.
All simulations were performed adopting the parameter values reported in Table 1, which are fully compatible with the experimental configuration except for the FWHM of the gain lineshape (≅ 100GHz), which is smaller than the typical expected values of ~400-500GHz (see below) [33,34].
In Fig. 7 we report the results of the numerical integration of Eqs. (1)-(3), carried out with a split-step method based on pseudospectral and Runge-Kutta algorithms. Concerning the pulse lengths (FWHM), regardless of the operating regime (stable or unstable) we obtain values in the range ~50-70ps (see Fig.7), i.e. shorter than what is found experimentally (~90-110ps, see previous Section). This parameter depends on the value of the carrier lifetime. We found that, to reach the observed ~100ps pulse length, this lifetime must be about 100ps, which is nevertheless larger than the typical gain recovery times measured in the literature, in the range ~10ps-50ps [19][20][21][22].
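For readers unfamiliar with the numerical scheme, the following skeleton (a generic sketch written by us, not the authors' code) shows the structure of a split-step integrator of the kind mentioned above: the linear propagation term is advanced exactly in Fourier space, while a placeholder local nonlinearity is advanced with an explicit Runge-Kutta step. The actual model integrated in the paper is the set of Effective Semiconductor Maxwell-Bloch Equations (1)-(3) with the parameters of Table 1; every quantity below is an illustrative stand-in.

import numpy as np

# Schematic split-step solver (generic sketch, NOT the authors' code): evolve
# a field E(z, t) on a ring of length L_cav under dE/dt = -v dE/dz + N(E),
# treating the advection term in Fourier space and a placeholder local
# nonlinearity with an explicit midpoint (RK2) step.

Nz = 512
L_cav = 1.0                                   # normalised ring length
k = 2.0 * np.pi * np.fft.fftfreq(Nz, d=L_cav / Nz)

v = 1.0                                       # placeholder group velocity
dt = 1e-3
lin_half = np.exp(-1j * v * k * dt / 2.0)     # exact half-step of the advection

def nonlinear_rhs(E):
    # placeholder saturable-gain-type term (illustrative only)
    return (1.0 - np.abs(E) ** 2) * E

rng = np.random.default_rng(1)
E = 1e-3 * (rng.standard_normal(Nz) + 1j * rng.standard_normal(Nz))  # noisy seed

for step in range(5000):
    E = np.fft.ifft(lin_half * np.fft.fft(E))     # linear half step
    k1 = nonlinear_rhs(E)                          # RK2 midpoint for the nonlinearity
    E = E + dt * nonlinear_rhs(E + 0.5 * dt * k1)
    E = np.fft.ifft(lin_half * np.fft.fft(E))     # linear half step

print("final mean intracavity intensity:", float(np.mean(np.abs(E) ** 2)))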
From Fig.7 it emerges that the pulses appear as a shallow modulation (≲ 20%), or pulsation [14], of the total carrier density, i.e. the optical power is dominated by a large continuous wave (CW) background. From Fig.8 we also find that the optical and carrier-density pulses are ≅180deg out of phase. This finding and the observation of shallow pulses sitting on a CW background are expected, since the carrier lifetime is much shorter than the roundtrip time, which has been shown to hamper the generation of strong pulses as in standard mode-locking, and rather lead to a CW-dominated, FM-comb type of emission [16,17]. We note that in the actual self-detection configuration (see Fig.2) it is clearly not possible to establish experimentally the presence of a CW background. Indeed such a background gives rise to a dc voltage offset that is indistinguishable from the externally applied dc bias.
As discussed previously, the results of Fig.7 and Fig.8 are obtained with a gain FWHM of 100GHz. Indeed, while for this value we obtain a good agreement with the experimental results in terms of (i) the extension of the current regions showing stable/unstable pulsed emission, (ii) the number of pulses per roundtrip and (iii) the pulse shape, we found that an increased gain FWHM in the range from 200 to 400GHz (more in agreement with the experimental data [31]), while leading to the formation of temporal structures of similar shape, produced a regime with more than two localized pulses per roundtrip. Further increasing the FWHM to 500GHz showed the onset of either CW emission or of an irregular multimode regime, indicating stronger mode competition. We argue that this discrepancy might be ascribed to the unidirectional character of the field propagation in our model. In fact, in FP resonators, bidirectional propagation with inclusion of SHB has been shown to modify the single-mode instability and the mode coupling, favouring mode-locking with a large gain FWHM, as recently shown in a mid-IR QCL [13][14][15]. Nevertheless, we expect that the qualitative dynamical picture should not be radically modified, showing a first transition from single-mode emission to multimode regimes, with alternating windows of localized pulses and unlocked, irregular emission [15,18]. We plan to expand our model in the near future to include a FP configuration, following what was done in Ref. [15] in the mid-IR, to achieve a more quantitative agreement with the experimental results presented here.
A closer inspection of Fig. 7(a) in the case of two pulses per roundtrip (µ = 1.5, 1.8, 1.9) reveals that the two pulses are not equally spaced in time, but rather appear as doublets separated by the roundtrip time. This is in disagreement with the experiment, where the pulses have a period equal to half the roundtrip time.
The effect is progressively more pronounced when the value of µ is increased (see the bottom WF in Fig.7(a)).
While we do not have an intuitive explanation for this phenomenon, we note that the solitonic character of the peaks, theoretically predicted in Refs. [14,18], may lead, at steady-state, to an adjustment of the separation between localized structures arising from mode-locking. In the experiment, additional confinement mechanisms, not encompassed in the model, may be at work. As a result, and contrary to the experimental RF spectra, where only the lines with frequencies equal to even multiples of the cavity FSR are visible (see Fig.5(d) and Supplementary material), in the computed RF spectra, both even and odd multiples are present, with an attenuation of ~20 dB between the intensities of the first and second beatnote at µ = 1.5 (see Supplementary material).
A final remarkable piece of evidence provided by the model is the confirmation of coexisting regimes of 1-pulse and 2-pulse emission for the same parameters (bias current, in particular). Simulations show that either regime may appear at steady state, by using different random and noisy initial conditions. This exactly matches the experimental evidence, where the operating points showing coexisting regimes are reported by the green arrows in Fig. 3 and are obtained by randomly ramping the current up and down, as described in the previous Section. We believe that this is a preliminary indication of the basic solitonic characteristics of the pulses emerging from the mode-locking onset, as already highlighted in Refs. [14,18,39]. Here, in addition, we can directly observe the shape of the carrier density in the time domain and correlate it with our simulations. To the best of our knowledge, this is the first time that such a coupled experimental/numerical observation is provided in a THz-QCL.
[Fig. 8: simulated carrier density (cf. Fig. 7) and associated intracavity optical power (black) in the interval -0.5 ns/+0.5 ns, for µ = 1.5.]
V. Conclusions
Understanding in depth the temporal dynamics of THz QCLs is proving crucial for the development of frequency combs based on these unique laser sources. To this end, the possibility to observe their behavior unfolding in real time, on timescales of tens of ps or lower, may bring new insights. In this work we make an important step in this direction by demonstrating a self-detection technique based on broad-BW electronic acquisition. So far, the scarcity of versatile, sensitive, and sufficiently fast THz detectors has hampered real-time measurements. By contrast, our technique does not employ an external detector, and can therefore be applied to any THz QCL, regardless of its emission wavelength, using the same experimental setup, which makes it an extremely appealing and flexible tool.
To demonstrate the potential of this technique we investigate a multimode THz QCL with a 15mm-long ridge cavity. By operating the QCL in free-running, we observe the spontaneous generation of ~100ps-long (not limited by the electronic system's bandwidth), regularly spaced current pulses propagating back and forth inside the cavity. The latter are the result of a modulation of the laser population inversion by the intensity of the QCL intracavity field, and are produced by the coherent interference of ~20 FP modes. As such they are the manifestation of so-called dynamic population gratings, or population pulsations [16,31,40,41], whose spatiotemporal evolution was never observed in real time.
Depending on the pump current we find two clearly distinct dynamical regimes of operation. A first dominant regime shows one pulse per cavity roundtrip and is characterized by a high level of amplitude and phase noise, yielding coherence times of tens of ns. A second regime is instead observed predominantly at higher currents, over a more limited current range, and shows two (evenly spaced) pulses per roundtrip. In this case we find a stable pulse train, characterized by much lower amplitude and phase noise, with coherence times of tens of µs.
The observation of regular and irregular multimode regimes in different current regions, yielding respectively narrow and broad beatnotes at the cavity FSR, is typical of multimode QCLs [1][2][3][4][5][6][7][8][9][10][18]. In this work we have provided for the first time a real-time analysis of such regimes. Our experimental findings are corroborated by simulations obtained using a set of Effective Semiconductor Maxwell-Bloch Equations that includes a non-null phase-amplitude coupling provided by the LEF [18]. Despite the fact that our simplified model does not include SHB, we managed to reproduce the most salient experimental finding, namely the appearance of unstable/stable operating regions with one or two pulses per roundtrip, respectively. The simulated carrier density (i.e. current) pulses are found to be 180 deg out of phase [17] with respect to the intracavity optical pulses, and the QCL shows a predominantly FM-type dynamics, with both (optical and carrier density) pulses appearing as a shallow modulation on top of a CW background. Importantly, our model confirms the coexistence of regimes with one or two pulses per roundtrip found experimentally at specific bias points (Fig. 3, green arrows), which we interpret as an indication pointing towards the solitonic character of the observed QCL dynamics. Indeed, a definite proof of this character would require further simulations, e.g.
showing that the pulses can be excited independently within the same parameter choice by appropriately tailoring the initial conditions, or injecting pulses via a coherent external field. Further work will be aimed at elucidating these properties. | 2021-03-25T01:15:43.923Z | 2021-03-24T00:00:00.000 | {
"year": 2021,
"sha1": "3e13840e93a7b46a15f2757d0baedd3b38766952",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3e13840e93a7b46a15f2757d0baedd3b38766952",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
202252902 | pes2o/s2orc | v3-fos-license | Quantitative Methods and its Contemporary Directions
The field of business and economics has seen a surge in big data applications in recent years. These advances have impacted the research direction significantly for both academics and practitioners, as the methods and theory underlying these applications are also changing. This is because the factors affecting these applications are dynamic in both regional and international contexts (Davies & Hughes, 2014).
Quantitative Methods and its Contemporary Directions
Sheraz Alam Malik 1 https://10.29145/2017/jqm/010101 The field of business and economics has seen a surge in big data applications in recent years. These advances have impacted the research direction significantly for both academics and practitioners, as the methods and theory underlying these applications are also changing. This is because the factors affecting these applications are dynamic in both regional and international contexts (Davies & Hughes, 2014).
As complex social, political, economic and business factors affect quantitative research, a critical debate is needed to understand and apply them in evidence-based scenarios. Therefore, a platform is needed to develop and understand the complexity involved in the interplay of these factors. JQM attempts to serve as this platform. However, we first need to briefly understand these factors before embarking on this important journey.
Starting from the social aspect of quantitative research, data science is having an impact at different levels of human life (Bryman, 2015). This includes health sciences and related policy making, the debate around sustainability, environmental protection initiatives and cultural changes due to the digital revolution. These multifaceted aspects have a profound impact on future research directions.
Political factors play yet another role in shaping our lives by changing the debate around developmental initiatives in regional and international contexts, customer engagement on key policy issues, human development and budgetary allocation (Punch, 2013). Regional shifts in public mood influence consumer behavior, which in turn affects the economy and business of the region. This requires in-depth investigation by academics to study its effects in these associated fields.
Similarly, economic factors include currency fluctuations due to consumer-based economies, the impact of major incidents on knowledge management and the higher education sector, human resource development informed by big data insights, and entrepreneurial ventures arising from the shared economy. All these diverse areas of knowledge require a strong quantitative approach to come up with meaningful answers and solutions along with new theoretical underpinnings (Sekaran & Bougie, 2016).
Business factors are the most relevant part of this platform, as they are directly related to its core fields of application in the social sciences, i.e. management and economics. Business models across the world are adapting to the growing needs of society, customers, governments and the environment surrounding them. This means that no single theory or method is robust enough to explain these changes. Thus, a multidisciplinary approach is the only way to address these ever-changing phenomena and their possible investigation.
As business and economics are strongly interrelated, viewing them through a single lens of quantitative research can result in inaccurate analysis and flawed assumptions (Sekaran & Bougie, 2016). A more systematic approach is to treat these disciplines from an inherent-complexity and interconnectivity point of view. This will give more leverage in seeing different strands of literature and their relevance to these two important disciplines.
I will conclude by highlighting the fact that the focus of this research platform is futuristic both in theory and in applications. This means that divergent and diverse points of view will be entertained, resulting in new avenues of research and scholarship. This journal will advance the debate in every discipline of social science, spanning from analytics, marketing and supply chain management to finance, human resources and economics. Evidence-based theories will be in the very DNA of this platform, and it will encourage originality both from academics and from professionals in the fields of social sciences and its associated disciplines. | 2019-08-18T22:12:36.023Z | 2017-08-31T00:00:00.000 | {
"year": 2017,
"sha1": "6454241143cc2042bdd5e60ea2bd65342daeaa65",
"oa_license": null,
"oa_url": "https://ojs.umt.edu.pk/index.php/jqm/article/download/5/8/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "aefdfa6f97b4af9c465313aab388a4c8118fec6e",
"s2fieldsofstudy": [
"Business",
"Computer Science",
"Economics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
119101368 | pes2o/s2orc | v3-fos-license | Intrinsic hole mobility and trapping in a regio-regular poly(thiophene)
The transport properties of high-performance thin-film transistors (TFT) made with a regio-regular poly(thiophene) semiconductor (PQT-12) are reported. The room-temperature field-effect mobility of the devices varied between 0.004 cm2/V s and 0.1 cm2/V s and was controlled through thermal processing of the material, which modified the structural order. The transport properties of TFTs were studied as a function of temperature. The field-effect mobility is thermally activated in all films at T<200 K and the activation energy depends on the charge density in the channel. The experimental data is compared to theoretical models for transport, and we argue that a model based on the existence of a mobility edge and an exponential distribution of traps provides the best interpretation of the data. The differences in room-temperature mobility are attributed to different widths of the shallow localized state distribution at the edge of the valence band due to structural disorder in the film. The free carrier mobility of the mobile states in the ordered regions of the film is the same in all structural modifications and is estimated to be between 1 and 4 cm2/V s.
1-Introduction
The carrier mobility of polymer semiconductors has improved tremendously over the past few years. Field-effect mobility as high as 0.1 cm 2 /V s has recently been measured in regio-regular poly(thiophenes). [1][2][3] Transport characteristics are strongly dependent on the degree of order of the polymer semiconductor at the dielectric interface. Because the structure of the polymer depends on the processing conditions, it is not uncommon to find in the literature widely differing mobility values obtained for nominally the same polymer. In particular, the energy of the dielectric surface prior to the deposition of the polymer, 4-7 the solvent evaporation rate, 2 the molecular weight of the polymer 8 and thermal post-processing of the film 9 all influence the carrier mobility.
There is no general consensus on the mechanism of charge transport in these amorphous or poly-crystalline materials. A complete model of the electrical properties should include a description of the energy distribution of the carriers and how the conduction varies as a function of energy. Disorder-induced localized states are clearly important for the transport, and the essential question is the relation between atomic structure, electronic structure and the transport. Generally, charge transport in disordered materials is described either as hopping between localized states, or trapping and release from localized states into a higher energy mobile state. 10 The degree of structural order may change the mechanism even within the same class of polymer.
Because the electronic structure of semiconducting polymer films is not known experimentally, a simplified model has to be assumed. The model proposed by Bässler 11 assumes that the energy distribution of the carriers is Gaussian due to the random disorder in the material. The standard deviation of the Gaussian (typically 0.1 eV) increases with increasing disorder in the material. To simplify the calculations, electronic structures comprising an exponential tail in the bandgap are often used as well. Tanase et al. 12 have unified these two descriptions by showing that for field-effect transistors (FETs), where the areal charge density in the semiconductor is high, an exponential density-of-states (DOS) is a good approximation of the Gaussian DOS.
The aim of this paper is to explore the relation between structural order and electronic conduction in polythiophene thin film transistors, in order to understand the electronic structure and transport mechanisms. We modify the polymer structure by different thermal processing, and can vary the mobility by a factor of 25. The carrier mobility also depends on the drift field, the carrier density and temperature. [13][14][15][16][17][18][19][20] In field-effect transistors (FETs) operating in the linear regime, the dependence of the mobility on the drift field can be neglected since the geometry of the device generates small lateral fields (~ few 10 4 V/cm) for typical device geometries (channel length>5 µm). We use the dependence of the mobility on carrier density and temperature to determine the transport mechanism and other properties of the polymer, such as its free carrier mobility and the shape of the DOS. In polymers however, transport measurements are limited to a narrow temperature range because of material degradation at moderate temperatures, which makes it difficult to test transport theories. Section 2 describes experimental results, obtained by measuring the characteristics of regio-regular poly(thiophene) transistors as a function of temperature for different processing conditions. Section 3 compares a variable range hopping model with a mobility edge (ME) model with an exponential density of shallow traps. In section 4 we argue that the ME model is preferred, and extract the materials parameters accordingly.
The I-V curves of our devices as a function of temperature are simulated using these parameters and compared to the experimental curves. We estimate the carrier mobility of the mobile states, and show that the variation of mobility with different processing conditions can be attributed to a different extent of trapping in disordered states. The free carrier mobility in all the devices is approximately constant and we estimate it to be between 1 and 4 cm 2 /V s.
2-1 Device fabrication and characterization
Coplanar thin-film transistors (TFTs) were fabricated on a doped Si wafer that acted as a common gate electrode. The gate dielectric was a 100 nm layer of thermally grown oxide coated with a monolayer of octyltrichlorosilane or octadecyltrichlorosilane. 6 The surface treatment of the dielectric improved the mobility of the transistor by approximately three orders of magnitude. The devices were made by spin-coating a solution of poly[5,5'-bis(3-alkyl-2-thienyl)-2,2'-bithiophene)] (PQT-12) onto a chip with pre-patterned bottom Au contacts; PQT-12 is a poly(thiophene) synthesized by the Xerox Research Centre of Canada, and the resulting polymer is regio-regular. The channel lengths varied between 40-100 µm and the widths varied between 500-1000 µm. The semiconductor thickness varied between 20-60 nm.
The TFTs were electrically characterized by measuring transfer and output curves in a vacuum probe station (P<1mTorr) equipped with a Joule-Thomson refrigerating stage for low-temperature measurements down to ~80 K (MMR Technologies). Two measurement modes were used. In the DC mode, the gate electrode was biased continuously during the voltage sweep. In this mode, the measurement of a transfer curve took several minutes.
Bias stress due to trapping of carriers at the semiconductor/dielectric interface was occasionally observed at low temperature after taking measurements in the DC mode.
Detrapping of carriers at room temperature in PQT-12 is relatively fast (~1 s) but can take longer at lower temperatures. 3 In order to effectively guarantee threshold voltage stability over the course of the temperature ramp, we took measurements in the pulsed mode, where only a short (~few ms) pulse is applied to the gate electrode. 22 No bias stress or long-lived threshold voltage shift were observed in the pulsed mode. Low currents (<100 pA) are not easily measured with the pulsed gate method: the subthreshold region and the onset voltage of the devices were therefore accurately determined only in the DC mode.
The TFT characteristics are described by an onset voltage when the current rises rapidly from the off-state leakage value, and a threshold voltage, V T , denoting, ideally, that the FET is fully turned on. The onset voltage is typically very close to 0 V indicating little residual doping in the material. A significant voltage between onset and threshold is a characteristic of disordered materials and is due to the localized states. Particularly at low temperature, it is not easy to identify V T , and the choice of V T affects the apparent mobility. We discuss this issue in more detail below and compare alternative analyses of the data.
2-2 Structural modifications of the poly(thiophene).
The microstructure of the poly(thiophene) material was modified by varying the processing conditions. The TFTs were first characterized immediately after spin-coating and drying the polymer film ("as-spun" devices). The film was then annealed at 80 ºC for one hour followed by 140 ºC for twenty minutes and then cooled to room temperature at approximately 1 ºC/min ("annealed" devices). We also characterized the TFTs after quenching the film from 150 ºC in liquid N 2 ("quenched" devices). This temperature was chosen because it corresponds to the isotropic temperature of the polymer, as confirmed by a blue shift in its absorption spectrum and a rapid degradation of its electrical properties. The electrical degradation was reversible upon slow cooling to room temperature.
These different processing conditions altered the crystallinity of the poly(thiophene) film. X-ray spectra of the as-spun and annealed films are shown in Fig. 1. The as-spun film shows a broad diffraction peak at an interplanar spacing of approximately (18.8±0.3) Å. Indeed, solution-processed poly(thiophenes) are known to spontaneously form polycrystalline films upon drying. 2 After annealing, the main diffraction peak narrows and shifts to slightly smaller spacing (17.8±0.3 Å), and a second order diffraction peak is also detectable in this spectrum. The results are consistent with the expectation that the annealed film has a higher degree of structural order. The quenched film did not display any x-ray diffraction peak, and is apparently amorphous.
The results of the room temperature TFT measurements for the three different structural modifications are shown in Table 1, and were reproducible over several samples. The annealed film has the highest mobility followed next by the quenched and then by the as-spun films, with an overall difference of a factor 20-25. The large difference between the onset and threshold in all the films indicates that the Fermi level moves through a substantial population of shallow donor-like states in the polymer as the devices turn on. It is notable that the quenched devices, whose polymer structure lacks crystalline order, have a higher mobility than the as-spun devices. As discussed further below, the transistor action occurs within about 1nm of the dielectric interface and therefore bulk measurements of structural order do not necessarily correlate with TFT performance. For instance, the chemically functionalized dielectric/semiconductor interface may perturb locally the ordering of the polymer.
2-3 Transport measurements as a function of temperature

At moderate gate voltages, no significant bias stress effects are observed in the DC measurement mode.
The onset voltage of the device remains independent of temperature, as observed by Meijer et al. 23 At the lower temperatures, however, a high gate bias is helpful to obtain reliable current measurements. In these conditions, pulsed measurements are used to avoid bias stress and guarantee the stability of the onset voltage. As the temperature is reduced, the on-current -and consequently the mobility -decreases and the turn-on of the device is slower, as shown in Fig. 2a. There is an increase in the sub-threshold slope, and the threshold voltage extracted from the linear regime appears to become more negative with decreasing temperature (see Fig. 2b).
As a first approximation, we extracted the mobility from the linear regime fit of the drain current as a function of gate voltage. 25 The temperature dependence of the mobility of "as spun", "annealed" and "quenched" devices is shown in Fig. 3. In spite of the large differences in room temperature mobility, the general shape of all the curves is the same.
The temperature dependence of the mobility of PQT-12 transistors is not monotonic. We interpret the plateau between 220 K and 250 K and the sudden mobility increase at T>250 K as due to structural relaxation in the polymer. A pronounced hysteresis in the current vs. temperature measurements is also observed in this region. A more detailed analysis of this behavior will be published elsewhere. At T<200 K, the temperature dependence of the mobility follows an Arrhenius-like law, with an apparent activation energy, E A , of 40-60 meV and no hysteresis is observed. This activation energy is compatible with literature values for other high-performance polymer semiconductors 2 and indicates that only the temperature dependence of charge transport is measured in this regime.
Inspection of the transfer curves at T<200 K in Figure 4 reveals that a substantial curvature is present, even at high gate voltage, and therefore that the effective mobility of the device depends on gate voltage in addition to temperature. Dimitrakopoulos et al. 26 verified that the gate voltage dependence in organic TFTs is due to the varying charge density and not the gate field. According to standard FET theory, the mobility in the linear regime, as a function of temperature and charge density, is

µ(N,T) = L I_DS / (W e N V_DS),    (1)

where L and W are respectively the length and width of the TFT channel and N is the charge density accumulated in the channel. When the TFT is in its on-state, the charge density in the channel is N = C_i (V_G - V_T)/e (where C_i is the gate capacitance per unit area, V_G is the gate voltage and V_T is the threshold voltage). The operational definition of the threshold voltage consists of extrapolating the linear portion of the I_DS vs. V_G curve to I_DS = 0, in the limit where a small V_DS is applied. The intercept at I_DS = 0 is equal to V_T. When the mobility depends on charge density, this definition of V_T cannot be applied since I_DS does not depend linearly on V_G. Physically, however, V_T is the gate voltage at which enough charge is induced in the channel to allow the TFT to be strongly in its on-state. In our measurements, the turn-on voltage of the devices did not shift as a function of temperature. As a consequence, the amount of charge in the channel is determined only by the gate voltage and is independent of temperature. Thus, when the transistor is in its on-state, we have at all temperatures

N = C_i (V_G - V_T^0)/e,    (2)

where V_T^0 is the threshold voltage measured at room temperature. Eq. 2 is valid when the device is switched on strongly (i.e. V_G >> V_T^0) and allows us to extract the effective mobility from transfer measurements without having to calculate derivatives of I_DS(V_G).
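As a minimal illustration of how Eqs. (1) and (2) are applied in practice, the following sketch computes the effective mobility from individual points of a transfer curve. The device parameters and data points below are illustrative placeholders, not the values of the TFTs studied here.

import numpy as np

# Sketch of the mobility extraction implied by Eqs. (1)-(2): the areal charge is fixed
# by the gate voltage, N = C_i (V_G - V_T0) / e, and the effective mobility follows
# from the linear-regime drain current. All device values are illustrative.

E_CHARGE = 1.602e-19          # elementary charge (C)
L, W = 60e-6, 800e-6          # channel length / width (m), illustrative
C_I = 3.4e-4                  # gate capacitance per unit area (F/m^2), ~100 nm SiO2
V_T0 = -8.0                   # room-temperature threshold voltage (V), illustrative
V_DS = -2.0                   # small drain bias (V)

def effective_mobility(V_G, I_DS):
    """Return mu_eff (cm^2/Vs) from one (V_G, I_DS) point in the on-state."""
    N = C_I * abs(V_G - V_T0) / E_CHARGE                 # areal hole density (1/m^2), Eq. (2)
    mu = L * abs(I_DS) / (W * E_CHARGE * N * abs(V_DS))  # Eq. (1), in m^2/Vs
    return mu * 1e4                                      # convert to cm^2/Vs

# Example with made-up transfer-curve points (V_G in V, I_DS in A)
for vg, ids in [(-20.0, 2.1e-7), (-30.0, 4.5e-7), (-40.0, 7.4e-7)]:
    print(f"V_G = {vg:6.1f} V  ->  mu_eff = {effective_mobility(vg, ids):.3e} cm^2/Vs")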
An example of µ as a function of 1/T at different V_G, using Eq. 2, is shown in Fig. 5a for the annealed devices. The dependence is Arrhenius-like, with the apparent activation energy decreasing as the gate voltage, and hence the charge density in the channel, increases.
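The Arrhenius analysis itself amounts to a linear fit of ln µ versus 1/T at fixed gate voltage; a small sketch with synthetic data is given below for illustration (the data points are placeholders, not measured values).

import numpy as np

# Sketch of the Arrhenius analysis: at fixed V_G, fit ln(mu) vs 1/T to
# mu = mu_00 * exp(-E_A / kT) and report the apparent activation energy E_A.

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def activation_energy(T, mu):
    """Linear fit of ln(mu) against 1/T; returns (E_A in eV, prefactor mu_00)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(mu)), 1)
    return -slope * K_B, np.exp(intercept)

# Synthetic example: E_A = 50 meV with a little noise
T = np.array([100.0, 120.0, 140.0, 160.0, 180.0, 200.0])
mu = 0.8 * np.exp(-0.050 / (K_B * T)) * (1 + 0.02 * np.random.default_rng(1).standard_normal(T.size))
E_A, mu_00 = activation_energy(T, mu)
print(f"E_A = {1e3 * E_A:.1f} meV, prefactor = {mu_00:.2f} cm^2/Vs")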
3-Charge transport and device models
In this section we fit the temperature dependence data to models of the electronic structure and transport properties of the polymer. The variable range hopping model proposed by Vissenberg and Matters 29 is compared with a multiple-trapping mobility edge (ME) model. 30 In the Vissenberg-Matters model, the field-effect mobility takes the form

µ_FE = (σ_0/e) [π (T_0/T)^3 / ((2α)^3 B_c Γ(1 - T/T_0) Γ(1 + T/T_0))]^(T_0/T) [(C_i V_G)^2 / (2 k_B T_0 ε_s)]^(T_0/T - 1),    (3)

where σ_0 is the conductivity prefactor, α is the wavefunction overlap parameter, kT_0 is the width of the exponential tail of the DOS, ε_s is the dielectric constant of the semiconductor and B_c is a constant (~2.8). The fitting parameters are σ_0, α and T_0. In this model, the mobility is essentially governed by the width of the DOS and the overlap parameter. Formally, the conductivity pre-factor represents the limit of the conductivity as 1/T tends to 0. This model, however, does not apply when 1/T tends to 0; therefore, it is difficult to assign a physical meaning to the conductivity pre-factor. Vissenberg and Matters modeled the temperature dependence of the field-effect mobility of polythienylene vinylene (PTV) and solution-processed pentacene. The fitted DOS tail width that they obtained from their devices was approximately 33 meV (T_0 = 380 K) for both pentacene and PTV, even though the room temperature mobility of these materials differs by over two orders of magnitude. The authors attribute the mobility difference to the different tunneling rates between sites (i.e. to different values of the overlap parameter).
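For illustration, the following sketch evaluates Eq. (3) numerically as written above; the exact form of the expression should be checked against Ref. 29, and all parameter values are placeholders rather than fitted values.

import numpy as np
from scipy.special import gamma

# Sketch evaluating the Vissenberg-Matters field-effect mobility, Eq. (3), in the form
# written above (the exact expression should be verified against Ref. 29). All
# parameter values are illustrative placeholders, not the fitted values of Table 2.

E_CHARGE = 1.602e-19      # elementary charge (C)
K_B = 1.381e-23           # Boltzmann constant (J/K)
EPS_S = 3.0 * 8.854e-12   # semiconductor permittivity eps_s (F/m), assumed
SIGMA_0 = 1.0e6           # conductivity prefactor sigma_0 (S/m), illustrative
ALPHA_INV = 1.0e-10       # wavefunction overlap length alpha^-1 (m)
T_0 = 400.0               # width of the exponential DOS tail, in K
B_C = 2.8                 # critical number for percolation
C_I = 3.4e-4              # gate capacitance per unit area (F/m^2)

def mu_vm(T, V_G):
    """Mobility (m^2/Vs) from Eq. (3) at temperature T (K) and gate bias V_G (V)."""
    t = T / T_0
    percolation = (np.pi / t**3) / ((2.0 / ALPHA_INV) ** 3 * B_C * gamma(1.0 - t) * gamma(1.0 + t))
    charge_term = (C_I * abs(V_G)) ** 2 / (2.0 * K_B * T_0 * EPS_S)
    return (SIGMA_0 / E_CHARGE) * percolation ** (1.0 / t) * charge_term ** (1.0 / t - 1.0)

for T in (100.0, 150.0, 200.0):
    print(f"T = {T:5.0f} K  ->  mu_FE = {1e4 * mu_vm(T, V_G=-30.0):.3e} cm^2/Vs")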
Eq. 3 was used to fit our mobility vs. T data for the three structural modifications of PQT-12, with σ 0 , α and T 0 as free parameters. The best fit to the data is obtained by allowing σ 0 to vary over a wide range. Since the physical meaning of this parameter is questionable, we prefer to fit our mobility vs. T data by varying σ 0 as little as possible.
Acceptable fits are obtained with the parameters listed in Table 2 along with those obtained by Vissenberg and Matters for PTV and pentacene. According to this model, the transport differences in our PQT-12 films are due mostly to differences in the overlap parameter α. Since the overlap parameter is directly related to the wavefunction decay, it is not obvious why it would vary in microstructural modifications of the same polymer.
We thus also attempted to fit our data with this model by keeping α constant (α^-1 = 1 Å).
The fit is visibly worse than in the previous cases. No correlation can be drawn between T 0 , which should be an indication of disorder in the material, and the device mobility (Table 2). Moreover, the conductivity pre-factor varies markedly between the three devices.
We therefore conclude that it is doubtful that our results can be explained by the Vissenberg and Matters hopping model and that a physical meaning can be attributed to the fitted parameters.
3-1 Mobility edge (ME) model
The ME model assumes that there is a defined energy (the mobility edge) in the DOS that separates mobile states from localized states. Trapped carriers become temporarily mobile by thermal excitation to the mobile states. The ME model applies intuitively to polycrystalline materials, where the mobile states are extended band states of the crystallites and the trapped states are located in the disordered regions between the crystallites. Hopping directly between trapped states is a competing transport mechanism. 31 Given our material properties however, at time scales longer than approximately 1 µs, transport through thermal excitation to mobile states is expected to dominate. 31 Thus, our experimental conditions are such that we can neglect direct hopping between trapped states.
In order to apply the ME model, we assume that the DOS of the polymer is described by bands with tails extending into the gap, of the form shown in Fig. 6. The essential property of this DOS is that it varies "slowly" with energy inside the band near the band edge, 10 while it varies exponentially with energy inside the band gap. We define the mobility edge, E = 0, at the top of the band-like states.
The assumption of an exponential tail is justified as an approximation of the Gaussian distribution generally accepted for organic semiconductors. Moreover, completely random disorder is not likely to occur in a polycrystalline material such as PQT-12, which shows texture by X-ray diffractometry, making a fully Gaussian DOS less preferable than in amorphous polymers. Finally, a DOS with an exponential tail has been successfully used to model the temperature dependence of conductivity in other disordered materials such as hydrogenated amorphous silicon. 32 The tail of trap states above the mobility edge (E > 0) is therefore written as

D(E) = (N_tot / E_b) exp(-E / E_b),    (4)

where N_tot is the total concentration of tail states and E_b is the width (in eV) of the exponential tail. Holes in this part of the DOS are trapped and do not contribute to the transistor current.
The exact functional form of the band DOS is not known in polymer semiconductors.
Here we use the simplest form for a free electron gas in three dimensions, where the DOS is proportional to E^(1/2). Holes at energies below the mobility edge (E = 0) are mobile and assumed to have a constant mobility µ_0, while holes located at energies above the mobility edge have zero mobility.
As the gate bias varies, the Fermi level of the semiconductor moves in the DOS as a result of the charge accumulated in the polymer. Taking the simple approximation of a fixed depth of the conducting channel, the Fermi energy E_F in the polymer semiconductor satisfies the following equation:

C_i (V_G - V_T^0)/e = h ∫ D(E) f(E_F, E) dE,    (5)

where h is the channel dimension normal to the dielectric surface (assumed to be 1 nm), f(E_F, E) is the Fermi-Dirac distribution, and D(E) is the DOS of the polymer. Solving Eq. 5 numerically allows us to determine E_F(V_G, T). The concentration of mobile carriers is then

N_mobile = h ∫(E<0) D(E) f(E_F, E) dE,    (6)

and the effective mobility is

µ_eff = µ_0 N_mobile / N.    (7)

3-2 ME model fit to the data

A density of states distribution D(E) based on Eqs. 5-7 that fits the whole set of temperature dependent data is found by successive iterations using N_tot, E_b and µ_0 as fitting parameters. The parameter values are shown in Table 3 and the fit to the temperature dependence of the mobility is shown in Fig. 7a-c, for the three different structural modifications. A good fit to the data is obtained, and the parameters show that the band tail width is the main difference between the structural modifications.
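For illustration, the numerical procedure of Eqs. (4)-(7) can be sketched as follows: build the DOS, solve Eq. (5) for E_F by root finding at each gate voltage and temperature, and weight the band mobility by the mobile fraction. All parameter values below are illustrative placeholders, not the fitted values of Table 3, and the hole occupation function uses an assumed sign convention.

import numpy as np
from scipy.optimize import brentq

# Sketch of the mobility-edge calculation of Eqs. (4)-(7): an exponential tail of trap
# states above the mobility edge (E > 0) plus an E^(1/2) band below it (E < 0).

K_B = 8.617e-5            # Boltzmann constant (eV/K)
E_CHARGE = 1.602e-19      # elementary charge (C)
H_CH = 1e-9               # channel depth h (m)
C_I = 3.4e-4              # gate capacitance per unit area (F/m^2)
V_T0 = -8.0               # room-temperature threshold voltage (V), illustrative
N_TOT = 1e26              # total tail-state concentration N_tot (1/m^3), illustrative
E_B = 0.05                # tail width E_b (eV), illustrative
MU_0 = 2.0                # band (free-carrier) mobility mu_0 (cm^2/Vs), illustrative

E = np.linspace(-0.5, 0.5, 20001)                      # hole energy grid (eV), edge at E = 0
dE = E[1] - E[0]
band = np.where(E < 0.0, np.sqrt(np.abs(E)), 0.0)
band *= N_TOT / (band.sum() * dE)                      # crude normalization of the band DOS
tail = np.where(E >= 0.0, (N_TOT / E_B) * np.exp(-E / E_B), 0.0)   # Eq. (4)
dos = band + tail                                      # D(E) in 1/(m^3 eV)

def hole_occupation(E_F, T):
    """Fermi-Dirac occupation of hole states (traps at E > E_F fill first); assumed convention."""
    x = np.clip((E_F - E) / (K_B * T), -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(x))

def mu_eff(V_G, T):
    """Effective mobility (cm^2/Vs) from Eqs. (5)-(7)."""
    target = C_I * abs(V_G - V_T0) / E_CHARGE / H_CH   # total hole density in channel (1/m^3)
    balance = lambda ef: (dos * hole_occupation(ef, T)).sum() * dE - target   # Eq. (5)
    E_F = brentq(balance, E[0], E[-1])
    n_mobile = (band * hole_occupation(E_F, T)).sum() * dE                    # Eq. (6)
    return MU_0 * n_mobile / target                                           # Eq. (7)

for T in (120.0, 160.0, 200.0, 300.0):
    print(f"T = {T:5.0f} K  ->  mu_eff = {mu_eff(-30.0, T):.3e} cm^2/Vs")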
The above analysis assumes uniform conduction in a fixed channel. In a TFT, however, the effective mobility is a function of the charge density, which decreases as a function of distance from the semiconductor/dielectric interface. In order to improve the model, and to simulate our I-V data, a transistor model was used. 36 The parameters obtained from this fit are also listed in Table 3. The band mobility and the band tail widths are similar to those obtained by the simple ME model.
The difference in the total concentration of tail states, which is less than a factor of 3, reflects the different assumptions of the two models. We conclude that the parameters are not very sensitive to the details of the model assumptions.
4-1 Transport models
The ME model is widely used for inorganic amorphous and polycrystalline materials. 32 While it is easy to justify the existence of a mobility edge separating extended mobile states from localized band tails in amorphous covalently bonded semiconductors (e.g. amorphous silicon) or in polycrystalline inorganic and organic semiconductors, this assumption may be more controversial in polymer films. In conjugated polymers, it is usually assumed that charge transport is limited by π-π intrachain coupling. Charges are delocalized within a conjugation length along the chain but are laterally confined in the polymer chain. A hopping model between chains seems the appropriate mechanism to account for bulk transport. Furthermore, polymers exhibit polaron behavior, in which holes are self-localized by structural relaxation, and the polaron binding energy can be enhanced at distortions in the polymer structure. Polarons are associated with low mobility transport, and with a hopping mechanism having moderately high activation energy.
However, the high mobility regio-regular poly(thiophenes) such as PQT-12 or poly(3-hexylthiophene) (P3HT) do form polycrystalline films with strong π-π coupling within the crystallites. Our X-ray diffraction spectra show that the PQT-12 films certainly have a polycrystalline microstructure. Brown et al. 42 and Österbacka et al. 43 have reported evidence that carriers in such ordered regions are substantially delocalized. The ME and hopping models are actually difficult to distinguish purely based on the fit to the experimental data. The mobility calculated with the hopping model of Eq. 3 has an Arrhenius-like temperature dependence within the restricted temperature range over which measurements of organic TFTs are typically made. Our preference for the ME model is based on the expected transport mechanism of polycrystalline films, and on the unphysical parameter extraction for the hopping model. Since both models share the same functional dependence of mobility on temperature, we applied our model to fit the PTV data used by Vissenberg and Matters, and the results are shown in Table 3. 44 Values similar to those of PQT-12 are obtained for the intrinsic mobility and the total concentration of donor states. The lower mobility of PTV (~2x10^-3 cm2/V s) compared to PQT-12 is primarily attributed to the width of the exponential tail of the DOS. We therefore suggest that the ME model may be more appropriate for PTV.
4-2 Parameter extraction
The ME model discussed here is simplified but nevertheless captures the essential features of the temperature and gate voltage dependence of the transport properties of PQT-12. The key assumptions are that a reasonably sharp mobility edge can be defined in this material 10 and that the DOS can be modeled as a slowly varying distribution of band states followed by a rapidly varying tail. The values of N_tot, E_b, and µ_0 are bound by physical requirements. The free carrier mobility, µ_0, extracted from the model, is larger than the intersection point of the Arrhenius-like fits of the data at 1/T=0 (~0.6 cm2/V s) because the statistical shift (i.e. ∂E_F/∂T) must be negative in this case, where E_F moves towards the lower DOS as the temperature is increased. 30 In order to correctly fit the experimental data, E_b must be of the order of E_A, the activation energy in the Arrhenius-like fit of the experimental results. Finally, N_tot must be such that, at the charge concentrations imposed by the experimental gate voltages, the Fermi level stays in the exponential tail of the distribution in order to ensure an Arrhenius-like temperature dependence of the effective mobility. Although we are obviously not able to explore the whole parameter space and cannot guarantee that the parameters shown in Table 3 are a unique set, their values satisfy the previously listed requirements and are physically reasonable. Finally, attempts to fit the data using intrinsic mobilities higher than ~4 cm2/V s did not yield acceptable fits. The ME model indicates that the free carrier mobility is ~1 cm2/V s, and the uncertainties of our model allow us to give only an estimate of ~1-4 cm2/V s for the upper limit of the mobility of the PQT-12 films. It is interesting to consider if the measured value corresponds to the mobility in a "single crystal" transistor made with PQT-12, or whether grain size scattering effects limit the free carrier mobility in the polycrystalline films. To date, the highest mobility measured in an organic single crystal is 8 cm2/V s. 45 The intrinsic carrier mobility is the upper bound of the field-effect mobility of all PQT-12 films whose ordered regions share the same structure as the ones examined here.
In order to obtain the highest mobility polymer TFT however, it is not sufficient to design the molecular structure with the objective to maximize the intrinsic mobility. The microstructure of the film must also be such as to reduce the band tail energetic disorder as much as possible in order to minimize the number of trapped carriers.
4-3 Structural modifications
The room temperature mobility of the devices examined here varied by a factor of 20-25, and even more at low temperature. The ME model indicates that the difference originates mostly from the different widths of the exponential tail of the DOS. In other words, for our material and within our range of processing conditions, the mobility is governed essentially by the amount of trapping in shallow donor-like tail states. Indeed, the intrinsic mobility is approximately the same for all the devices examined here. A wider band tail is associated with a more disordered material that will therefore have a lower mobility. This result agrees with the view that at the dielectric interface the films are all made of ordered regions having approximately the same structure, which governs the intrinsic mobility, and disordered regions, which govern the trap states. The three films processed differently have different distributions of trap states, which lead to different mobility. Nevertheless, the films must have essentially the same structure in the ordered regions since they have the same intrinsic mobility. It is not surprising that the as-spun film has the broadest DOS tail, since we would expect to find the largest variations in local structure in a film whose structure has not been allowed to equilibrate through an annealing step, in agreement with the x-ray diffraction data.
It was noted earlier that the X-ray diffraction of the quenched film indicates an amorphous structure, even though the mobility is higher than the as-spun film. The parameters extracted from the ME model suggests that the interface region where transport occurs actually has an ordered polycrystalline structure. It seems to us reasonable to suppose that the polymer material immediately adjacent to the dielectric has an ordered structure largely determined by the interface, even if the bulk material is disordered by the high temperature anneal and subsequent rapid quenching.
5-Conclusions
The field-effect mobility of polymer TFTs made with PQT-12 as the active material varies strongly with processing conditions. According to the ME model, however, these differences are mostly due to differences in the width of the exponential tail of the DOS, and not in the intrinsic mobility in the ordered regions of the material: the lower the mobility, the wider the donor tail width.
The width of the exponential tail of the DOS is usually associated with the degree of structural order of the material. We estimate the intrinsic mobility of our PQT-12 films to be ~1-4 cm2/V s.
Table I: Typical room temperature electrical characteristics of the PQT-12 films. Occasionally, devices stored for a long time in an inert atmosphere showed a smaller subthreshold slope (~0.5 V/dec) and consequently a lower V_T (~-5 V) than the one shown in Fig. 2. | 2019-04-14T02:06:11.247Z | 2004-07-20T00:00:00.000 | {
"year": 2004,
"sha1": "1d6320aca3ba6bbe9bbca760f6b660e601cdb0dc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0407502",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1d6320aca3ba6bbe9bbca760f6b660e601cdb0dc",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
252751079 | pes2o/s2orc | v3-fos-license | Roles for c-Abl in postoperative neurodegeneration
The nonreceptor tyrosine kinase c-Abl is inactive under normal conditions. Upon activation, c-Abl regulates signaling pathways related to cytoskeletal reorganization. It plays a vital role in modulating cell protrusion, cell migration, morphogenesis, adhesion, endocytosis and phagocytosis. A large number of studies have also found that abnormally activated c-Abl plays an important role in a variety of pathologies, including various inflammatory diseases and neurodegenerative diseases. c-Abl also plays a crucial role in neurodevelopment and neurodegenerative diseases, mainly through mechanisms such as neuroinflammation, oxidative stress (OS), and Tau protein phosphorylation. Inhibiting expression or activity of this kinase has certain neuroprotective and anti-inflammatory effects and can also improve cognition and behavior. Blockers of this kinase may have good preventive and treatment effects on neurodegenerative diseases. Cognitive dysfunction after anesthesia is also closely related to the abovementioned mechanisms. We infer that alterations in the expression and activity of c-Abl may underlie postoperative cognitive dysfunction (POCD). This article summarizes the current understanding and research progress on the mechanisms by which c-Abl may be related to postoperative neurodegeneration.
Background
Neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS) and frontotemporal dementia, are common central nervous system (CNS) diseases [1][2][3][4][5]. These diseases lead to the gradual impairment of CNS functions, such as activity, memory, learning, judgement and coordination. In addition, surgery under anesthesia often leads to postoperative cognitive dysfunction (POCD) in the elderly, which is more common after cardiac surgery than after other types of surgery [6]. Developmental neurotoxicity induced by general anesthetics can cause acute widespread neuronal cell death, leading to long-term memory and learning deficits in infants and young children [7][8]. The incidence of POCD decreases over time, with the rate being the highest (30% to 70%) at hospital discharge, 12% to 21% 3 months after anesthesia and noncardiac surgery, 20% to 30% 6 months after surgery, and 15% to 25% after 12 months of follow-up [9][10][11][12][13]. POCD can prolong hospitalization and rehabilitation time, increase the incidence of disability, and decrease quality of life and survival, thus resulting in a heavy economic burden on the health care system [5,[14][15]. Common neurodegenerative diseases and POCD not only share characteristic symptoms but also show similar brain structural changes, including cerebral white matter changes, central neuroinflammation and neuronal apoptosis [16][17][18][19][20].
Brain aging is a complex process that affects everything from the subcellular level to the organ level, starting early in life and accelerating with the aging process [21]. Morphologically, brain aging is mainly characterized by brain volume loss, ventricular enlargement, cortical thinning, loss of gyrification and white matter degradation. Pathophysiologically, brain aging is associated with neuron atrophy, dendritic degeneration, metabolic slowing, microglial activation, demyelinating disease, small vessel disease, and the formation of white matter lesions (WML) [21]. WML are caused by cerebral hypoperfusion, are observed in the elderly, are related to cognitive decline, and are prevalent in AD patients [22]. Multiple sclerosis (MS) is a chronic inflammatory demyelinating disease of the CNS that causes focal WML and diffuse neurodegeneration throughout the brain. The extent of MS lesions is related to inflammatory processes [23]. Furthermore, apoptosis is a form of programmed cell death that plays a key role in nervous system development and in chronic neurodegenerative diseases, including AD and MS [24][25][26]. Neuroinflammation and neurodegeneration can be caused by surgery and general anesthesia. Patients with POCD have significantly more white matter lesions and greater gray matter loss (medial temporal lobe) [27]. Different anesthetics and neuroinflammation can cause neuronal apoptosis and increase the incidence of POCD [28][29][30].
c-Abl was first discovered in the Abelson murine leukemia virus by Ozanne et al. in 1982 [31]. c-Abl (ABL, Abl1) and the Abelson (ABL)-related gene (Arg, Abl2) have been identified as members of the c-Abl family of tyrosine kinases. c-Abl family members are highly conserved among different species and are involved in many cell regulatory processes, including regulation of the actin cytoskeleton, the cell cycle, stress-induced apoptosis and cell cycle arrest [32]. Under normal circumstances, apoptosis is an important physiological process that maintains the stability of the internal environment and ensures normal tissue development and growth. Previous research has found that activated c-Abl can participate in apoptosis by interacting with multiple factors [33]. The gene is inactive under normal conditions. When activated and overexpressed, this kinase can cause cell apoptosis, cell cycle alterations and pathological changes that may be related to neurodegenerative diseases [32][33][34][35][36][37][38][39][40]. Other studies have shown that Abl family kinases, especially c-Abl, play a critical role in neurodevelopmental and neurodegenerative diseases. Selective inhibition of c-Abl expression and activity has neuroprotective effects [32][33][34][35][36][37][38][39]. In addition, abnormally activated c-Abl is associated with a variety of neurodegenerative diseases. In studies using AD and PD models, it was found that c-Abl inhibitors can promote the clearance of amyloid and reduce neuroinflammation, two key drivers of nerve cell death [41]. Therefore, a large amount of literature has proposed that this kinase is crucial in regulating neurodegenerative diseases. The main mechanism of anesthesia-induced POCD is very similar to the mechanism by which c-Abl is involved in the pathogenesis of neurodegenerative diseases.
c-Abl activation regulates the neuronal death response to Aβ fibrils. Intraperitoneal administration of imatinib rescued cognitive decline, Tau phosphorylation, and caspase-3 activation in neurons surrounding Aβ deposits [42]. Imatinib reduces plasma Aβ oligomers and brain features such as oligomer accumulation, neuroinflammation and cognitive deficits. These results support a role for c-Abl in the Aβ accumulation of neurodegenerative diseases and the efficacy of imatinib in the treatment of these diseases [43]. In addition, the c-Abl-Drp1 signaling pathway regulates oxidative stress-induced mitochondrial fragmentation and cell death, which may be a potential target for the treatment of neurodegenerative diseases [44]. Furthermore, previous studies have found that the intravenous anesthetic propofol significantly reduces c-Abl expression, but reducing c-Abl expression with propofol did not impair learning or memory function [45]. This study illustrates the involvement of c-Abl in the POCD process.
Therefore, this paper hypothesizes that the expression and activation of c-Abl are abnormally elevated in various neurodegenerative diseases (AD, PD, ALS, MS, POCD, etc.), mainly as a result of abnormalities in the Tau and Aβ proteins caused by neuroinflammation and oxidative stress (OS). In addition, various c-Abl blockers (imatinib, nilotinib, dasatinib, radotinib and other drugs) can effectively reduce abnormal proteins and ameliorate cognitive dysfunction. As shown in Figure 1, this article summarizes the current understanding and research progress on the mechanisms by which c-Abl may be related to neurodegenerative diseases.
Neuronal cell development
ABL and Arg tyrosine kinases play an important role in the development and function of the nervous system. Abl1 and Abl2 act downstream of multiple cell surface receptors to regulate cell growth and transformation. They may have unique and common functions in the development of the CNS. Changes in the phosphorylation and molecular weight of ABL proteins that occur during the maturation of the CNS suggest that Abl1 and Abl2 may be involved in the signaling events that are responsible for regulating neural cell development [46]. Koleske and others found that ABL and Arg kinases are expressed at the highest levels in the neuroepithelium at embryonic day 9 (E9) [47]. Arg is more abundant in the adult mouse brain, especially in synapse-rich regions. Arg-deficient mice develop normally but show several behavioral abnormalities, indicating the existence of brain defects in these mice [47]. Embryos lacking both ABL and Arg develop neurodevelopmental defects [47]. ABL kinase-mediated signal transduction from different cell surface receptors also regulates cell proliferation and survival during development and homeostasis [48]. In addition, during normal axonal development, axonal growth is promoted by the binding and interaction of kinesin-1 with c-Abl. The c-Jun N-terminal kinase (JNK)-interacting protein-1 (JIP1) is an important regulator of axonal development and a key target of the c-Abl-dependent pathway in controlling axonal growth [49]. In addition, during neurite growth, c-Abl binds to and activates cyclin-dependent kinase 5 (CDK5), affecting neuronal migration and growth [35]. In addition to its typical functions in the pathogenesis of leukemia, c-Abl is also considered to play an important role in neuronal development, neuronal migration, axon extension and synaptic plasticity [37][38][39]. Miller et al. found that c-Abl and ataxia-telangiectasia mutated (ATM) are very important for development and survival, especially after genotoxic stress, and that they have obvious selectivity for the developing nervous system [50]. In addition, c-Abl functionally interacts with p53 during development, and mice lacking them are unlikely to be viable [51]. Therefore, c-Abl is associated with a variety of cellular processes, including the regulation of cell growth and survival, as c-Abl-deficient mice show embryonic or neonatal lethality [52,53].
Neurodegenerative diseases
A large number of studies have found that c-Abl is activated in human neurodegenerative diseases, especially AD [32][33][34][35][36][37][38][39][40][41]. Bowser's team found that phosphorylation of c-Abl at Y412 and granulovacuolar degeneration (GVD) in the brain are common in patients with AD [54]. Studies in PD have also found that c-Abl expression is increased in the striatum and that its tyrosine phosphorylation is also increased [55]. However, the pathogenesis of AD and POCD is still unclear. The possible mechanisms underlying these diseases may involve the following processes. First, some scholars believe that the amyloid cascade leads to the formation of Aβ fibrils. Increased aggregation of these fibrils leads to neuroinflammation, which in turn alters the physiology of neurons and induces oxidative stress in these cells, ultimately leading to kinase activation, the formation of tangles and neuronal loss. In addition, a study suggested that the mechanism of AD may be related to an abnormal neuroinflammatory response caused by an initial injury [56]. The signaling pathways activated by c-Abl may be related to growth factors, cell adhesion and OS [46][47].
Oxidative stress
Upon aging and disease development, the body's ability to deal with OS and deoxyribonucleic acid (DNA) damage during normal cell processes is weakened, resulting in the accumulation of oxygen free radicals and DNA damage. OS can regulate neurodegeneration through various mechanisms, including protein- and lipid-related processes, Aβ deposition, cytokine production, mitochondrial dysfunction, proteasome dysfunction, the formation of advanced glycation products, oxidation of nucleic acids and activation of glial cells [47]. c-Abl generally exists in cells in inactive and activated states, with the activation of c-Abl being tightly regulated by intramolecular bonds. c-Abl protein complexes can also be linked to the membrane through an amino-terminal myristoyl group. Reactive oxygen species (ROS) may activate c-Abl by triggering ATM kinase activation through OS. In addition, a subtype of protein kinase C can activate and phosphorylate c-Abl under hydrogen peroxide (H2O2) stimulation [41,46,49]. Some authors even believe that ROS can directly activate c-Abl [40]. c-Abl can eventually cause cell dysfunction through DNA damage and increased levels of free radicals under stress conditions [32]. Alvarez et al. found that c-Abl expression is upregulated in response to OS and the presence of Aβ fibrils in cultured neurons [57]. In addition, c-Abl can be activated under OS, dopaminergic stress, and genotoxic stress, and exposure to these stresses leads to the loss and destruction of neuronal cells [58][59]. Therefore, OS is considered a potential therapeutic target for the prevention and treatment of neurodegenerative diseases through the regulation of oxygen free radicals or the alleviation of their harmful effects [60].
Tau protein phosphorylation
Tau is a microtubule-associated protein and a major component of neurofibrillary tangles (NFTs). It is normally involved in the formation of microtubules and in cytoskeletal dynamics, and abnormal phosphorylation of Tau protein can cause microtubule entanglement and instability. Hyperphosphorylation of wild-type Tau protein results in the formation of NFTs, which is one of the hallmark pathologies of individuals with AD [61]. Excessive accumulation of toxic Tau protein can cause the death and dysfunction of nerve cells and glial cells, thereby causing disease symptoms. Studies have shown that the exposure of cultured neuronal cells to various Aβ peptides can activate tyrosine kinases, in turn causing tyrosine phosphorylation of Tau protein. The E3 ubiquitin ligase Parkin plays a neuroprotective role in AD and PD by scavenging misfolded proteins, such as Aβ peptides and phosphorylated Tau protein [41]. However, during the OS response in neurodegenerative diseases, ABL translocates to mitochondria, where it phosphorylates Parkin, causing loss of its E3 ubiquitin ligase activity. This causes abnormal accumulation of Aβ peptides and Tau protein and is responsible for neuronal apoptosis and cognitive dysfunction [41]. Furthermore, previous studies have found that oligomeric Aβ peptides are present in the brain cells of AD patients during the induction of Tau protein hyperphosphorylation [62]. In addition, Derkinderen and colleagues pointed out that c-Abl phosphorylates Tau protein at Y394 [63]. It was also found in the AD brain that activated ABL is present in granular structures of hippocampal neurons.
Neuroinflammation
Inflammatory mediators are detected in brain sections from AD and PD patients, and neuroinflammation may be one of the causes of these neurodegenerative diseases [64]. Numerous studies have also found that the etiology of AD and PD may be related to chronic neuroinflammation [65][66]. In studies of two transgenic mouse lines (AblPP/tTA mice and ArgPP/tTA mice), it was found that c-Abl overexpression can lead to neuronal loss and a neuroinflammatory response [65]. Neuroinflammation can be triggered by a variety of biological mechanisms, including OS and glial responses. Neuroinflammatory mediators, such as cytokines and prostaglandins, play an important role in the development of neurodegenerative diseases [67][68][69]. In the early stage of neuroinflammation in AD, a vicious cycle of microglial activation, proinflammatory factor release and neuronal damage may occur [70]. Some authors have pointed out that c-Abl activation in neuronal cells can lead to neurodegenerative and neuroinflammatory changes [66]. Treatment of AD model mice with a c-Abl blocker led to the clearance of Aβ peptides, reduced the number of astrocytes and dendritic cells, and regulated the distribution of cytokines and chemokines [71]. It has also been indicated that continuous overexpression of c-Abl in neurons can cause the degeneration of neuronal cells in the hippocampal region. In a follow-up study, it was found that this pathophysiological process is mainly caused by transient and marked changes in the cell cycle, associated with protein and DNA replication in the olfactory bulb and hippocampal region, and by activation of the signal transducer and activator of transcription 1 (STAT1) signaling pathway, which is crucial for the regulation of c-Abl-induced neuroinflammation and neurodegenerative diseases [66]. Wu et al. found that c-Abl can abnormally activate P38 and lead to neuronal death in a neuroinflammatory environment [72]. P38 is a member of the mitogen-activated protein kinase (MAPK) family and plays an important role in inflammation, neurodegeneration and cell death. Therefore, according to all of these findings, we speculate that the activation of c-Abl in neurons leads to pathological changes related to neuroinflammation.
Roles of c-Abl blockers
Imatinib, nilotinib, and bosutinib have been shown to inhibit the production of proinflammatory mediators (TNF-α, IL-1β, IL-6, iNOS, COX-2, NLRP3), thereby reducing the recruitment of inflammatory cells to the central nervous system [71][72][73][74]. AD is the most common neurodegenerative disease and the main cause of dementia. Its principal neuropathological hallmarks are the accumulation of extracellular neuritic plaques composed of Aβ peptides and the formation of intracellular NFTs in affected brain regions. Neuroinflammation also plays an important role in the pathogenesis and progression of AD [74]. Infection, trauma, ischemia, and toxins increase levels of proinflammatory cytokines, including TNF-α, IL-1β, IL-6 and IL-18, and chemokines such as C-C motif chemokine ligand 1 (CCL1), CCL5 and C-X-C motif chemokine ligand 1 (CXCL1). The release of pro-inflammatory molecules can lead to synaptic dysfunction, neuronal death and inhibition of neurogenesis. During the progression of AD, memory loss and cognitive decline are accompanied by the degeneration of specific neurons in the hippocampus and cerebral cortex. Studies have shown that phosphorylation of the tyrosine kinase c-Abl at Y412 is significantly increased in the hippocampus and entorhinal cortex of AD patients [75]. Moreover, Aβ can enhance the activity of c-Abl, and this kinase is involved in regulating the death of neurons [57]. c-Abl blockers have also been shown to significantly reduce Tau protein phosphorylation in transgenic AD animal models. It has also been found in vitro that Aβ can significantly increase c-Abl expression and phosphorylation of Tau protein in nerve cells [76]. This indicates that the expression of c-Abl is significantly increased during the pathogenesis of AD and that kinase blockers can significantly reduce the accumulation and expression of abnormal proteins.
In addition, similar studies have shown that the c-Abl blocker imatinib can significantly reduce plasma Aβ protein levels [77]. Intraperitoneal injection of the drug into AD model mice not only ameliorated cognitive decline but also reduced neuronal apoptosis and Tau protein phosphorylation [78]. Furthermore, a study of AD-related disease in humans showed that the level of c-Abl in the brains of AD patients was markedly higher than that in controls. Abl-pY412, the activated form of c-Abl phosphorylated on tyrosine residue 412, is increased in the early stage of AD; in the late stage of the disease, its level is increased mainly in the hippocampal area. Abl-pT735, an alternative form of c-Abl phosphorylated on threonine residue 735, is increased specifically in the hippocampal region. Furthermore, c-Abl and phosphorylated Tau protein interact in the brains of AD patients, and their levels are correlated. These results show that the functional state of c-Abl differs at different stages of AD and that phosphorylated c-Abl and Tau protein interact in the pathogenesis of AD [54].
PD is the second most common neurodegenerative disease in the elderly. Studies have found that Parkin is a substrate of c-Abl and that c-Abl specifically phosphorylates Parkin at Y143 [79][80]. A major pathway in this disease is the direct phosphorylation of α-synuclein at Y39 by c-Abl, which leads to abnormal aggregation of α-synuclein and a reduction in Parkin expression. In addition, by phosphorylating Parkin at Y143, c-Abl inactivates its E3 ubiquitin ligase activity [81][82]. These changes can lead to an abnormal increase in the levels of the toxic proteins zinc finger protein 746 (PARIS) and aminoacyl-tRNA synthetase complex-interacting multifunctional protein 2 (AIMP2) [81][82]. Moreover, previous studies have demonstrated that virus-mediated PARIS overexpression in transgenic mice leads to c-Abl activity-dependent PD features such as dyskinesia, dopaminergic neuron loss and neuroinflammation [83]. Nilotinib, a c-Abl blocker, can significantly reduce c-Abl activity and Parkin levels, reduce neuronal apoptosis and improve cognitive function [84]. Wu et al. [85] found that, in BV2 microglia and in a mouse model of lipopolysaccharide (LPS)-induced brain neuroinflammation, nilotinib reduced TNF-α, IL-1β, IL-6, iNOS, COX-2 and other proinflammatory factors in BV2 cells and significantly inhibited LPS-induced neuroinflammation. In addition, tyrosine hydroxylase (TH) is an important player in PD, and nilotinib increased the number of TH- and Nissl-positive neurons in these PD models. It has also been found that dopaminergic neuron loss is significantly reduced in c-Abl knockout animals, consistent with the effect of c-Abl blockers [80]. In vitro, radotinib protected against the mitochondrial impairment caused by α-synuclein and reduced the formation of α-synuclein inclusions; in vivo, the drug effectively reduced dopaminergic neuron loss and neuroinflammation and improved cognitive function [86]. The intracellular inflammasome complex is involved in the recognition and execution of host inflammatory responses. Studies using pharmacological inhibition of c-Abl found that dasatinib reduced inflammasome activation, mitochondrial oxidative stress and LPS-induced microglial activation [87]. A related in vitro study also indicated that c-Abl-mediated microglial activation may be an important source of inflammatory mediators [88]. In addition, many studies have shown that the activity and expression of c-Abl are significantly increased in the brains of PD patients. Kinase blockers can improve brain function, reduce neuronal death, inhibit CDK5 phosphorylation, regulate α-synuclein clearance, and inhibit Parkin phosphorylation [89][90][91][92][93]. Beyond animal experiments, clinical studies have found that treatment of patients with moderate or severe PD with nilotinib for 6 months can significantly alleviate cognitive symptoms [94]. Finally, the c-Abl kinase inhibitor PD180970 has been reported to reduce Toll-like receptor 4-mediated NF-κB activation and to inhibit the release of pro-inflammatory cytokines such as IL-6 and monocyte chemoattractant protein-1 (MCP-1) [95].
In addition to these common neurodegenerative diseases, c-Abl is also associated with Niemann-Pick disease type C (NPC), a fatal autosomal recessive disorder characterized by the accumulation of free cholesterol and sphingolipids in the endolysosomal system. c-Abl is activated and triggers neuronal apoptosis in both in vitro and in vivo NPC models [96][97]. Klein et al. found that the c-Abl/p73 pathway is related to neurodegeneration in NPC and that a c-Abl blocker can delay neurodegeneration in this disease [96]. In addition, in other studies of NPC models in neurons, the c-Abl/histone deacetylase 2 (HDAC2) signaling pathway was found to be involved in the regulation of neurons, and inhibition of c-Abl may be a pharmacological strategy for preventing the adverse effects of elevated HDAC2 levels in NPC patients [98]. c-Abl is also involved in ALS, which is characterized by motor neuron death. In ALS, c-Abl signaling is triggered through mitochondrial alteration-mediated ROS production [99]. Some studies suggest that c-Abl is a therapeutic target for ALS, and the c-Abl blocker dasatinib has been found to have neuroprotective effects against this disease in vitro and in vivo [100]. Small interfering RNA (siRNA)-mediated knockdown of c-Abl attenuated the production of proinflammatory mediators in LPS-induced glial cell cultures [101].
Conclusions
The expression and activation of c-Abl are abnormally elevated in various neurodegenerative diseases (AD, PD, NPC, ALS, etc.), mainly as a result of neuroinflammation, OS, and abnormal Aβ and Tau proteins. In addition, various c-Abl blockers (nilotinib, imatinib, dasatinib, radotinib and other drugs) can effectively reduce abnormal protein levels and ameliorate cognitive dysfunction. The main mechanisms of anesthesia-induced POCD closely resemble the mechanisms by which c-Abl is involved in the pathogenesis of neurodegenerative diseases. Therefore, we hypothesize that anesthesia-induced POCD may also be related to abnormal activation or increased expression of c-Abl following exposure to anesthetic drugs. c-Abl blockers, which may have preventive effects against POCD, may thus be new options for the treatment of postoperative neurological dysfunction caused by surgery and anesthesia. However, high-quality studies confirming the specific role and potential mechanisms of c-Abl in dementia and cognitive decline after anesthesia are lacking. Nevertheless, c-Abl is a very promising target for perioperative intervention and treatment of POCD.
Facts
• c-Abl plays a crucial role in neurodegenerative diseases, mainly through mechanisms such as neuroinflammation, oxidative stress (OS) and Tau protein phosphorylation.
• Blockers of c-Abl may have good preventive and therapeutic effects on postoperative neurodegeneration.
• This article summarizes the current understanding and research progress on the mechanisms by which c-Abl may be related to postoperative neurodegeneration.
Open questions
• Does c-Abl inhibition have anti-inflammatory and neuroprotective effects?
• Through which mechanisms is c-Abl related to POCD?
Author contributions
LF, SF, YY, YL, LX, YZ and LL contributed to the design of the study, the review of the literature and the drafting of the manuscript. All authors have read and approved the manuscript.
Availability of supporting data
The datasets used and/or analysed during the current study are not publicly available. All data are available from the corresponding author upon reasonable request.
"year": 2022,
"sha1": "e8c9339c4f41e21ec8d6b19351a534b97748bffe",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e8c9339c4f41e21ec8d6b19351a534b97748bffe",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Near infrared spectroscopy (NIRS) of the thenar eminence in anesthesia and intensive care
Near infrared spectroscopy of the thenar eminence (NIRSth) is a noninvasive bedside method for assessing tissue oxygenation. The NIRS probe emits light with several wavelengths in the 700- to 850-nm interval and measures the reflected light mainly from a predefined depth. Complex physical models then allow the measurement of the relative concentrations of oxy and deoxyhemoglobin, and thus tissue saturation (StO2), as well as an approximation of the tissue hemoglobin, given as the tissue hemoglobin index. Here we review current knowledge on the application of NIRSth in anesthesia and intensive care. We performed an analytical and descriptive review of the literature using the terms “near-infrared spectroscopy” combined with “anesthesia,” “anesthesiology,” “intensive care,” “critical care,” “sepsis,” “bleeding,” “hemorrhage,” “surgery,” and “trauma,” with particular focus on all NIRS studies involving measurement at the thenar eminence. We found that NIRSth has been applied as a clinical research tool to perform both static and dynamic assessment of StO2. Specifically, a vascular occlusion test (VOT) with a pressure cuff can be used to provide a dynamic assessment of the tissue oxygenation response to ischemia. StO2 changes during such induced ischemia-reperfusion yield information on oxygen consumption and microvascular reactivity. Some evidence suggests that StO2 during VOT can detect fluid responsiveness during surgery. In hypovolemic shock, StO2 can help to predict outcome, but not in septic shock. In contrast, NIRS parameters during VOT increase the diagnostic and prognostic accuracy in both hypovolemic and septic shock. Minimal data are available on static or dynamic StO2 used to guide therapy. Although the available data are promising, further studies are necessary before NIRSth can become part of routine clinical practice.
Introduction
Oxygen delivery (DO 2 ) and consumption (VO 2 ) are often disturbed in critically ill patients [1]. Such disturbances may lead to pathological changes in tissue oxygenation. Thus, monitoring of tissue oxygenation appears desirable. Invasive monitoring of systemic DO 2 and VO 2 has been used in intensive care medicine for decades [2]. In contrast, no method for assessing tissue oxygenation has yet gained widespread clinical use. This is unfortunate, because tissue oxygenation may reflect changes in the microcirculation, a similarly important target for therapy during major surgery and in the critically ill. Disturbances in the microcirculation are common and well documented in hemorrhage and critical illness [3,4], and they logically relate to tissue oxygenation. Thus, monitoring of tissue oxygenation may provide useful information not only about the state of tissue oxygenation itself but probably also about the state of the microcirculation. A potentially useful method to monitor tissue oxygenation may be offered by near infrared spectroscopy (NIRS)-based technology.
Although the concept of NIRS has been available since the second half of the 20th century, its main initial application was chemical analysis [5,6]. Since the end of the 1970s [7], numerous studies have been published on this method [8,9]. NIRS offers real-time noninvasive monitoring of oxy and deoxyhemoglobin in tissues within a few centimeters of the skin (Figure 1). Furthermore, so-called dynamic values, i.e., values registered during short occlusion of the vascular supply of the area under assessment, can be measured. These dynamic values may give additional information on local VO 2 and possibly on the state of microcirculatory blood flow [10]. Some NIRS variables also correlate with invasively monitored central circulatory variables [11]. More recently, owing to device development, availability, and marketing, clinical research on the utility of NIRS has focused on the specific methodology of NIRS of the thenar eminence (NIRS th ).
Clinical application of NIRS th is relatively new, in a field where no "gold standards" exist. Thus, clinicians considering the use of NIRS th need to understand the principles and evidence behind it and appreciate the many areas of uncertainty that surround its application. In this review, we assess several of these aspects and suggest studies to address areas of controversy.
The principle of clinical NIRS is to noninvasively measure the attenuation of light by hemoglobin, where the emitted light is in a wavelength range longer than visible light [12,13]. NIRS utilizes a narrower spectrum of wavelengths than pulse oximetry, which penetrate deeper into the tissue [14]. Furthermore, whilst NIRS characterizes the total tissue oxy and deoxyhemoglobin in a quantitative and qualitative fashion [13], generating information on oxygen supply and demand, pulse oximetry monitors only hemoglobin in the arterial (pulsatile) part of the local circulation [15].
The near infrared (NIR) light spectrum ranges from 700 nm to 1000 nm. For clinical applications of NIRS, wavelengths of approximately 700 to 850 nm are used. This interval stretches on either side of the isosbestic point of hemoglobin, i.e., the wavelength at which the absorption of light by oxy and deoxyhemoglobin is identical. This wavelength interval maximizes the difference between oxy and deoxyhemoglobin and minimizes the influence of other chromophores, such as myoglobin, cytochrome oxidase, melanin, and bilirubin, on the measurements [16]. Fortunately, the impact of myoglobin on NIRS measurement of tissue oxygenation is minimal [17,18], and melanin, due to its superficial localization, is not a major issue in this regard. Increased conjugated bilirubin levels do influence measurements by dampening the signal; however, trends can still be followed even in jaundice [19].
Physical background
NIRS technology is based on sophisticated physical models, which are greatly simplified in the description below. The attenuation of light in a sample or tissue is proportional to the pathlength of the light and the absorption coefficient of the chromophore, according to the physical principles described by the Lambert-Beer law [20]. The absorption coefficient of a compound in this equation is the product of the concentration of the compound and its specific extinction coefficient. Thus, if the attenuation of NIR light is measured and all other components of the equation are known, the concentration of the chromophore, e.g., oxyhemoglobin, can be calculated. Unfortunately, because the pathlength of light varies due to reflection and interference in a complex milieu of different tissues, absolute concentrations are difficult to estimate. However, the pathlength of light is more or less constant and the extinction coefficients of the common chromophores are known physical quantities. Thus, changes in the attenuation of light are directly proportional to relative changes in the concentration of the chromophore. Absolute changes in concentration can be approximated by mathematical algorithms that model the light pathlength in a tissue.
Because the absorption coefficient is directly proportional to the concentration of a chromophore in the tissue studied, and the extinction coefficient of the compound is constant, estimating the absorption coefficient yields approximations of the absolute chromophore concentration. This is possible through advanced modeling of the behavior of NIR light in tissues and the technical possibility of measuring at several NIR wavelengths [21]. Yet, given the large number of assumptions and approximations in the theoretical basis of NIRS, trends in the different NIRS parameters may be considered more robust than absolute values.
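As a concrete illustration of the reasoning above, the sketch below applies a two-wavelength version of the modified Lambert-Beer law to convert measured attenuation changes into relative concentration changes of oxy and deoxyhemoglobin. This is only a minimal sketch: the extinction coefficients, probe spacing and differential pathlength factor are illustrative placeholder assumptions, not constants from any particular NIRS device, and commercial monitors use considerably more elaborate multi-wavelength models.

```python
import numpy as np

# Modified Lambert-Beer law (illustrative sketch, not a device algorithm):
#   delta_A = epsilon @ delta_C * d * dpf
# delta_A : measured attenuation changes at two wavelengths
# epsilon : extinction coefficients [HbO2, HHb] per wavelength (placeholders)
# d       : source-detector separation in cm (assumed)
# dpf     : differential pathlength factor (assumed)

epsilon = np.array([
    [0.39, 1.67],   # 760 nm: [HbO2, HHb]  (illustrative values)
    [1.06, 0.78],   # 850 nm: [HbO2, HHb]  (illustrative values)
])
d = 3.0    # cm
dpf = 4.0

delta_A = np.array([0.012, 0.020])  # example attenuation changes

# Solve the 2x2 linear system for the concentration changes of HbO2 and HHb.
delta_C = np.linalg.solve(epsilon * d * dpf, delta_A)
d_hbo2, d_hhb = delta_C

total = d_hbo2 + d_hhb
print(f"delta[HbO2] = {d_hbo2:.5f}, delta[HHb] = {d_hhb:.5f} (arbitrary units)")
print(f"fraction of the total change attributable to HbO2: {d_hbo2 / total:.2f}")
```

In practice, StO 2 is then reported as the oxyhemoglobin fraction of total hemoglobin derived from such estimates; the sketch mainly illustrates why, without an exact pathlength, the method yields relative rather than truly absolute concentrations.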
Technical considerations
The NIRS probes in current use measure reflected light; thus, the NIR light source is placed beside the light sensor. The distance between the light source and the sensor determines the depth from which the main part of the reflected light is measured. The monitored depth is technically limited by the light energy that can be applied without damaging tissue [21]. The main determinants of the signal are the small vessels of the microcirculation [22]. The brain [23], kidney [24], lower extremity [25], brachioradialis muscle [26], and thenar eminence [27] are all possible sites for bedside NIRS monitoring. The advantage of the thenar eminence compared with other sites, in terms of minimizing variability, is the relatively thin fat tissue over the muscle. Additionally, fibrous strands in its subcutaneous tissue limit the extent of edema formation, providing the best possible setting for muscle tissue saturation (StO 2 ) measurement even in obese and critically ill patients [28]. Owing to their anatomy, both the brachioradialis muscle and the muscles of the thenar eminence can easily be subjected to the vascular occlusion test (see below).
Derived parameters: the vascular occlusion test
Assessing the ratio of oxy and deoxyhemoglobin in the monitored tissue gives a continuous StO 2 . Because the absolute hemoglobin content can also be approximated, total tissue hemoglobin and its changes are expressed as the tissue hemoglobin index (THI). THI is, however, not the total tissue hemoglobin but an approximation of it, based on the signal strength of hemoglobin in the monitored area. Low StO 2 and THI are common findings in hypovolemic shock states.
By occluding the arterial [16] or the venous [29] blood flow to the thenar eminence, NIRS can assess dynamic changes that reflect VO 2 , postischemic reperfusion and hyperemia. This vascular occlusion test (VOT) is of special interest in septic shock [30,31] or during anesthesia [32], where the static variables may not be affected despite a disturbed circulation.
Arterial and venous vascular occlusion is achieved by a pneumatic cuff on the arm inflated to pressures well above the systolic arterial pressure, aiming to induce ischemia in the thenar muscles and changes in StO 2 (Figures 2a,b). A considerable duration of occlusion is required to obtain a reperfusion response that differentiates healthy volunteers from resuscitated septic shock patients [33]. However, the optimal way of performing the VOT is a matter of debate. Both the intensity and the duration of the VOT are matters of controversy; some authors advocate a time-targeted VOT [34,35], whereas others advocate a StO 2 -targeted VOT [33,36]. The argument for a time-targeted VOT is that the maximal ischemic vascular response is reached within a few minutes of occlusion [37] and that long occlusion times could make it impossible to complete the procedure because of subject discomfort [38]. On the other hand, the argument for a StO 2 -targeted VOT, i.e., occluding until StO 2 reaches 40%, is that a standardized level of ischemia is achieved, thereby minimizing the interindividual variation in the degree of ischemia produced by the test.
The rate of desaturation in the thenar muscles (R des ; % per second) after vascular occlusion can be used to estimate VO 2 in the thenar muscles. The product of the absolute value of R des and the mean THI quantifies the amount of desaturated hemoglobin [16]. The latter can be converted to thenar VO 2 using the hemoglobin-oxygen binding constant [39].
In addition, after cuff deflation there is a swift restoration of blood flow that can be described in terms of the derivative of the StO 2 upslope (R res ; % per second). During this reactive hyperemia, StO 2 increases above baseline levels, indicating postischemic vasodilatation and capillary recruitment. The integral of the post-reoxygenation StO 2 curve above baseline quantifies the reactive hyperemia.
The VOT-derived variables add to the robustness of NIRS measurements. In a small study, R des , R res , and reactive hyperemia were all lower in septic shock patients than in healthy controls [31]. Moreover, R res had an inverse relationship to the sequential organ failure assessment (SOFA) score and predicted mortality. Finally, a coefficient of variation of less than 10% has been reported for R res [33]. Venous occlusion is performed by inflating a pneumatic cuff on the arm to above venous pressure [29]. In this setting, NIRS shows an increased THI due to vascular congestion and eventually a decreasing StO 2 . The venous occlusion method can also be used to estimate local VO 2 , although varying reproducibility of VO 2 measurements with venous occlusion has been reported [40].
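To make the VOT-derived parameters tangible, the sketch below estimates R des , R res and the hyperemia area from a sampled StO 2 trace using simple linear fits and numerical integration. The fitting windows, the 10-second upslope interval and the synthetic trace are assumptions chosen for illustration; as discussed under Limitations, published protocols differ in exactly which portions of the occlusion and reperfusion curves they analyse.

```python
import numpy as np

def vot_parameters(t, sto2, t_occl_start, t_occl_end, baseline_window=60.0):
    """Estimate VOT-derived NIRS parameters from a StO2 time series.

    t            : time in seconds
    sto2         : StO2 in % at each time point
    t_occl_start : cuff inflation time (s)
    t_occl_end   : cuff deflation time (s)
    Returns R_des and R_res (% per second, from linear fits) and the
    hyperemia area above baseline after deflation (% x s).
    """
    t = np.asarray(t, dtype=float)
    sto2 = np.asarray(sto2, dtype=float)

    baseline = sto2[(t >= t_occl_start - baseline_window) & (t < t_occl_start)].mean()

    occl = (t >= t_occl_start) & (t <= t_occl_end)
    r_des = np.polyfit(t[occl], sto2[occl], 1)[0]        # downslope (negative)

    reox = (t > t_occl_end) & (t <= t_occl_end + 10.0)   # assumed 10 s window
    r_res = np.polyfit(t[reox], sto2[reox], 1)[0]        # upslope

    post = t > t_occl_end
    excess = np.clip(sto2[post] - baseline, 0.0, None)
    hyperemia_area = np.trapz(excess, t[post])

    return r_des, r_res, hyperemia_area

# Synthetic example trace (purely illustrative, not patient data):
# baseline 82%, cuff inflated at 120 s, deflated at 300 s, overshoot to 90%.
t = np.arange(0.0, 600.0, 1.0)
sto2 = np.full_like(t, 82.0)
during = (t >= 120) & (t <= 300)
sto2[during] = 82.0 - 0.23 * (t[during] - 120)
sto2[t > 300] = np.minimum(82.0 - 0.23 * 180 + 3.5 * (t[t > 300] - 300), 90.0)
sto2[t > 400] = 82.0

r_des, r_res, area = vot_parameters(t, sto2, 120, 300)
print(f"R_des = {r_des:.2f} %/s, R_res = {r_res:.2f} %/s, hyperemia area = {area:.0f} %*s")
```

A thenar VO 2 estimate, as described above, would follow by multiplying the absolute value of R des by the mean THI and a hemoglobin-oxygen binding constant; those conversion factors are device- and study-specific, so they are deliberately not hard-coded in this sketch.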
NIRS during the perioperative period
Although there is an extensive literature on NIRS used before [41], during [42][43][44][45] or after surgery [46,47], relatively little has been published on NIRS th during the perioperative period. However, in a recent publication, R res measured in patients undergoing major abdominal surgery was decreased in fluid-responsive patients [32]. In the same study, fluid responsiveness was detected by invasive methods, such as pulse pressure variation.
Data are conflicting on the ability of NIRS th to detect blood loss, an area of central interest in anesthesia. In awake volunteers, a 500-ml blood loss at blood donation did not lead to changes in NIRS variables [27]. On the other hand, hemodynamically significant hypovolemia, in awake volunteers, did decrease StO 2 and THI [48]. A possible explanation could be that tissue hemoglobin and oxygenation at the thenar eminence are not affected by blood loss within the capacity of the compensatory mechanisms of hypovolemia.
In patients after cardiac surgery, StO 2 and THI did not correlate with global circulatory parameters, but changes in body-finger temperature correlated with changes in StO 2 [49]. However, StO 2 during the perioperative period in cardiac surgery is lower in patients who develop certain postoperative complications [50] than in those who do not [51]. StO 2 at the thenar eminence does not predict mortality in cardiac surgery [51] or surgical-site infections in colon surgery [52].
Intensive care applications
In a general intensive care population with most patients in resuscitated shock, StO 2 and R res appear related to capillary refill time and the central-to-peripheral temperature gradient, but not to the etiology of shock [53]. In a mixed group of patients with increased blood lactate levels observed during 8 hours of resuscitation [54], half had a low StO 2 (<70%) on admission. There was no difference in systemic circulatory variables between patients with low or normal StO 2 , but sequential organ failure assessment (SOFA) and acute physiology and chronic health evaluation (APACHE) II scores were higher in patients with decreased StO 2 . However, blood transfusion that substantially increased blood hemoglobin did not increase StO 2 in a mixed group of stable patients [55], and the correlation between blood hemoglobin and the tissue hemoglobin index may be limited [31,55]. In hypovolemic or septic patients, more homogeneous observations have been described.
NIRS in hypovolemic shock
The rationale for monitoring a peripheral tissue such as the thenar eminence in hypovolemic shock is the centralization of the circulation to vital organs, which leads to decreased blood flow in muscle [56]. In acute hemorrhage, activation of the sympathetic nervous system [57] should decrease thenar muscle blood flow, with increased oxygen extraction and decreased tissue hemoglobin content. In this setting, NIRS th may thus act as a sensor of the vascular response to hypovolemia. In trauma patients with severe shock, StO 2 is lower than in milder grades of shock or in normal individuals [58], although patients with shock can present with StO 2 values similar to those of controls [38].
In a study of severe postpartum hemorrhage, the StO 2 ranges during and after hemorrhage overlapped, but StO 2 increased after control of bleeding [59]. The lowest StO 2 in the trauma bay has been shown to be as good as the lowest systolic blood pressure at identifying severe shock as defined by experienced clinicians [58]. Furthermore, StO 2 within 1 hour of admission is lower in trauma patients who develop multiorgan dysfunction syndrome (MODS) or die, and is a stronger predictor of MODS or death than other diagnostic modalities [60,61]. Low StO 2 within 1 hour of admission was as sensitive as a high base deficit in identifying patients who developed MODS or died, although specificity for both was low [62]. Finally, low StO 2 within 1 hour of admission identifies trauma patients who will require blood transfusion within the next 24 hours [63].
The discriminatory power of dynamic NIRS parameters, however, is of greater interest. R res was lower in trauma patients than in controls, with little overlap [38]. Moreover, a low R res predicted increased troponin I levels in postpartum bleeding [59]. Thus, in hypovolemia, a low static StO 2 predicts adverse outcome, but dynamic NIRS parameters seem even more promising.
NIRS in sepsis
In septic shock, although hypovolemia can be a finding [64], there is a substantial microcirculatory disturbance with closed capillaries, arteriovenous shunting, and decreased flow [65]. As a result of these phenomena, the oxygen content in the vessels of the microcirculation could be normal, making clinical interpretation of NIRS data complex. These pathophysiologic changes may not necessarily be mirrored by a low StO 2 , but rather by a low R res and an impaired postischemic hyperemic response.
Several studies report a difference in StO 2 between healthy subjects and patients with severe sepsis or septic shock [16,30,34,66,67], whereas others do not [31,68,69]. Although variation in population characteristics could partly explain these results, in all studies the StO 2 values overlap between septic shock patients and healthy volunteers. This is not surprising because StO 2 in sepsis can be at the higher end of the normal spectrum or markedly low [65]. Dynamic NIRS parameters, however, improve the power of the method to distinguish pathologic tissue oxygenation from normal. The R des of the thenar muscles is slower in septic shock patients, indicating a lower rate of tissue VO 2 [16,31,68,69]. Furthermore, R des varies with the severity of systemic infections [69]. Similarly, R res appears lower in septic patients compared with healthy controls [16,30,31,34,70] and decreases with increasing disease severity [30,31]. R res ranges overlap minimally [30,68,70] or not at all [34] when comparing healthy controls and septic patients, and R res improves as septic shock resolves [34]. Finally, postischemic hyperemia is decreased in septic patients compared with healthy controls [30,31]. Thus, NIRS th could be a method for bedside assessment of the microcirculation [71].
Monitoring global hemodynamics with NIRS th also has attracted interest. In sepsis, treatment based on venous saturation in the superior vena cava (ScvO 2 ) [64] or the pulmonary artery (SvO 2 ) [72] is used, because these are markers of global DO 2 and VO 2 balance [73]. Noninvasively obtained surrogates of ScvO 2 or SvO 2 would be valuable. In this regard, StO 2 correlates to ScvO 2 [74] and SvO 2 [67] in patients with severe sepsis or septic shock; however, correlation coefficients are relatively low. The accuracy of estimating SvO 2 could be substantially improved by calculating the "NIRS-derived SvO 2 " [67]. In severe sepsis and severe heart failure, StO 2 , however, did not estimate SvO 2 [75]. Still, data suggest that patients with severe sepsis or septic shock and low StO 2 also have low ScvO 2 , suggesting hypodynamic circulation [74].
Variables related to DO 2 may correlate with NIRS parameters. R res correlates with cardiac output and, to a lesser extent, with blood lactate levels in septic patients [34], whereas StO 2 does not correlate with lactate or base deficit [67]. A low StO 2 predicts a very low DO 2 in early sepsis with high sensitivity and specificity [76]; however, a moderately low DO 2 does not correlate with StO 2 . Nor did changes in R res correlate with changes in cardiac output in septic patients. Finally, StO 2 did not correlate with the severity of illness [67], but a StO 2 <78% in resuscitated patients predicted mortality [77]. A low R res correlates with organ failure [31], and R res is lower in nonsurvivors than in survivors [34].
Recently, it has been reported that increasing blood pressure from 65 mmHg to 85 mmHg with noradrenaline infusion in resuscitated sepsis patients normalized R res [78]. These patients, although seemingly resuscitated according to the Surviving Sepsis Campaign guidelines, showed improved thenar perfusion at the higher mean arterial pressure [79]. These data could suggest that NIRS can identify patients who benefit from treatment beyond the traditional goals, supporting the usefulness of NIRS as a bedside tool to optimize tissue oxygenation [71]. Thus, in resuscitated septic patients, dynamic NIRS of the thenar eminence provides information on the microcirculation, and trends could be used to guide treatment.
NIRS in miscellaneous conditions
In patients with chronic heart failure, thenar StO 2 , R des , and R res are low [80]. NIRS th parameters in these patients improved after 6 hours of dobutamine or levosimendan infusion [80] or after 3 months of regular exercise training [81]. Patients with cirrhosis demonstrate a supranormal hyperemic response after the vascular occlusion test [82], which increases with increasing severity of liver disease [82].
NIRS-derived and central hemodynamic parameters
The performance of NIRS th in estimating global circulatory parameters is highly dependent on the coupling between the circulation of the hand and the central circulation. Static NIRS parameters, such as StO 2 , should, in theory, be related to centrally measured circulatory parameters. However, this relationship has not been demonstrated in general in critically ill patients [54] or after cardiac surgery [49]. Although global venous saturations have been described to correlate with StO 2 in sepsis, this relationship is weak [67,74,83]. In sepsis, the correlation between StO 2 and SvO 2 can be improved with correction equations [67]. Only substantial deviations from normal DO 2 levels are detected reliably by StO 2 in sepsis [76]. In these patients, R res correlates with cardiac index and blood lactate levels [34].
Limitations
NIRS th monitors peripheral muscle as a marker of perfusion elsewhere. The theoretical concern is whether the small volume of distal muscle can be a good indicator of the state of the tissue oxygenation in the rest of the body and the vital organs. For example, local factors, such as obstruction to flow by atherosclerosis, an arterial catheter, or thrombosis after previous arterial catheterization, could affect measurement. Although a brief report suggests that catheterization of the radial artery in adult, elective, surgical patients does not affect StO 2 [84], this may not be the case in critically ill patients with circulatory failure.
StO 2 is not the same as blood flow in the tissue or even oxygen supply. StO 2 is affected by local VO 2 , which in turn can be altered by factors that change muscle metabolism, such as muscle relaxants. The balance between the metabolic state of the muscle and that of other vital organs may vary between individuals and within an individual during the course of disease, affecting the global relevance of some of the NIRS variables. Also, the tissues overlying the thenar muscles can influence measurements [85]. Very dark skin with a high melanin content, thick, edematous or injured connective tissue, and low hemoglobin levels could also pose problems for NIRS measurements [86].
There are many studies on NIRS in the perioperative period and in critical illness in which many different sites are monitored. Moreover, the methodology of NIRS is not standardized [87,88]. Several probes are available on the market, different occlusion protocols are used, and different parts of the downslope and upslope of the StO 2 curve during vascular occlusion are used in the calculations. Furthermore, studies are conducted on patients in different phases of disease, which may represent different pathophysiologic situations. Hence, comparing results can be difficult (Table 1).
Static NIRS variables are influenced by the temperature of the hands relative to core temperature [53], which could be one reason for the overlap between patients with normal and abnormal peripheral circulation. In anesthesia with volatile anaesthetics, vasodilatation is a common feature [89], and StO 2 monitoring would be expected to be less affected by peripheral vascular tone in this setting.
The question of whether organ failure in sepsis is mostly dependent on disturbed mitochondrial oxygen metabolism [90][91][92] or on limited DO 2 [93,94] is a matter of debate. NIRS does not assess mitochondrial oxygen metabolism in sepsis, although a decreased tissue VO 2 , reflected by a slower R des , could imply mitochondrial dysfunction. However, a decreased tissue VO 2 could also result from shunting of delivered oxygen [65], which is not measured by NIRS either. Nevertheless, with all of its limitations, NIRS th is a noninvasive method that, provided supporting data become available, could become part of anesthetic and intensive care monitoring in a manner similar to pulse oximetry.
Conclusions
NIRS th estimates StO 2 in peripheral muscles. In hypovolemia, StO 2 is decreased and relates to the severity of disease and to outcome; hence, StO 2 measurements could aid the clinical management of these patients. In unresuscitated or inadequately resuscitated septic shock, R des and R res are decreased, indicating disturbed tissue oxygen metabolism and microvascular reserve. The pathologic findings in these dynamic NIRS parameters could be of value in resuscitated sepsis, where macrocirculatory failure has been corrected, and during anesthesia, to monitor the adequacy of peripheral perfusion and fluid status. Despite its limitations, NIRS th takes monitoring from the global to the local level. The existing literature on NIRS th is mainly focused on validation of the technique. Future studies that implement NIRS th in treatment algorithms in anesthesia and intensive care would be valuable to define the place of this monitoring modality in the daily management of critically ill patients.
"year": 2012,
"sha1": "463968803e30b5509a783dbe0e71f7dfc8230d2b",
"oa_license": "CCBY",
"oa_url": "https://annalsofintensivecare.springeropen.com/track/pdf/10.1186/2110-5820-2-11",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee89fbc192ebe7574184135e1dca37afe8ebefea",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Current and emergent strategies for disinfection of hospital environments
Abstract
A significant number of hospital-acquired infections occur due to inefficient disinfection of hospital surfaces, instruments and rooms. The emergence and widespread dissemination of multiresistant forms of several microorganisms have led to a situation in which few compounds are able to inhibit or kill the infectious agents. Several strategies to disinfect both clinical equipment and the environment are available, often involving the use of antimicrobial chemicals. More recently, gas plasma, antimicrobial surfaces and vapour systems have gained interest as promising alternatives to conventional disinfectants. This review provides updated information on the current and emergent disinfection strategies for clinical environments.
Introduction
The number of hospital-acquired infections (HAIs) has been growing exponentially worldwide since 1980, especially owing to the emergence and widespread dissemination of multidrug-resistant (MDR) bacteria. Multidrug resistance is an intrinsic and inevitable aspect of microbial survival and has been a major problem in the treatment of bacterial infections. 1 -4 The evolution of bacterial resistance is a consequence of the indiscriminate use of antibiotics and of the transmission of resistance within and between individuals. 5 -8 In addition, the lack of new clinically relevant classes of antibiotics constitutes a major threat to public health.
HAIs are among the major causes of death and increased morbidity among hospitalized patients, with a minimum of 175 000 deaths every year in industrialized countries. 9 -12 Several investigations have shown that more than 60% of HAIs worldwide are linked to the attachment of different pathogens to medical implants and devices, such as venous and urinary catheters, arthroprostheses, fracture-fixation devices and heart valves. 13 -18 As a direct consequence, the replacement of implants, which entails significant costs and suffering for patients, often remains the only efficient therapy. 19 Additionally, it has been demonstrated that the increased incidence of HAIs is related to cross-infections from patient to patient or from hospital staff to patient, and to the presence of pathogenic microorganisms that are selected and maintained within the hospital environment (including equipment). 1,11,20 -23 Poor infection control practices may facilitate patient-to-patient transmission of pathogens, for instance when multiple patients are accommodated in the same room. In addition, failure of the immune system due to illness and/or the use of immunosuppressors and other therapeutic drugs can increase a patient's susceptibility to infections. Moreover, the use of antibiotics can inadvertently select antibiotic-resistant microorganisms. 21 Since the environment serves as an important reservoir for infectious organisms, the control of hospital infections is a matter of great concern and a major challenge. The introduction of optimized disinfection products and processes is critical to control and prevent the spread of nosocomial infections, cross-resistance and persister cells. 24 Within recent decades, requirements regarding the antimicrobial activity of disinfectants in the medical field have been defined in various European standards. 25 Also, guidelines developed by the CDC recommend that hospitals thoroughly clean and disinfect environmental and medical equipment surfaces on a regular basis. 26 However, there is a variety of products available on the market with moderate or even insufficient antimicrobial action. 25 New products and technologies with 'permanent' antimicrobial activity, without the risk of generating resistant microorganisms, are needed. 12 Hence, this manuscript provides information on the main pathogens causing HAIs and the relevant in-use and emergent strategies for their control.
Main hospital pathogens
The increase in HAIs is associated with the increased capacity of bacteria to resist and adapt to harsh environmental conditions, including the presence of antimicrobial agents. Deadly pathogens can survive for long periods of time on hospital surfaces, making the environment a continuous reservoir of infectious agents. The adhesion of pathogens to a surface followed by biofilm formation in less than 24 h is a critical microbiological problem for healthcare services. 10 In fact, the concentration of disinfectants required to kill sessile bacteria may be 1000-fold higher than that required to kill planktonic bacteria of the same strain. Thus, antimicrobial therapies fail to kill biofilms most of the time. 27 -30 Furthermore, there are few prevention techniques to control biofilm formation without causing side effects. 31 Some of the most important pathogens involved in HAIs include methicillin-resistant Staphylococcus aureus (MRSA), Clostridium difficile, Pseudomonas aeruginosa, vancomycin-resistant Enterococcus spp. (VRE), Acinetobacter baumannii and some Enterobacteriaceae strains. 1,21,32,33 To a lesser extent, pathogens such as Candida species and viruses [adenoviruses, noroviruses, rotaviruses, influenza, parainfluenza, hepatitis B viruses and severe acute respiratory syndrome (SARS)-associated coronaviruses] can also survive on surfaces and medical equipment, although the evidence for their survival is more limited. 2,32 -34 Most of these pathogens can survive for months on surfaces. 32,35 Some examples of the most persistent hospital pathogens are summarized in Table 1, along with some of their characteristics. Some investigations have proposed that Gram-negative bacteria persist longer than Gram-positive bacteria and, although it has been suggested that the type of surface does not influence the period of persistence, it has also been shown that longer persistence may occur on plastic or even on steel. 32,36 In terms of environmental conditions, lower temperatures (4-6°C) and high humidity (>70%) improve the persistence of several bacteria, fungi and viruses. 32 Moreover, the frequency of contamination has been shown to vary depending on the body sites at which patients are colonized or infected. It was demonstrated that 36% of surfaces sampled in the rooms of patients with MRSA in a wound or urine were contaminated, compared with 6% of surfaces in the rooms of patients infected with MRSA at other body sites. 37
Influence of clinical environment on HAI propagation
Pathogens can spread from patient to patient through contact with inanimate surfaces, including medical equipment and the immediate patient environment. 38 There is clinical evidence suggesting an association between poor environmental hygiene and the transmission of microorganisms causing HAIs. 39 Cheng et al. 40 found a strong correlation between environmental contamination by MRSA and hospital infection rates. Drees et al. 41 demonstrated an increase in VRE infection risk for an occupant of a room where a patient with this infection was previously treated. It was also demonstrated that nosocomial transmission of norovirus, C. difficile and Acinetobacter spp. was correlated with contaminated environmental surfaces. 15 In another study, a positive and measurable effect on the clinical environment was demonstrated with the introduction of one extra cleaner, which apparently protected the patients against MRSA infection. 42 The potential for contaminated environmental surfaces to contribute to pathogen transmission depends on two important factors: the pathogens must survive on dry surfaces and the contamination has to occur on surfaces commonly touched by patients and healthcare staff at a sufficiently high level to enable transmission to patients. 35 Moreover, pathogen transmission will also depend on the infectious dose and route of transmission, along with host susceptibility.
Shared clinical equipment that comes into contact with intact skin, despite being unlikely to introduce infection, can also promote the transfer of microorganisms between patients. 43 The most frequently contaminated surfaces are floors, doorknobs, television remote control devices, bed-frame lockers, mattresses, bedside tables and toilet seats in rooms previously occupied by an infected patient. 1,12,35,44 Wilcox et al. 45 found that 50% of commodes, toilet floors and bed frames sampled at a hospital were contaminated with C. difficile. Medical devices, including stethoscopes and otoscopes, are highly prone to contamination with bacteria and have been implicated as potential vectors of cross-transmission. 46 Moreover, bacteria have been found on various plastic items in the hospital, including pagers and cell phones. 47 Cotterill et al. 48 provided suggestive evidence that contaminated ventilation grills were sources of MRSA outbreaks in hospitals. Additionally, an estimated 20%-40% of HAIs have been attributed to cross-infection via the hands of healthcare personnel, who become contaminated from direct contact with the patient or indirectly by touching contaminated environmental surfaces. 15,38 In fact, inadequate hand hygiene is a major contributing factor to the current infection threats to hospital inpatients. 49 Barker et al. 50 showed that norovirus is consistently transferred via the fingers to melamine surfaces and from there to other typical hand-contact surfaces, such as taps, door handles and telephone receivers. Pessoa-Silva et al. 51 demonstrated that hands become increasingly contaminated with commensal flora and potential pathogens during neonatal care and that gloves do not fully protect workers' hands from contamination. Pittet et al. 52 concluded that bacterial contamination increased linearly with time on ungloved hands during patient care. This demonstrates the importance of decontaminating hands before every patient contact. Fendler et al. 53 concluded that the use of an alcohol gel hand sanitizer decreased infection rates during a 34 month period and can provide an additional tool for an effective infection control programme. The same conclusion was reached by Hilburn et al. 54 The gloves of medical staff are also easily contaminated through direct contact with an infected patient or, indirectly, by touching contaminated surfaces, which serve as carriers for pathogenic microorganisms. 11 In a study focused on MRSA infection, 42% of personnel gloves that contacted the furniture/surfaces of a patient room, but had no direct contact with infected patients, were contaminated. 37,44 More significantly, it was found that 65% of the nursing staff who had directly treated an infected individual contaminated their gowns/uniforms with the organism. 55 The white coats, shirts and ties of doctors have also been found to harbour potentially pathogenic flora. 56
Disinfectant selection
Maintenance of a good hospital environment requires the implementation of adequate strategies. Such strategies are described in guidelines proposed by several committees, particularly the Healthcare Infection Control Practices Advisory Committee. 57 For instance, in the case of surfaces with blood contamination, a disinfectant with activity against tuberculosis and hepatitis B virus (HBV)/HIV or a 5.25% bleach solution, at a final dilution of 1 : 10, can be used. 11 These documents describe procedures to be implemented in healthcare facilities in order to achieve efficient cleaning and disinfection and also review the main uses of
disinfectants as well as their mechanism of action and activity. Rutala and Weber 58 developed a set of guidelines for hospital environment cleaning.
Cleaning refers to the removal of foreign material from a surface or piece of equipment, allowing some organic material and microorganisms to be removed by detergents. 11,33 However, this process does not kill bacteria, which, under favourable conditions, can redeposit elsewhere and form biofilms. 31 Consequently, cleaning must always precede disinfection and sterilization in order to eliminate infectious microorganisms. 59 A significant proportion of microorganisms is destroyed by the disinfection process, which involves the use of chemical agents such as quaternary ammonium compounds, aldehydes, alcohols and halogens, or radiation and heat. 11,31,33,60 The control of hospital infection must involve both disinfection and sterilization processes and sometimes the use of aerosols to clean the air. 11,33
A biocide can target different locations on a cell as it may interact with the surface, the bacterial cell wall and the outer membrane, or it may penetrate the cell, where it can cause reversible or irreversible changes by interacting with nucleic acids, inhibiting enzymes and cell growth. 24,60 When choosing a disinfectant, several factors must be taken into account, such as its efficiency, compliance with regulations, user acceptability, instrument compatibility, the types of surfaces and medical equipment, and the pathogenicity, infection rates and persistence of the microorganisms. 61 A disinfectant must be safe, easy to use and effective against a wide range of pathogenic microorganisms and should not leave toxic residues. 31,61 The efficiency of the disinfectant depends on several factors, mainly surface characteristics (hydrophobicity, charge and roughness), the amount of organic and inorganic matter, temperature, pH, the chemical structure of the biocidal agent and the type of infection and pathogen. 11,24,31 The mode of action of the disinfectant and the route of entry into the cell (porin channels in the case of hydrophilic disinfectants and the hydrophobic path for hydrophobic disinfectants) also play significant roles. 24 The risk of infection of the room/surface/ equipment must also be considered in the choice of the disinfectant, as well as its concentration and exposure time. 24,62 Hospital areas should be defined according to the risk of infection in order to establish and promote proper cleaning/disinfection. Areas where contamination is expected, e.g. laboratories, operating theatres, ambulatory surgical units, labour and delivery rooms, areas with blood or body fluid spills, and neonatal and burn units, must be cleaned and disinfected frequently (often several times per day). 1 Areas of low risk, such as administration and waiting rooms, only require a daily cleaning.
According to its efficiency and ability to kill bacterial spores, an antimicrobial product can belong to one of four distinct groups: sterilants or high-, intermediate-and low-level disinfectants (Table 2). Given the increased resistance to antimicrobial agents displayed by bacteria upon biofilm formation, this hierarchy is only a rough guide. 60,63,64 A disinfectant is almost never 100% effective due to the resistance of some bacteria to specific compounds and due to inefficient cleaning protocols. 1 Once a disinfectant is removed, the surviving bacterial population can potentially regrow. Moreover, viable spores still attached to various materials can remain undetected by current sporicidal tests, resulting in overestimation of the sporicidal activity of sterilizing agents. 65 The efficacy of diverse chemical disinfectants in inhibiting and killing some of the most clinically relevant bacteria, pathogenic fungi and yeasts has been evaluated by several authors. The activity of a disinfectant is generally analysed in terms of its MIC and MBC. However, the most suitable measure is the log 10 reduction of the number of cfu. The time taken to obtain a 5 log 10 reduction is also a reference to assess disinfection efficacy. 61
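Because efficacy in this literature is usually reported as a log 10 reduction in cfu rather than as an MIC or MBC, the short sketch below shows how that figure is obtained from viable counts before and after exposure, and what a 5 log 10 target means in terms of the surviving fraction. The counts and detection limit are made-up example values, not data from any of the cited studies.

```python
import math

def log10_reduction(cfu_initial: float, cfu_after: float, detection_limit: float = 1.0) -> float:
    """Log10 reduction in viable counts after disinfectant exposure.

    If no colonies are recovered, the detection limit is substituted, so the
    result should then be reported as a 'greater than or equal to' reduction.
    """
    survivors = max(cfu_after, detection_limit)
    return math.log10(cfu_initial / survivors)

# Example with made-up counts: 2.5 x 10^7 cfu before, 180 cfu after exposure.
reduction = log10_reduction(2.5e7, 180)
print(f"{reduction:.2f} log10 reduction")        # about 5.1, i.e. meets a 5 log10 target

# A 5 log10 reduction corresponds to killing 99.999% of the inoculum:
print(f"surviving fraction at 5 log10: {10 ** -5:.6f}")
```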
Traditional disinfection strategies
The use of biocides has evolved over time. Alcohols such as ethanol have a long history of antiseptic use; around the 19th and 20th centuries phenolics and hypochlorites started to be employed and, later, quaternary ammonium compounds. 66 More recently, the most common products have been chlorhexidine and silver salts, peroxygens, glutaraldehyde and ortho-phthalaldehyde. 66,67 Alcohol disinfectants cause protein denaturation and are effective against vegetative bacteria, fungi and viruses, but have no effect on spores. 61,68 Chlorine-releasing agents can oxidize membrane proteins and are very effective in removing biofilms from surfaces, requiring short exposure times for growth inhibition. 57,61,68 However, these chemical agents are corrosive to metals and can be inactivated by the presence of organic matter. 57,61,69 Moreover, in the last few years the use of chlorine has been associated with the formation of carcinogenic compounds and some pathogens have been shown to be resistant to chlorine. 70 The aldehyde-based disinfectants disrupt proteins and nucleic acids by alkylation and have antimicrobial activity against spores, bacteria, viruses and fungi. 61,64 Quaternary ammonium compounds and phenols solubilize the membrane and the cell wall. 68 Hydrogen peroxide and peracetic acid promote protein denaturation, and are active against several groups of microorganisms and pathogens implicated in nosocomial infections. 38,57,61,64 The mechanism of action of different disinfectant categories has been presented elsewhere in greater detail. 64,71,72 Countless studies have been performed regarding the antimicrobial action and efficacy of different disinfectants. Rutala and Weber 69 reviewed the use of inorganic hypochlorite (bleach) in healthcare facilities for disinfection of medical devices and environmental surfaces, and concluded that the many advantages of chlorine (e.g. fast microbiocidal activity, cost-effectiveness and good track record) are likely to support its continued use in healthcare settings. Griffiths et al. 73 evaluated the efficacy of several disinfectants (sodium dichloroisocyanurate, chlorine dioxide, 70% industrial methylated spirits, 2% alkaline glutaraldehyde, 10% succinedialdehyde and formaldehyde mixture, 0.35% peracetic acid and a peroxygen compound at 1% and 3%) against different strains of mycobacteria and showed that disinfectants based on sodium dichloroisocyanurate were more effective. Moreover, they concluded that clinical strains were more resistant to biocides than laboratory type strains. 73 Other studies have shown that chlorhexidine at 0.5% concentration is the best choice, among several antiseptics and surface disinfectants (including betadine, hydrogen peroxide, sodium hypochlorite, alcohol and ultraviolet radiation), to kill clinical yeast isolates, either in planktonic cultures or in biofilm. 74 Oie et al. 75 analysed the effects of four different chemical treatments (0.2% alkyldiaminoethyl glycine, 0.01% or 0.1% sodium hypochlorite and 80% ethyl alcohol) on the disinfection of porous and smooth surfaces contaminated by S. aureus in a university hospital. The results demonstrated that the disinfection of porous surfaces was more difficult and none of the disinfectants was effective, highlighting the need for more frequent disinfection and the use of high-level disinfectants on these surfaces. 75 More recently, Speight et al. 76 evaluated the effect of 32 disinfectants on spores of C. 
difficile by a suspension test; only eight products gave a >3 log 10 reduction in viability within 1 min (chosen as a more realistic simulation of probable real-life exposures) under dirty conditions (3% BSA). These results underscore the importance of carefully selecting the disinfectant to eliminate spores of this particular microorganism. 76 Kim et al. 77 concluded that not all biocides used in hospitals can kill this microorganism; moreover, the efficacy of disinfectants was higher against planktonic cells (reduction to undetectable levels). 77 Bridier et al. 78 studied the effects of three common disinfectants (peracetic acid, benzalkonium chloride and ortho-phthalaldehyde) on 77 bacterial strains and found that mycobacteria demonstrated a marked resistance to all the biocides. Benzalkonium chloride was inefficient even at very high concentrations. Also, resistance was strain dependent within the same species. 78 Gutiérrez-Martín et al. 79 analysed the activity of 16 active compounds and 11 commercial disinfectants against Campylobacter jejuni by performing a suspension test in the presence and absence of serum. High levels of reduction (>6 log 10 ) were obtained for some disinfectants [chloramine-T, povidone iodine (1% available iodine), cetylpyridinium chloride, ethanol, isopropanol, chlorhexidine digluconate, formaldehyde, phenol and 10 of the 11 commercial formulations, especially those based on quaternary ammonium compounds], regardless of the presence or absence of organic material. 79 Table 3 compiles some of the available information regarding hospital disinfectants.
Disinfection methods employed in many intensive therapy units and other healthcare facilities include the use of antimicrobial wipes. Such products might be efficient in removing a microbial bioburden from a surface. 80 The use of alcohol wipes was also demonstrated to generally decrease the mean daily bacterial load on toilets where wipes were made available. 81 However, in most cases, antibacterial wipes used in hospitals were found to spread germs rather than eradicate them. Wipes can act as sources of cross-contamination when they are used on surfaces next to patients or on those commonly touched by staff and patients (e.g. tables, keypads). 82 Moreover, many hospital personnel use a single wipe several times to clean and disinfect multiple surfaces before discarding it. Instead, they should use a wipe on a single discrete surface that requires only low-level disinfection.
A description of methods for hospital equipment disinfection with an analysis of the exposure time and the type of disinfectant and a summary of advantages and disadvantages of some chemical sterilants can be found elsewhere. 11,57 Nevertheless, it has to be taken into account that some of these studies were performed with culture collection strains, whose responses to the presence of these disinfectants may differ from those of clinical isolates. 60 Furthermore, some studies evaluated only a single strain of the species. Therefore, the results obtained may not always be representative of what occurs in clinical practice. Most of the presented investigations were performed on a laboratory scale, which may not truly reflect the complexity of a hospital scenario.
Alternatives to traditional disinfection
The need for appropriate disinfection procedures is enhanced by the multitude of outbreaks that have resulted from improperly decontaminated patient-care items. 83 The disinfection processes previously described are executed by the application of chemical agents in solution. However, this kind of disinfection has some disadvantages: the application of disinfectants requires an exposure time of at least 5-10 min; the chemicals might react with acids; they might lose their activity in contact with organic substances; and some of them can cause skin, eye and respiratory tract irritation. 23 In this context, new disinfection strategies must be developed and their efficiencies must be evaluated in terms of their potential applications in hospital settings. This need for novel control methods has been emphasized by the increased resistance of bacterial species to some disinfectants, mainly as a consequence of biofilm formation. 31

One of these alternative strategies is steam vapour disinfection, which has been evaluated by different groups. Tanner 84 demonstrated a reduction of 7 log10 in MRSA, VRE and P. aeruginosa (to undetectable values) within 5 s of application of a steam vapour system. Sexton et al. 23 applied a steam vapour system to combat MRSA, total coliform bacteria and C. difficile cells, attaining a 90% reduction in bacterial levels.

Hydrogen peroxide vapour (HPV) is also used for decontamination of clinical surfaces and equipment. Otter and French 85 found that vegetative bacteria and spores (6-7 log10 cfu) survived on surfaces for >5 weeks, but were inactivated within 90 min of exposure to HPV in a 100 m³ test room. Initial inocula of M. tuberculosis (~3 log10) and Geobacillus stearothermophilus (6 log10) were exposed to HPV at 10 locations during room experiments and both microorganisms were inactivated in all locations within 90 min of HPV exposure. 86 Falagas et al. 38 reviewed other studies of disinfection with HPV against MRSA, C. difficile and other pathogens in several sampled hospital locations (including surgical wards, ward side rooms, single isolation rooms, multiple-bed ward bays and bathrooms); complete or almost complete disinfection of the sampled hospital sites was achieved with airborne hydrogen peroxide. HPV appears to have low toxicity and has good compatibility with most inanimate materials. 87

UV light exposure has also been applied for room decontamination. It has been used in air-handling systems and upper-room air-purifying systems to destroy microorganisms and can also inactivate microorganisms on surfaces. 88 UV-C was demonstrated to be effective in eliminating vegetative bacteria on contaminated surfaces (both in the line of sight and behind objects) within 15 min and in eliminating C. difficile spores within 50 min. 89 In another study, a mobile UV-C light unit significantly reduced aerobic colony counts and spores of C. difficile on contaminated surfaces in patient rooms. 88 It was also demonstrated that ceiling-mounted UV germicidal irradiation lamps were effective in reducing the viability of both Bacillus cereus and Bacillus anthracis vegetative cells and spores after a minimum exposure time of 1 h at an intensity as low as 8 mW/cm². 90 Both the HPV and UV light methods have demonstrated good results. However, they require the removal of patients and healthcare personnel from the room, have a high acquisition cost and increase room turnover time. 91 Moreover, they do not replace standard cleaning and disinfection. 92
Bacterial adhesion to stainless steel surfaces (the most commonly used material in industry and hospitals) is one of the major reasons for cross-contamination in many scenarios. 93 Thus, more recently, different strategies for the production of antimicrobial surfaces with the purpose of reducing HAI have been extensively investigated and developed. Researchers have focused on the development of surfaces with antimicrobial coatings in order to inhibit biofilm formation by either killing the bacteria or preventing their adhesion. Bacterial growth control can then occur by three different modes of action: biocide leaching, which involves the release of a cytotoxic compound to kill attached microorganisms; adhesion prevention, which uses a super-hydrophobic covering in order to prevent microbial adhesion; and contact killing, which consists of disruption of cell membranes in contact with the surface. 13 Many of these techniques consist of surface modification by introduction of antimicrobial substances such as antibiotics, metals (such as copper and silver) and antiseptics. 93-98 These surfaces can be obtained by different methods, including adsorption, ligand-receptor pairing and covalent binding. 94 Surfaces that release antimicrobial products can work quite efficiently, although they eventually become exhausted and the diffusible antimicrobials pose the potential problem of inducing microbial resistance, since the surfaces are continually releasing active compounds to the environment for a long period of time. 55,99 Alternatively, the design of surfaces that kill microbes on contact and do not release biocides may solve this problem. The covering of surfaces with transition metals, such as copper and silver as mentioned above, is a well-established strategy. Moreover, surfaces that catalytically produce biocides using externally applied chemical, electrical or optical energy are an interesting alternative. 99

An example of biocide leaching is the use of Surfacine, which incorporates an antimicrobial compound (silver iodide) in a surface-immobilized coating (a modified polyhexamethylenebiguanide). 83 This compound interacts with the microorganism by electrostatic attraction, penetrates the cell and finally kills it. 83 Surfaces treated with silver iodide resulted in excellent elimination of VRE at challenge levels of 100 cfu/in² for at least 13 days. 11 Furthermore, these surfaces retained the antimicrobial effect even after cleaning. 11 Photocatalytic oxidation on surfaces coated with titanium dioxide (TiO2) is also a possible alternative. In the presence of water and oxygen, highly reactive hydroxyl (OH) radicals generated by TiO2 and mild UV-A are able to destroy bacteria, thus reducing bacterial contamination. 100 Another example of a product that releases an organic antimicrobial is Microban®, which contains triclosan [5-chloro-2-(2,4-dichlorophenoxy)-phenol] as the antimicrobial agent, making the surface resistant to the growth of microorganisms. 55 This antimicrobial technology can be found in hundreds of consumer, industrial and medical products around the world.
Covalent immobilization of bioactive compounds onto functionalized polymer surfaces has also seen rapid growth in the past decade in such industries as biomedical, textiles, microelectronics, bioprocessing and food packaging. 101 Table 4 presents some examples of the application of these surfaces.
Furthermore, other kinds of surface are stimulating intensive research and some of them are already used, while others are promising candidates for practical application. 102-104 Examples include materials for medical implants (such as catheters), devices and instruments that are in contact with patients, surgical gowns and other protective clothing (such as surgical masks, caps) and polymeric coatings on surfaces such as shower walls. 102,103 Antimicrobial polymers provide a very suitable strategy for achieving this objective since they can be applied to diverse objects. Polyelectrolyte multilayers (PEMs) have also been investigated extensively as biomaterials and biomaterial interfaces and also for bacterial contamination prevention. 104

Gas plasma is another promising alternative to sterilization that can be applied in healthcare services, although it is mainly targeted to equipment rather than to surfaces. Plasma consists of a mixture of photons, electrons, ions, atoms and radicals (such as atomic oxygen, ozone, nitrogen oxides, hydroxyl and superoxide). As a result of air plasma discharges, the gas enters an ionized state (by energetic transfer) and exhibits antimicrobial properties. 9,20,105,106 Two types of plasma can be defined according to the conditions under which the plasma is formed: thermal and non-thermal plasmas. Compared with non-thermal plasmas, thermal plasmas require higher pressure and are characterized by higher temperatures and a local thermodynamic equilibrium. 105,106 These systems have many advantages over more conventional disinfection techniques as they enable the disinfection of the interior of some types of equipment and materials, such as needles, at low cost and with easy handling. 107,108 Furthermore, gas plasma does not require chemical products and it is not toxic to the skin. 20 Shimizu et al. 108 performed treatments with plasma using a surface micro-discharge device, under different temperature and humidity conditions. The antimicrobial effect was tested on Escherichia coli and Enterococcus mundtii with a reduction of 5 log10 after 15 and 30 s of plasma treatment for E. mundtii and E. coli, respectively. 108 Other researchers have developed a large-scale plasma dispenser and evaluated its effect on E. coli and Candida albicans in agar plates in the presence and absence of textiles. 107 Their results demonstrated that the system was not affected by the presence of the textile, and in both cases a 15 s treatment caused a 5 log10 reduction and, after treatment for 5 s, the fungi were reduced by 4 log10. Joshi et al. 10 studied the biocidal efficacy of non-thermal dielectric-barrier discharge plasma against E. coli, S. aureus and MRSA in biofilms and planktonic cells. The planktonic cells were completely eliminated after 120 s of treatment, whereas the MRSA growing in biofilms were killed by >60% within 15 s, suggesting that the effectiveness of a plasma system is highly dependent on exposure time and cell density. 10 Recent investigations evaluated the effect of non-thermal gas plasma on biofilms of Staphylococcus epidermidis and MRSA on glass surfaces; log10 reductions of 4 and 4.5 were observed after 1 h of exposure, respectively, and greater reductions could be attained with more prolonged exposure times. 9 Burts et al. 20 tested an atmospheric non-thermal plasma as a disinfectant for hospital pagers by analysis of MRSA reduction, and found complete disinfection within 30 s.
Different cell densities and exposure times were also evaluated and a 4-5 log10 reduction was obtained within 10 min. More information regarding this disinfection strategy has been reviewed by Moisan et al. 105,106 These novel disinfection techniques are important means of reducing the high numbers of nosocomial infections and are more efficient than conventional disinfection methods.
Conclusions
There is great concern about the growth and prevalence of HAI due to the increased incidence of resistant bacteria. Furthermore, the development of new antibiotics is a difficult task because of high research costs and regulatory issues. Conventional cleaning methods for the eradication of hospital environmental contamination seem to be inefficient. This manuscript reviews several new disinfection alternatives as attempts to overcome these problems. Most of the data currently available have been generated by the manufacturers and need to be validated by independent investigations. Moreover, studies concerning bacterial biology and physiology allied to genomics and computer analysis should be applied to identify and understand the pathogenesis associated with resistant bacteria and crucial targets for novel biocides. Thus, further evaluation and implementation of new measures and new disinfection methods are necessary, not forgetting their validation in terms of effectiveness, safety and disposal. Additionally, it is important always to evaluate the risk of emerging phenotypic resistance when developing new disinfection strategies.
Transparency declarations
None to declare. | 2017-04-16T12:34:00.685Z | 2013-07-18T00:00:00.000 | {
"year": 2013,
"sha1": "3177a3605edcbb77e2777991d416a8b4ca8e30ca",
"oa_license": null,
"oa_url": "https://academic.oup.com/jac/article-pdf/68/12/2718/2055527/dkt281.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f36623dae5fac4d5c4a7ffbc3131b77ae731c3b2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239613756 | pes2o/s2orc | v3-fos-license | EDITORIAL : Geographical insights of the corporate governance research
Abdlmutaleb Boshanna conducts a systematic review and provides a comprehensive up-to-date review of the literature about diversity on corporate boards. Unlike previous studies, the author did not restrict the search to a specific type of diversity or limited firm outcomes. The aim was to review, evaluate, synthesize, and summarize the literature on five key areas: 1) the theoretical approach (going beyond the theoretical analysis of each article by exploring how the theoretical perspective informs their focus); 2) dominant framing and theorizing; 3) determinants and consequences; 4) how board diversity is defined and operationalized; and 5) the outcomes of board diversity. This paper contributes to the previous research papers by EmadEldeen, Elbayoumi, Basuony, and Mohamed (2021), Jonty and Mokoaleli-Mokoteli (2015), Giovinco (2014), Shehata (2013), Santen and Donker (2009).
Patrick Ulrich and Robert Rieg declare that in family businesses, which per se are less likely to offer variable compensation to their executives, it is assumed that internal rather than external metrics are more likely to be used as the basis for compensation. This paper tests this thesis on the basis of an empirical survey of 113 German companies. The empirical study shows clear differences in the use of internal and external metrics as a basis for executive compensation, a fact that has so far not been addressed in other, previous empirical studies, including Lemennicier, Hermet, and Palanigounder (2019), Beavers (2018), Iskandrani, Yaseen, and Al-Amarneh (2018), Alshimmiri (2004).
Sunny Oswal and Kushagra Goel study the concept of equity returns and see whether there is a significant difference between the expected return, which is calculated through the capital asset pricing model (CAPM), and the actual return given by the stock. For this study, the 10 stocks with maximum market capitalization are taken for each of the 12 countries considered, subdivided into developed and developing countries. The hypothesis is whether the actual stock returns are significantly different from the expected stock returns; to test this, a paired t-test has been deployed on 120 stocks to check the significance.
Thien Le examines the relation between firm pair's sharing of a top institutional investor (i.e., an institutional investor with the largest shareholding) and accounting comparability. Using data from Compustat, CRSP, and Thompson Reuters over the 1993-2017 period, the study finds that firm pairs that share the top institutional investor exhibit higher accounting comparability than other firm pairs. Also, firm pairs whose top institutional investors are monitoring institutions (regardless of whether they are the same institutions) exhibit greater comparability than other firm pairs whose top institutional investors are non-monitoring institutions.
Isha Gupta, T. V. Raman, and Naliniprava Tripathy examine the impact of related/unrelated merger and acquisition (M&A) on value creation and research and development (R&D) of Indian non-financial sector companies. This study focuses on whether related M&A outperforms unrelated M&A in the context of value creation and R&D. The sample of the study includes 64 companies to evaluate the significance of relatedness and unrelatedness between target and acquiring companies of the Indian non-financial sector. The findings of the study affirm that related M&A outperform unrelated M&A.
Um-E-Roman Fayyaz, Raja Nabeel-Ud-Din Jalal, Gianluca Antonucci, and Michelina Venditti intend to investigate the impact of chief executive officers' (CEO) powers on corporate decisions made by firms in the context of board oversight (BO) and market competition (MC). From 2007 to 2017, the authors applied a quantitative approach to a sample of two stressed European markets (i.e., Hungary and Greece). The authors found that CEO power has a negative impact on corporate risk and firm performance. Furthermore, results also reveal no sign of moderation effect for MC with corporate decisions, whereas BO moderated the CEO power and corporate decisions in the Hungarian market. This study provides a contribution to the previous papers by Daradkah (2021), Wukich (2020), Saerang, Tulung, and Ogi (2018).
Massimo Cecchi, instead, examines a larger sample of approximately 15,000 Italian limited companies, which includes, in particular, unlisted companies. No statistically significant correlations between performance and gender emerge. Therefore, if women have to "be better" to be treated "equally", we can conclude that women do not seem to perform better than their male counterparts. However, women are not found to perform worse, either. This is an excellent contribution to the research by Derbali, Jamel, Lamouchi, Elnagar, and Ltaifa (2020), Velte (2017), Iren (2016), Ahmad and Alshbiel (2016).
Emiliano Di Carlo outlines the elements required to assess the extent of the risk of conflict of interest in organizations. The research framework considers the following two elements: a) the probability that the secondary interest may interfere, even if only apparently, with the primary interest of the organization; b) the seriousness of the damage and/or moral unacceptability of the mere appearance of improper behavior. The assessment also allows an understanding not only of the causes that can increase the probability of interference of the secondary interests, but also of the factors that feed these interests, suggesting the most suitable remedies.
Giacomo Bider and Gimede Gigante investigate whether corporate venture capital (CVC) activity, measured as the number of investments, deal size, and the number of realized exits, is beneficial for value creation and innovation for European listed companies. Evidence is found that CVC activity creates firm value in the period under consideration (2008-2019), confirming past North American evidence. Exits convey a negative effect on firm value, suggesting that CVC performance may not be satisfactory enough. When considering innovation, evidence is presented that investing in rounds with a higher deal size positively affects the investor's patenting levels, indicating that the later the start-up's stage in its life cycle, the higher the possibility for the CVC investor to effectively absorb its technology.
Catherine E. Batt, Páll Rikhardsson, and Thorlakur Karlsson explore how sudden changes in organizational context impact the importance of budgeting. This study is based on a survey of CFOs of the 300 largest companies in Iceland, according to the dataset Frjáls Verslun, following the financial crisis of 2008. The results show widespread use of budgeting, regardless of the size of the organization. Also, uncertainty and organizational complexity do not impact the perceived importance of budgeting.
Alessandro Migliavacca, Christian Rainero, and Vera Palea address equity investment valuation through market multiples and its consequences in investors' financial statements under fair value accounting principles. The authors analyze the distribution of the estimated-to-actual fair value ratio under the IFRS 13 perspective and the effects of a randomly selected portfolio on the balance sheet and income statement of the investor. The study's primary findings are that the market multiples tend to produce consistent results in 7 (at least) to 20 (at best) out of 100 cases, and over or underestimate the fair value in all the remaining cases without any apparent or predictable reason.
Angelo O. Burdeos extends past studies by examining the effect of ownership structure on discretionary current accruals. The study determines the level of income-increasing earnings management of initial public offerings (IPOs) in the Philippines and the factors that explain it. Particularly, the paper examines the effect of ownership concentration and largest shareholder ownership on discretionary current accruals. The study finds -4.19% discretionary current accrual on average.
Arash Faizabad, Mohammad Refakar, and Claudia Champagne investigate the effectiveness of corporate, social, and political connections on corporate governance practices. The findings of this research show that networking activities in various forms positively and negatively affect corporate governance practices. As for corporate connections, there is no consensus on the relationship between interlocked boards and firm performance. As for social connections, the evidence provides contradictory results regarding the effects of social ties on CEO compensation and firm performance.
Shab Hundal and Tatyana Kauppinen explore the following research objectives. First, the motivation of internationalization of family firms (FFs) in Russia; second, their process of internationalization, and third, the problems and challenges faced by the FFs. Different theoretical perspectives have been discussed to problematize and analyze the research objectives of the study. The current qualitative study is based on the semi-structured interview method. As many as ten FF entrepreneurs, representing five different industries, have been analyzed.
The findings show that there is neither clarity nor unanimity regarding the very meaning and understanding of FFs in Russia.
Badar Alshabibi, Shanmuga Pria, and Khaled Hussainey investigate whether corporate board characteristics influence dividend policy in Omani listed firms. It was found that dividends payout is positively associated with board independence, board activity, and board nationality diversity. However, no evidence is found that board size and gender diversity have an impact on dividends payout. When controlling for the global oil crisis, none of the corporate board attributes influence dividends payout.
Mehadi Mamun attempts to shed light on workers who are very vulnerable and examines the impact of privatisation on workers' quality of working life. Employing document analysis and semi-structured face-to-face interviews with privatised and state-owned organisations' workers in Bangladesh, this study finds that workers' compensation, job security, access to trade unions, and leave entitlements in most privatised case study organisations are less than their counterparts in comparable state-owned organisations.
The papers published in this issue of the journal are very interesting and useful sources of the literature for both experienced and younger researchers. | 2021-10-21T15:19:54.946Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "936e0503762c74f15105eb43a8c8a15c7897b2aa",
"oa_license": null,
"oa_url": "https://doi.org/10.22495/cocv18i4editorial",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "12e6b4f6ec193e0457f0dc9fa7a797683b7cc285",
"s2fieldsofstudy": [
"Geography",
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
232381570 | pes2o/s2orc | v3-fos-license | Classification of Indoor Human Fall Events Using Deep Learning
Human fall identification can play a significant role in generating sensor-based alarm systems, assisting physical therapists not only to reduce after-fall effects but also to save human lives. Usually, elderly people suffer from various kinds of diseases, and falls are a very frequently occurring circumstance for them. In this regard, this paper presents an architecture to classify fall events from other indoor natural activities of human beings. A video frame generator is applied to extract frames from video clips. Initially, a two-dimensional convolutional neural network (2DCNN) model is proposed to extract features from video frames. Afterward, a gated recurrent unit (GRU) network finds the temporal dependency of human movement. A binary cross-entropy loss function is calculated to update the attributes of the network, such as weights and learning rate, to minimize the losses. Finally, a sigmoid classifier is used for binary classification to detect human fall events. Experimental results show that the proposed model obtains an accuracy of 99%, which outperforms other state-of-the-art models.
Introduction
Human fall detection systems are an important segment of assistive technology since living assistance is very much obligatory for many people. There has been a remarkable growth in the elderly population in Bangladesh as well as in western countries over recent years. Statistics on human falls have shown that falls play a key role in injurious deaths for elders more than 79 years of age [1]. According to a review from the National Institutes of Health of the United States, approximately 1.6 million aged people sustain fall-related injuries in the U.S. every year [2]. Meanwhile, China is facing the fastest aging population in human history; the elderly population will rise to about 35% by 2050 [2] from 2020. A study shows that about 93% of elders live in private houses, among which 29% live alone [3]. It was verified that about 50% of aged people who lie on the floor for more than one hour because of a fall will die within six months, even if they do not have any direct injuries [4].
Another statistic provided by the Public Health Agency of Canada [5] reports notable data. By 2026, one out of five Canadians will be older than 65, whereas in 2001 the proportion was one out of eight. It is notable that 93% of elderly people stay in their private houses, among which 29% of them lead a lonely life [5]. Again, almost 62% of injury-related hospitalizations for elders are the result of falls [6].
To identify falls at the right time, various methods have been proposed in recent years using advanced devices like wearable sensors, accelerometers, gyroscopes, magnetometers and so on. However, this is not an effective solution since it is impractical to wear a device for a long time [7]. Therefore, it is indispensable to initiate a pervasive surveillance system for senior people, which can immediately and automatically detect fall actions inside the room and notify the state to the caretakers. This is only possible when a sensor-based alarm generation system is placed in the living room.
In this paper, a deep learning and vision-based framework is proposed for human fall detection and classification that can also monitor old people in an indoor environment, incorporating a convolutional neural network (CNN) with a recurrent neural network (RNN). Among different types of RNN, a gated recurrent unit is integrated along with the CNN. We have also explored some transfer learning models like VGG16 and VGG19 followed by GRU to classify human fall action. Besides, we have assessed our proposed model using the two most prominent datasets, the UR fall detection dataset and the Multiple cameras fall dataset. Our proposed model shows an impressive performance on these datasets compared to other existing models.
Moreover, with the expansion of technology, human beings are leading a sedentary lifestyle, which has numerous negative impacts [8]. Because of this lifestyle, people are suffering from many more diseases like chronic heart disease, muscle weakness, labyrinthitis and osteoporosis. Besides, the number of people living alone is increasing with the advancement of technology, which underlines the necessity of a monitoring system to detect adverse events [9]. This model will be applicable in medical alert systems and even in old homes for monitoring individuals. However, the key contributions of our proposed model are enlisted below: The later part of this paper is organized as follows: Section 2 represents the literature discussion on human fall events, Section 3 describes the workflow of the proposed architecture, Section 4 presents experimental details to evaluate the efficiency of the proposed model and Section 5 discusses the limitations of the proposed model along with potential future work.
Related Work
In this decade, among different types of deep learning models [10], the convolutional neural network (CNN) has acquired immense success in computing tasks like image segmentation [11], object recognition [12], natural language processing [13], image understanding [14] and machine translation [15], which require a huge training dataset to complete the set-up.
A study by Stone, Erik E. and Marjorie S. reveals that falls of older people at home happen most of the time in dark conditions [16]. Sowmya K. and Kang-Hyun Jo [17] classify fall events in a cluttered indoor environment by lessening occlusion effects. However, the knowledge of a series of poses is a key to distinguishing non-fall from fall. For foreground extraction, a frame differencing method is implemented and the human silhouette is extracted using an ellipse fit. A binary support vector machine (SVM) is applied to differentiate fall frames from non-fall frames. But SVM underperforms in the case of an insufficient training data sample.
Convolutional neural network (CNN) is used to identify different poses in [18]. Here, in different illumination states, the background subtraction method misclassifies some data because of shadows. As a result, it generates false predictions in bending, crawling and sitting positions. Tamura et al. [19] developed a human fall detection system using a gyroscope and an accelerometer. When a fall action is detected, it triggers a wearable airbag. To design the system, 16 subjects were recruited to perform mimicked falls and a thresholding technique is applied to perform this action.
A real-life action recognition system is overviewed in [20] using a deep bidirectional LSTM (DB-LSTM) and a convolutional neural network (CNN). Here, DB-LSTM recognizes hidden sequential patterns in the features, whereas CNN extracts data from video frames. But it shows false predictions with identical backgrounds and in occluded environments.
Du et al. [21] conducted research on a convolutional neural network (CNN) to extract the skeleton joint map from different images. However, the result could be improved if a recurrent neural network (RNN) were implemented properly, as in [22]. Anderson et al. [23] created a surveillance environment using multiple cameras. Human silhouettes captured from the cameras are converted to 3-D representations known as a voxel person. Finally, a fall event is classified from linguistic summarizations of temporal fuzzy inference curves, which represent the states of the voxel person.
Different types of human action can be represented as the movement of the skeleton, as the human body is an articulated system. The 2D skeleton is extracted from RGB sequences in [24] using the DeeperCut method and long short term memory (LSTM) is implemented to identify five different actions. However, it is more challenging in terms of processing speed and recognition performance.
The faster R-CNN method is applied in [25], which achieves an accuracy of 95.5%, as it cannot properly classify fall events when a person is sitting on a sofa or a chair. A PCANet model is trained followed by an SVM classifier in [26], which obtains a lower sensitivity of 88.87% as it cannot identify fall events properly. Moreover, SVM underperforms as fine-tuning hyperparameters in SVM is not so easy.
In [27], curvature scale space (CSS) features are extracted from human silhouettes and an extreme learning machine (ELM) classifier is used to identify fall actions. However, this experiment achieves 86.83% accuracy as it misclassifies the lying and walking positions of humans, since the silhouettes of a falling and a lying person are similar.
A two-stage human fall classification model is proposed in [28], which achieves an accuracy of 97.34%. To identify human posture from the human skeleton, at the preprocessing stage it considers deflection angles along with the spine ratio. To extract the human skeleton it uses OpenPose, and to identify confusing daily living actions from fall events, a time-continuous recognition algorithm is developed. However, this model misclassifies workout motions, and to increase accuracy, the authors propose to develop a deep learning model in the future.
Besides, Chen et al. [29] detect human fall events from the human skeleton information obtained by OpenPose and obtain an accuracy of 97%. To identify fall actions, three efficient parameters are considered here: the centerline angle between the human body and the ground, the speed of descent at the center point of the hip joint, and the ratio of width and height of the external rectangle surrounding the human body. However, this model is unable to classify partially occluded human actions.
Moreover, a vision-based human fall detection model is proposed by Chen et al. [7] for the case of complex backgrounds. They perform the mask R-CNN method to extract the object from a noisy background. Afterward, for fall action detection, an attention-guided Bi-directional LSTM is applied and it acquires 96.7% accuracy. However, this model cannot identify the behavior of multiple people living in the same room.
Nowadays, different types of wearable sensors, that is, accelerometers and buttons, are most frequently used to detect falls at the right time. However, using such detectors is uncomfortable and, most of the time, older people forget to wear these sensors. Moreover, using a help button is worthless if the person has fainted or is immobilized. Such a framework for elderly fall classification and notification is proposed in [9]. Here, the tri-axial acceleration of human movement is measured with a cell phone. Both time domain and frequency domain features are considered here. After feature extraction and pre-processing, they have performed a deep belief network with a view to training and testing the system. It shows a sensitivity of 97.56% along with a specificity of 97.03%. However, for elderly people, it is not always possible to carry a cell phone in an indoor environment.
In [8], the authors conducted a highly promising experiment to develop a hybrid model combining a machine learning method with deep learning. After analyzing different comparative experiments, they proposed an architecture of CNN with LSTM for posture detection with an accuracy of 98%. This CNN model has been designed with 10 layers without any batch normalization. Unnormalized data and a low dropout rate of 20% can lead to a huge training time on the dataset and the performance of the model may also be affected. Table 1 represents the summarization of this literature discussion.

Table 1. Summarization of literature discussion.
[17] Proposed model: Frame differencing for foreground extraction, ellipse fit for human silhouette extraction and SVM classifier for fall classification. Limitation: SVM underperforms in case of an insufficient training data sample. Accuracy: 96.34%
[18] Proposed model: Background subtraction method to extract the foreground and CNN to classify fall events. Limitation: Generates false predictions in bending, crawling, and sitting positions. Accuracy: 90.2%
[20] Proposed model: CNN for feature extraction and DB-LSTM to recognize sequential patterns. Limitation: Shows false predictions at identical backgrounds and in occluded environments. Accuracy: 92.66%
[21] Proposed model: CNN to extract the skeleton joint map. Limitation: The result could be improved if a recurrent neural network were incorporated. Accuracy: 94%
[23] Proposed model: Human silhouettes are converted to a voxel person and linguistic summarizations of temporal fuzzy inference curves classify fall events. Limitation: Cannot properly classify fall events when a person is sitting on a sofa or a chair. Accuracy: 95.5%
[26] Proposed model: PCANet model trained followed by an SVM classifier. Limitation: Low accuracy rate. Accuracy: 88.87% (sensitivity)
[27] Proposed model: CSS features extracted from human silhouettes and an extreme learning machine (ELM) classifies fall events. Limitation: Misclassifies the lying position. Accuracy: 86.83%
[28] Proposed model: OpenPose extracts the human skeleton and a time-continuous recognition algorithm identifies fall events. Limitation: Unable to classify partially occluded human actions. Accuracy: 97%
[7] Proposed model: Mask R-CNN extracts the object from a noisy background and Bi-LSTM classifies human actions. Limitation: Cannot identify the behavior of multiple people living in the same room. Accuracy: 96.7%
[9] Proposed model: Deep belief network for training and testing human actions. Limitation: Not always possible to carry a cell phone in an indoor environment.
[8] Proposed model: CNN combined with LSTM for posture detection. Limitation: Unnormalized data may lead to huge training time. Accuracy: 98%

As deep learning models have outperformed state-of-the-art models, there is much scope for innovation and development in this research area. However, a computer vision-based model is proposed to classify fall events at the right time, which provides complete information regarding the movement of a person.
Workflow of Proposed Architecture
Monitoring of elderly people is done through a digital video camera, which will be placed in the room, as there are some distance limitations with the Kinect camera. Although it has some privacy concerns, it will give us information about the surroundings in case of fall events. Sequential frames are generated from videos of variable length. Frames are passed into the convolutional neural network to extract key features. After that, these features are passed into a gated recurrent unit. The output from the GRU is passed to a sigmoid classifier to predict the class. Figure 1 illustrates an overview of the proposed network.
Frame Generation from Video
There are approximately 300 videos of various durations from the different datasets. From each video, we need to pass a sequence of frames to the CNN. As mentioned earlier, we picked 10 distributed images from the whole video rather than considering all frames. The algorithm to generate the frame sequence from a video is given below. We performed a frame differencing method in Algorithm 1, along with other operations, for selecting important and informative frames from the video.
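A frame-extraction step of this kind can be sketched with OpenCV as follows; the differencing threshold, the resize target and the helper name `extract_frames` are illustrative assumptions rather than the exact settings of Algorithm 1.

```python
import cv2
import numpy as np

def extract_frames(video_path, n_frames=10, size=(150, 150)):
    """Pick n_frames evenly distributed frames from a video clip.

    A simple frame-differencing score is used so that nearly static
    (uninformative) frames can be skipped; the threshold is illustrative.
    """
    cap = cv2.VideoCapture(video_path)
    frames, prev_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Keep the frame only if it differs enough from the previous one.
        if prev_gray is None or cv2.absdiff(gray, prev_gray).mean() > 2.0:
            frames.append(cv2.resize(frame, size))
        prev_gray = gray
    cap.release()
    # Sample n_frames indices evenly over the retained frames.
    idx = np.linspace(0, len(frames) - 1, n_frames).astype(int)
    return np.array(frames)[idx]
```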
Preprocessing
For improving image properties, eliminating noisy artifacts and enhancing certain features, it is necessary to preprocess the data. We have performed preprocessing in three steps: resizing, augmentation and normalization. In order to minimize the computational cost, frames are resized to 150 × 150.
On the resized images, augmentation is performed, which transforms frames at each epoch of training. For augmentation, we have performed zoom, horizontal flip, rotation, width shift and height shift. This helps with better generalization of the model.
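These preprocessing steps map naturally onto Keras' ImageDataGenerator; in the minimal sketch below the rescaling factor stands in for pixel normalization, and the transform ranges are illustrative assumptions, since only the transform types are named above.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescaling normalizes pixel values to [0, 1]; the transform magnitudes
# below are assumptions chosen for illustration only.
augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    zoom_range=0.2,
    horizontal_flip=True,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
)
```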
Convolutional Neural Network
Convolutional neural network is a type of deep neural network which extracts key features from images using learnable weights and biases and can differentiate one object from another. Comparing with other classification algorithms, it requires much lower preprocessing of images. It consists of an input layer, an output layer along with a series of hidden layers. The hidden layers typically consist of a stack of convolutional layers which perform pixel-wise convolution or multiplication operation and generate a convolved image. The activation function used here is a rectified linear unit (ReLU) layer which is followed by a series of pooling layers and fully connected layers.
There are three types of the pooling operation. These are max pooling, min pooling, and average pooling operation. Among these, the max-pooling operation is most frequently used as it minimizes computational cost along with learnable parameters.
For extracting features from images, CNN uses this series of convolutional layers followed by different pooling layers, a flatten layer as well as a fully connected layer. Each layer works with different activation functions. Following this, the proposed architecture is also designed with a series of convolution layers with the 'ReLU' activation function. The ReLU activation function generates a rectified feature map as output. It does not excite all neurons at a time. When the output of the linear transformation is negative, the neuron's output is set to zero. Although there are different types of pooling layers, we have used the max-pooling layer in our architecture.
Custom 2DCNN-GRU Architecture
Preprocessed data are fed to the CNN. We have proposed a CNN architecture for extracting spatial features. As the environment in the dataset is less complex, 16, 32, 64, 128, 256, and 512 kernels are used sequentially in the convolution layers to extract features from the image. Weights are tuned for each layer depending on the activation function. This network gives much more accuracy than other pre-trained CNN networks because an experiment was done for choosing the kernel initializer along with the activation function. In this model, the he_uniform kernel initializer is used for initializing the kernels, and weights are updated during training with the ReLU activation function. We have also conducted an experiment for choosing an appropriate momentum for batch normalization. We have performed batch normalization with a momentum of 0.9. Batch normalization standardizes the mean and variance to make learning stable. Here, it takes 83 s to complete an epoch using batch normalization. Without batch normalization, it takes 99 s per epoch when training on the dataset. After batch normalization, the max pooling operation with stride (2, 2) is performed to extract the strong edges in the image. This also helps to remove shallow edges corresponding to noise. The outputs of the convolution layer and max-pooling layer are shown, respectively, in Figures 2 and 3. Figure 2 depicts the convolved image and Figure 3 illustrates the pooled feature map. After that, a time-distributed layer is used with 512 nodes to prepare data for the RNN. Then GRU cells are used to follow the temporal dependency of video frames. The output of the GRU cell is passed to the dense layer. For eliminating overfitting, we have experimented on the dropout rate, which is described in this paper. In this regard, a dropout rate of 0.5 is used in the dense layer for better accuracy. Adam optimizer with a learning rate of 0.0001 is used in the model. The detail of the proposed 2DCNN-GRU architecture is depicted in Figure 4.
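Putting the stated design choices together (16-512 kernels, he_uniform initialization, batch normalization with momentum 0.9, (2, 2) max pooling, a 512-node time-distributed layer, a GRU, 50% dropout, a sigmoid output, Adam with a learning rate of 0.0001 and binary cross-entropy), a Keras sketch of such a network could look as follows. The 3 × 3 kernel size, the number of GRU units and the use of exactly one convolution layer per block are assumptions, since they are not fully specified above.

```python
from tensorflow.keras import layers, models, optimizers

def build_cnn_gru(n_frames=10, height=150, width=150, channels=3):
    inp = layers.Input(shape=(n_frames, height, width, channels))
    x = inp
    # One convolution block per stated kernel count: Conv -> BatchNorm -> MaxPool.
    for filters in (16, 32, 64, 128, 256, 512):
        x = layers.TimeDistributed(
            layers.Conv2D(filters, (3, 3), padding="same",
                          activation="relu",
                          kernel_initializer="he_uniform"))(x)
        x = layers.TimeDistributed(layers.BatchNormalization(momentum=0.9))(x)
        x = layers.TimeDistributed(layers.MaxPooling2D((2, 2), strides=(2, 2)))(x)
    x = layers.TimeDistributed(layers.Flatten())(x)
    # 512-node time-distributed layer feeding the recurrent part.
    x = layers.TimeDistributed(layers.Dense(512, activation="relu"))(x)
    x = layers.GRU(128)(x)        # number of GRU units is an assumption
    x = layers.Dropout(0.5)(x)    # 50% dropout before the output layer
    out = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inp, out)
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```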
Gated Recurrent Unit (GRU)
Among all kinds of human movements, to classify fall events we need to consider temporal features along with spatial features. The recurrent neural network can extract temporal features by remembering necessary information from the past. However, during this operation, it faces vanishing and exploding gradient problems. In our model, we have used the gated recurrent unit (GRU) network to solve this problem of RNN using an update gate and a reset gate, which are vectors that decide what information needs to be passed as output. These gates can be trained to hold information from long ago, without erasing new input through time, and pass relevant information for prediction to the next time steps. GRU gives a much better result than a long short term memory (LSTM) cell because of its simple architecture. Figure 5 depicts the architecture of a GRU cell. There are three gates in GRU, called the Update gate, the Reset gate, and the Current memory gate, and there is no internal cell state.
The Update gate identifies significant preceding information from the antecedent time steps, which is required to be passed along with future information. The reset gate decides how much of the past information needs to be forgotten. The current state gate determines the current state information along with the relevant information from the past. Mathematical equations learned from [30] are given below, which are used to perform these operations:

z_t = σ(W^(z) x_t + U^(z) h_(t−1))
r_t = σ(W^(r) x_t + U^(r) h_(t−1))
h̃_t = tanh(W x_t + U(r_t ⊙ h_(t−1)))
h_t = (1 − z_t) ⊙ h_(t−1) + z_t ⊙ h̃_t

Here, x_t represents the mini-batch input, h_(t−1) acts as the hidden state of the previous time step, W^(z) and W^(r) are the weight parameters applied to the current input, U^(z) and U^(r) are the weight parameters applied to the previous hidden state, and ⊙ denotes element-wise multiplication. The output of the GRU cell is passed to the dense layer with a dropout rate of 50%. Finally, the sigmoid activation function [31] is applied for binary classification of fall and non-fall action through the following equation:

σ(x) = 1/(1 + e^(−x))
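For concreteness, a single GRU time step in this standard formulation can be sketched in NumPy as follows; this is a plain illustration of the common Cho et al. form of the equations, not the library implementation used in the experiments, and bias terms are omitted for brevity.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, W, U):
    """One GRU time step (standard formulation, biases omitted)."""
    z = sigmoid(Wz @ x_t + Uz @ h_prev)             # update gate
    r = sigmoid(Wr @ x_t + Ur @ h_prev)             # reset gate
    h_tilde = np.tanh(W @ x_t + U @ (r * h_prev))   # candidate (current memory)
    return (1.0 - z) * h_prev + z * h_tilde         # new hidden state
```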
Deep learning models are efficiently used to lessen uncertainty, and entropy measures the uncertainty level. Binary cross-entropy plays a significant role in alleviating the incongruity between the empirical distribution of the training data and the distribution induced by the model. The following binary cross-entropy loss function [32] is used to compare the predicted class with the actual class.
Binary cross entropy loss = −(1/N) Σ_i [t_i log(p_i) + (1 − t_i) log(1 − p_i)]

Here, t_i denotes the truth value, p_i represents the sigmoid probability of the ith class and N is the number of samples.
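As a quick numerical illustration of this loss (with arbitrary values, not data from the experiments):

```python
import numpy as np

def binary_cross_entropy(t, p):
    """Mean binary cross-entropy between truth values t and sigmoid outputs p."""
    t, p = np.asarray(t, dtype=float), np.asarray(p, dtype=float)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))

# Two hypothetical frame sequences: one fall (1) and one non-fall (0).
print(binary_cross_entropy([1, 0], [0.9, 0.2]))  # ~0.164
```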
Experiments
For executing this experiment, we have used a machine with an AMD Ryzen 7 2700X eight-core 3.7 GHz processor, 32 GB of RAM and an NVIDIA GeForce RTX 2060 SUPER with 8 GB of GPU memory. We have also used Google Colab for faster execution.
Dataset Description
We have conducted this experiment on the UR fall detection dataset and multiple cameras fall dataset to examine the accuracy of our proposed model.
UR Fall Detection Dataset
The UR fall detection dataset is one of the most widely used benchmark datasets, which comprises 70 indoor videos, among which 30 are fall events and 40 are daily living activities. Here, fall activities are recorded using two Kinect cameras, whereas daily living activities are recorded using only one camera. In this dataset, there are 30 frames per video. Each video possesses a resolution of 640 × 240.
Multiple Cameras Fall Dataset
Multiple cameras fall dataset is one of the most extensive datasets which is widely used to classify human fall action. It includes 192 videos where 96 videos represent fall events and 96 videos are of regular indoor activities. This dataset is recorded with 24 scenarios which represent 9 different activities like walking, falling, lying on the ground, crouching, and so on. Eight different cameras are used to capture each activity. Each video has a frame rate of 30 fps with a frame size of 720 × 480.
Evaluation Metrics
The proposed model is also evaluated in terms of accuracy, precision, sensitivity, specificity and F1-score. Accuracy demonstrates the rate of correctly classified data using the following equation [33]:

Accuracy = (TP + TN)/(TP + TN + FP + FN)

where TP represents the true positives, that is, the detected result is a fall, which is actually a fall event; TN represents the true negatives, that is, the detected event is a non-fall, which is actually a non-fall event; FP represents the false positives, that is, the detected event is a fall, which is a non-fall event in real-time; and FN stands for the false negatives, which means that the detected result is a non-fall but it is actually a fall event of the human being. In the case of binary classification, a 100% precision score signifies that every element predicted as positive truly belongs to the positive class, which is calculated by the equation [33]:

Precision = TP/(TP + FP)

Sensitivity gives the probability of a positive result on the test data through the equation [33]:

Sensitivity = TP/(TP + FN)

The equation [33] to calculate specificity, which provides the probability of a negative result on the test data, is given below:

Specificity = TN/(TN + FP)
F1-score implies the harmonic mean between precision and sensitivity. The following equation [33] is used to calculate the F1-score of the test data:

F1-score = 2 × (Precision × Sensitivity)/(Precision + Sensitivity)
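Since all of these quantities follow directly from the confusion-matrix counts, they can be computed in a few lines of code; the counts used in the example call are hypothetical and only illustrate the usage.

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the evaluation metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f1

# Hypothetical counts for illustration only.
print(classification_metrics(tp=24, tn=23, fp=1, fn=0))
```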
Results and Discussion
To implement this scratch model, we have made several experiments on the percentage of training and validation data to achieve better accuracy, and we have obtained a mean accuracy of 99%, with 99.8% and 98% accuracy for the UR fall detection dataset and the Multiple cameras fall dataset, respectively. The Multiple cameras fall dataset gives less accuracy than the UR fall detection dataset because of the large number of videos in the Multiple cameras fall dataset. In the Multiple cameras fall dataset, our model misinterprets when a person lies down intentionally. Our proposed model outperforms when validation data is below 50%, which is shown in Figures 6-8. Considering this, 35% of the videos are used for validation and 25% are used for testing. The rest of the data are used for training. Figures 6-8 illustrate that at the 40th epoch, the model achieves the best training and validation accuracy for 40% training, 35% validation and 25% testing data. An experiment is done using different frame numbers, illustrated in Figure 9. This reveals that fewer than eight frames per video decrease the accuracy rate because the model cannot consider all significant frames. Similarly, more than 18 frames per video decrease the processing speed because of the large number of computations. The change in the execution time of the proposed model according to the number of input frames is described in Table 2. Here, we see that the fewer the number of input frames, the faster the execution of the model. However, this decreases the accuracy rate because of the absence of enough information, according to Figure 9.
Therefore, we have chosen 10 distributed frames from the whole video to reduce computational complexity along with better accuracy throughout the whole experiment. Figures 10 and 11 illustrate these 10-frame sequences of an input video of daily activities and of fall events, respectively, in the Multiple cameras fall dataset. As stated earlier, in this scratch model we have performed batch normalization to achieve faster convergence during training. Therefore, Table 3 evaluates the performance using normalized and unnormalized data. It depicts that normalized data achieve a high accuracy rate within a lower number of epochs than unnormalized data. The outputs of different layers for 5 of the 10 distributed frames are illustrated in Figure 12. The low- to high-level feature maps in Figure 12 show that a deep neural network acts like a black box, which extracts features of the input image.
At the preliminary stage, the layers extract shallow features like colors and lines, while the deeper layers extract more detailed patterns. Higher level features are encoded in a way that is difficult to extract and present in a human-readable format [34]. Therefore, deep learning models can extract higher level features that are abstract and more immense in amount than the features considered by humans. Figure 13 shows the output of this experiment to classify human fall events. Here, the first 4 rows represent 5 frames of 4 non-fall event videos and the rest of the rows represent 5 frames of 4 fall event videos. Figure 14 demonstrates the attention map, where the blue region depicts the region of interest after executing this model, which classifies fall and non-fall events.
Here, a dropout rate of 50% is used in the dense layers because a dropout rate higher or lower than 50% gives lower accuracy along with overfitting problems on the training data. The impact of the dropout rate on accuracy is illustrated in Figure 15. This figure illustrates that the proposed model achieves a maximum accuracy of 99.8% at a dropout rate of 50% for the UR fall detection dataset.
Tables 4 and 5 represent a summary of the test accuracy for different existing models, where VGG 16 and VGG 19 give 98% test accuracy and Xception generates 99% accuracy but uses a huge number of parameters compared with our proposed model. As VGG 16, VGG 19, and Xception are pre-trained models, the number of parameters and depth of these models are observed from [35], whereas the rest are known experimentally. In [36], the authors classify human fall action using 3DCNN combined with LSTM and obtain an accuracy of 99%. However, 3DCNN combined with LSTM needs a huge number of parameters. Moreover, a training iteration for 2DCNN needs 0.5 s per pass whereas 3DCNN lasts for 3 s. We have also conducted an experiment using 2DCNN with LSTM and it gives an accuracy of 89% for the same dataset. It is difficult to estimate the training time of the parameters of the models as it depends on the GPU model. Using our hardware configuration mentioned earlier and the GPU, we have executed some deep learning approaches. The number of parameters, depth, and training times for these models are represented in Table 6. From this table, it is seen that the training time of our proposed model is shorter because it has fewer parameters than the others. Considering all these issues, our proposed model gives an average of 99% accuracy using a lower number of parameters along with lower depth. This is illustrated by the confusion matrices in Figures 16 and 17 for the different datasets. The elements on the dark blue diagonal demonstrate the number of correctly classified Fall and Non-Fall events. Here, we see that the global accuracy, which is the ratio of the sum of the diagonal elements to the sum of all elements in the entire matrix, is 100% for the UR fall detection dataset. From the confusion matrix for the UR fall detection dataset in Figure 16, we see that there are 25 test data points and all are classified correctly. In the confusion matrix for the Multiple cameras fall dataset in Figure 17, there are 48 test data points. Among them, one non-fall video is misclassified as a fall event. The class-wise performance like accuracy rate, precision score, sensitivity, specificity, and F1-score of our proposed 2DCNN-GRU model using the UR fall detection dataset and the Multiple cameras fall dataset is shown in Tables 7 and 8. Tables 9 and 10 illustrate the comparison of the performance of our proposed model with some existing models. Using the UR fall detection dataset, our model achieves a maximum accuracy of 99.8% compared with Kasturi et al. [17] and Lu et al. [36], which are based on a support vector machine (SVM) and 3DCNN followed by LSTM, respectively. On the Multiple cameras fall dataset, the proposed model outperforms, acquiring 98% accuracy compared to Wang et al. [26], which is based on PCANet, and it is also much higher than Ma et al. [27], based on an extreme learning machine (ELM). Table 9. Performance comparison with existing models using UR fall detection dataset.
Conclusions
As deep learning algorithms outperform other feature extraction algorithms, in this paper we have proposed a combined architecture where CNN is incorporated with GRU. It is quite challenging to identify human falls at the right time, not only to minimize the negative consequences of a fall but also to increase the acceptance level among elderly people. We have used two existing benchmark datasets for fall classification, the UR fall detection dataset and the Multiple cameras fall dataset, with videos of variable length. The frame number from each video is chosen empirically. To perform classification on these datasets, a scratch model is proposed where CNN followed by GRU is performed, which is our key contribution. Another novelty of our work is that we have evaluated some transfer learning models along with other deep learning models using the same datasets. However, the proposed model outperforms with an accuracy of 99%, with a lower number of parameters than other existing architectures, through parameter tuning. The binary cross-entropy loss function outperforms others like mean squared error, hinge loss, and so forth. Despite this, this experiment would be much better if we could enrich our dataset. In the future, we can include an alarm system to take necessary action in time and reduce the after-fall effects. Moreover, the proposed architecture extracts features from the entire frame. Besides, we can reduce the computation time if we consider only the salient region. Furthermore, this model can also collaborate with people counting techniques to identify the actions of different people residing in the same room.
Author Contributions: All authors contributed equally to the conception of the idea, the design of experiments, the analysis and interpretation of results, and the writing and improvement of the manuscript. All authors have read and agreed to the published version of the manuscript. | 2021-03-29T05:17:20.141Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "6ac4cdafdf7f2bad8835c457e191f0af54b4be69",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1099-4300/23/3/328/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ac4cdafdf7f2bad8835c457e191f0af54b4be69",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
249937758 | pes2o/s2orc | v3-fos-license | The Multigrid Method for the Combined Hybrid Element of Linear Elasticity Problem
Combined hybrid element method is a stable finite element discretization method in which the famous Babuska–Brezzi condition is satisfied automatically, so the method can be applied more widely than other kinds of mixed/hybrid element methods. In this paper, we develop a non-nested multigrid algorithm for combined hybrid quadrilateral or hexahedron elements of the linear elasticity problem. The critical ingredient in the algorithm is a proper intergrid transfer operator. We establish such an operator on quadrilateral or hexahedron meshes and prove the mesh-independent convergence of the kth level iteration and the full multigrid algorithm in the L2 norm. Numerical experiments are reported to support our theoretical results and illustrate the efficiency of the developed methods. We also give numerical experiments showing the convergence of the developed method as Poisson's ratio approaches 0.5.
Introduction
Finite element method is one of the most efficient numerical methods for the linear elasticity problem. So far, there are a number of previous works on finite element methods for this problem. Various conforming [1,2] and nonconforming [3][4][5][6] finite elements were applied to this problem. To approximate the stress variables independently, mixed and hybrid element methods [7][8][9][10] were proposed. But in these two methods, the field variables must satisfy the so-called Babuska-Brezzi condition [11], which restricts the wide and convenient application of the methods. To avoid this restriction, Zhou [12,13] put forward the combined hybrid element method. It is a stabilized hybrid element method where the Babuska-Brezzi condition is satisfied automatically when the displacement space is weakly compatible. Therefore, a wider range of approximation spaces is supplied for the field variables. Furthermore, since the energy error can be reduced by adjusting the combined parameter in the combined hybrid variational principle [14], more exact numerical solutions can be obtained. But like other finite element methods, the combined hybrid finite element may still lead to an ill-conditioned linear system as the mesh size goes to zero. Hence, it is necessary to consider an efficient solver for the discrete system.
Multigrid method (MGM) has been used extensively to efficiently solve the linear systems arising from various finite element discretizations of the elasticity problem. Lee [15] adopted MGM for the discrete system from the P1 conforming mixed element. Xu [16] focused on a MGM for the Wilson nonconforming element. A P1 nonconforming mixed element MGM has been provided by Brenner [17]. In 2009, Lee [18] developed a robust MGM for higher order conforming elements. A type of multigrid method based on local relaxation has been applied to the system discretized by an adaptive finite element method [19]. In recent years, Xiao [20,21] proposed the algebraic multigrid method, which is based only on information available from the linear system to be solved. To the best of our knowledge, there is almost no corresponding work on MGM for the combined hybrid elements developed by Zhou and Nie [22,23]. In this paper, we will present the convergence of the multigrid algorithm for a series of combined hybrid quadrilateral or hexahedron elements.
In the combined hybrid element formulation, one can eliminate the stress unknowns and obtain a global system solely involving the displacement parameters. Once the displacement is computed, the stress can be obtained through local operations on each element. So, we need an efficient solver for the global system, and in this paper we apply a multigrid method to it. The work [17] presented the multigrid method for the P1 nonconforming mixed element and provided a direct convergence analysis in the L2 norm. In order to prove the convergence of the proposed method, we will use the idea in [17]. Within the framework developed there, it is critical to establish a transfer operator which is bounded in the L2 norm. However, this is not trivial, and there are difficulties in the following two aspects: (1) how to design an appropriate intergrid transfer operator for the non-nested multigrid method and (2) how to prove the bounded property of the operator on quadrilateral or hexahedron elements rather than just rectangular or cube elements.
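To make the stress elimination mentioned at the start of this paragraph concrete, the sketch below shows element-level static condensation of a generic saddle-point block system; the matrices S, B, K are placeholders of our own and do not reproduce the actual combined hybrid bilinear forms.

    import numpy as np

    def condense_stress(S, B, K, f):
        """Eliminate the stress unknowns from a generic element system

            [ S    B ] [sigma]   [0]
            [ B.T  K ] [  u  ] = [f]

        With S positive definite, sigma = -inv(S) @ B @ u, and the
        displacement-only (condensed) system is (K - B.T inv(S) B) u = f.
        """
        SinvB = np.linalg.solve(S, B)          # S^{-1} B
        K_cond = K - B.T @ SinvB               # condensed displacement matrix
        recover_stress = lambda u: -SinvB @ u  # local recovery of the stress
        return K_cond, f, recover_stress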
In this paper, an intergrid transfer operator defined on quadrilateral or hexahedron elements is given, and then its stable property is verified by direct computation and the scaling argument. Based on this property and the proof idea in [17,24], the optimal convergence of the kth level iteration can be achieved in the L2 norm. Subsequently, we prove that the solution of the full MGM satisfies the same type of error estimates as the discretization error. Furthermore, numerical examples are supplied to justify the convergence theory and demonstrate the effectiveness of the method. At last, we propose an effective multigrid method for the nearly incompressible elasticity problem. The remainder of this paper is organized as follows. In the next section, some basic conclusions of combined hybrid elements are reviewed. In Section 3, an intergrid transfer operator is given and its stable property in the L2 norm is analyzed; then, MGM is described in Section 4. By introducing a number of technical lemmas, the contracting property of the kth level iteration and the convergence theorem for the full MGM follow in Section 5. In Section 6, numerical examples of elasticity mechanical problems are investigated. Some concluding remarks are given in the final section.
Combined Hybrid Element Method
We consider the linear elasticity problem (1), where u is the displacement field, σ is the stress tensor, ε is the strain, f is the body force, D is the elastic modulus matrix, and Ω is a convex polygonal domain in R^n with boundary ∂Ω. Assume that T_0 = {Ω_j} denotes a quadrilateral or hexahedron subdivision of Ω, with the mesh size h_0 = max{diam Ω_j : Ω_j ∈ T_0}. For 1 ≤ k ≤ J, T_k is obtained from T_{k-1} by connecting the midpoints of the opposite edges of T_{k-1}, and h_{k-1} ≤ 2h_k. From now on, we always assume that the mesh partition is shape-regular and quasiuniform.
To relax the continuity requirements on the displacement and stress, piecewise Sobolev spaces V and U and a Lagrange multiplier space U_c are used (see [22] for details). On these spaces, the Hellinger-Reissner principle based on domain decomposition and its dual principle are incorporated into one system by a weight factor α (0 < α < 1), and the so-called combined hybrid variational principle [12,13] for (1) is formulated as follows: find (σ, u, u_c) ∈ V × U × U_c such that (3) holds, where n represents the unit outer normal to ∂Ω_j. According to [12,13], there exists a unique solution of (3). We consider the discrete formulation of (3) in the following. Firstly, Wilson's interpolation space U_k is introduced to approximate U. On each element Ω_j, a function v ∈ U_k is described by v_i (i = 1, . . . , 4), the function values at the vertexes of Ω_j, and by λ_i (i = 1, 2), the mean values of the second derivatives, where J_j is the Jacobian of F_j. Hence, v is uniquely determined by v_i (i = 1, . . . , 4) and λ_i (i = 1, 2) on Ω_j. Since U_k is weakly compatible, an interpolation operator T_c : U_k ⟶ U_c exists; then T_c(U_k) is employed as the discrete space of U_c. By such an arrangement, the variable spaces are simplified.
Secondly, the stress discrete space can be one of the following spaces: the piecewise constant stress space, the piecewise linear stress space, Pian and Sumihara's stress space [25], or the piecewise linear stress space constrained by the energy compatibility condition [22]. They can be written as V_{0,k}, V_{1,k}, V_{P-S,k}, and V_{0-1,k}, respectively. The corresponding discrete formulation of (3) is to find (σ_k, u_k) ∈ V_k × U_k such that (9) holds. The combinations U_k × V_{0,k}, U_k × V_{1,k}, U_k × V_{P-S,k}, and U_k × V_{0-1,k} correspond to four kinds of combined hybrid quadrilateral elements, denoted by CH(0), CH(1), CH(P-S), and CH(0-1).
Furthermore, for the hexahedron element case, the Wilson interpolation space is still adopted to approximate U. The complete linear space, or the one with bilinear terms, restricted by the energy compatibility condition, is used for the stress discrete space and is represented by H_{0-1,k} or H_{0-1+,k}. The combinations U_k × H_{0-1,k} and U_k × H_{0-1+,k} correspond to the two kinds of combined hybrid hexahedron elements CHH(0-1) and CHH(0-1)+ [23]. The bilinear form s_k(·, ·) is positively definite, and b_{2,k}(·, ·) and b_{1,k}(·, ·) are bounded.
Then, a linear solver operator T_k : U_k × T_c(U_k) ⟶ V_k exists, so that the stress can be linearly expressed by the displacement. By eliminating the stress, a finally generalized displacement scheme, equivalent to (9), is obtained as follows: find u_k ∈ U_k such that (11) holds. According to [22], a unique solution (σ_k, u_k) ∈ V_k × U_k exists for (9), and the method has a discretization error estimate for the displacement. In the following proofs, C (with or without a subscript) denotes a generic positive constant independent of h_k.
The Intergrid Transfer Operator
The multilevel Wilson interpolation spaces are not nested, so the main complication in a non-nested setting is to design an appropriate intergrid transfer operator to pass data between the meshes. In this section, we establish an efficient intergrid transfer operator and prove its bounded property in the L2 norm.
Let M be a quadrilateral element in T_{k-1}, as shown in Figure 1. a_i (i = 1, . . . , 4) are its four vertices, and a_j (j = 5, . . . , 8) are the midpoints of the four edges. By connecting a_5, a_7 and a_6, a_8, M is divided into four subelements M_j (j = 1, . . . , 4).
Lemma 1. There exists a constant C, independent of the mesh size, such that the intergrid transfer operator I_k^{k-1} is bounded in the L2 norm.
Proof. Since the estimate can be localized, we only need to prove the corresponding bound summed over the four subelements M_j (j = 1, . . . , 4) of each coarse element M. A careful and direct computation yields (18). Let v_{j,i} = (I_k^{k-1} v)(a_{j,i}) (i = 1, . . . , 4) be the function values at the four vertexes of subelement M_j (j = 1, . . . , 4), and let λ_{j,i} (i = 1, 2) be the internal degrees of freedom in M_j (j = 1, . . . , 4). Making an analogy with (18), we have (19). By (19) and the definition of I_k^{k-1}, we obtain (20). It then follows from (18) and (20), together with the scaling argument and (21), that the desired bound holds, where |M_j| and |M| denote the areas of M_j and M, respectively. Therefore, the lemma is proved. □
The Multigrid Method
This section is devoted to describing the kth level iteration and the full multigrid algorithm, a nested iteration of the former. To facilitate the description and analysis, some auxiliary operators are introduced first. The operator A_k : U_k ⟶ U_k is defined through the bilinear form a_k(·, ·); clearly, A_k is symmetric and positive definite. The fine-to-coarse operator I_{k-1}^k : U_k ⟶ U_{k-1} should satisfy the compatibility relation (26). The kth level iteration: Let us consider the algebraic equation (27), A_k y = b, on level k. The solution obtained by the kth level iteration with initial guess y_0 is denoted by MG(k, y_0, b). If k > 1, the procedure can be divided into three steps:
Presmoothing step: starting from y_0, the iterates y_1, . . . , y_{m_1} ∈ U_k are formulated recursively by the damped Richardson sweep (28), y_{m+1} = y_m + (1/Λ_k)(b − A_k y_m), where Λ_k denotes the upper bound of the spectral radius of A_k. Then, a coarse-grid correction is computed by applying the (k − 1)th level iteration to the restricted residual equation, and the correction is transferred back by I_k^{k-1} to give y_{m_1+1}. Postsmoothing step: the iteration (28) is continued with the initial value y_{m_1+1}. So, the result of the kth level iteration is MG(k, y_0, b) = y_{m_1+m_2+1}, where m_1 and m_2 are non-negative integers.
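For orientation, the three steps above can be written as the recursion sketched below. This is only a schematic illustration under our own naming conventions (dense matrices, level 0 as the coarsest level, p recursive coarse-level calls, with p = 2 giving the W-cycle); the smoother is the Richardson-type sweep y ↦ y + (b − A_k y)/Λ_k described above.

    import numpy as np

    def mg(k, y0, b, A, restrict, prolong, Lam, m1, m2, p):
        """k-th level iteration sketch.

        A[k]        : system matrix on level k
        restrict[k] : fine-to-coarse operator from level k to level k-1
        prolong[k]  : coarse-to-fine operator from level k-1 to level k
        Lam[k]      : upper bound of the spectral radius of A[k]
        m1, m2      : numbers of pre-/post-smoothing sweeps
        p           : recursive coarse-level calls (p=1 V-cycle, p=2 W-cycle)
        """
        if k == 0:                                   # coarsest level: direct solve
            return np.linalg.solve(A[0], b)
        y = y0.copy()
        for _ in range(m1):                          # pre-smoothing
            y = y + (b - A[k] @ y) / Lam[k]
        r = restrict[k] @ (b - A[k] @ y)             # restrict the residual
        q = np.zeros(r.shape)
        for _ in range(p):                           # coarse-grid correction
            q = mg(k - 1, q, r, A, restrict, prolong, Lam, m1, m2, p)
        y = y + prolong[k] @ q                       # prolongate and correct
        for _ in range(m2):                          # post-smoothing
            y = y + (b - A[k] @ y) / Lam[k]
        return y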
On the basis of the kth level iteration, the full multigrid algorithm will be constructed as follows.
The full multigrid algorithm: For k = 1, the solution u_1 of (11) is obtained by a direct method. For k > 1, the approximations u_k are obtained recursively by applying the kth level iteration r times, with the coarse-level approximation u_{k-1}, transferred by the intergrid operator, as the initial guess; r is a positive integer independent of k.
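A compact sketch of this nested iteration, reusing the mg routine from the previous sketch, might look as follows; again, the data layout (lists of matrices indexed by level, level 0 as the coarsest) is our own convention rather than the paper's.

    import numpy as np

    def full_mg(J, A, restrict, prolong, Lam, rhs, m1, m2, p, r):
        """Full multigrid: the level-(k-1) approximation, prolongated to level k,
        serves as the initial guess for r applications of the kth level iteration.
        rhs[k] is the right-hand side of the level-k discrete problem."""
        u = np.linalg.solve(A[0], rhs[0])     # direct solve on the coarsest level
        for k in range(1, J + 1):
            u = prolong[k] @ u                # initial guess transferred from level k-1
            for _ in range(r):                # r nested kth level iterations
                u = mg(k, u, rhs[k], A, restrict, prolong, Lam, m1, m2, p)
        return u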
Convergence Analysis
In this section, we focus on the proof of the contracting property of the kth level iteration and the convergence of the full MGM for the combined hybrid elements. Following [17,26], the contracting property of the kth level iteration mainly depends on that of the two-level iteration. Thus, we begin with the analysis of the two-level grid algorithm.
Assume that the relaxation operator in the smoothing step is defined by R_k := I − (1/Λ_k) A_k. A trivial computation gives y − y_m = R_k^m (y − y_0). Let y and y_{m_1+m_2+1} be the accurate solution and the two-level grid approximation solution of (27), respectively; the error can then be expressed in terms of q, the exact solution of the residual equation on the (k − 1)th level. Set e_{m_1} := y − y_{m_1} and e_0 := y − y_0; then, we have the following lemma.
Proof. By (24)-(26), we have the stated identity. Owing to the positivity of a_{k-1} on U_{k-1}, the proof is complete. It follows from (34) and Lemma 2 that the above equality implies the relation between the initial error and the final error of the two-level grid algorithm. If R_k^m and I − I_k^{k-1} P_{k-1}^k can be estimated, the convergence of the two-grid algorithm will be obtained.
First, we introduce a series of mesh-dependent norms on U_k. From the spectral theorem, there exist eigenvalues 0 < μ_1 ≤ μ_2 ≤ · · · ≤ μ_{n_k} and eigenfunctions ψ_1, ψ_2, . . . , ψ_{n_k} ∈ U_k, with (ψ_i, ψ_j)_{L2(Ω)} = δ_{ij}, such that A_k ψ_i = μ_i ψ_i. Given any v ∈ U_k, then v = Σ_{i=1}^{n_k} c_i ψ_i, and the norms ‖|v|‖_{s,k} are defined through this eigenfunction expansion.

Lemma 3 ([17], [27]). The following properties of the mesh-dependent norms hold.

Next, we will estimate the operators P_{k-1}^k and I_k^{k-1}, which play important roles in the proof of the approximation property.
There exists a constant C such that the following estimates for P_{k-1}^k and I_k^{k-1} hold.
Proof. The first estimate follows from Lemma 3, (26), and Lemma 1. Next, let ξ_k ∈ U_k be given and let ξ_{k-1} ∈ U_{k-1} satisfy the corresponding coarse-level problem. Then, there exists a positive constant C such that (44) holds. Proof. Let ξ_{k-1} − P_{k-1}^k ξ_k = η. ‖|η|‖_{0,k-1} will be estimated by the duality argument. Assume that (σ_η, w, w_c) ∈ V × U × U_c satisfies the auxiliary problem (46). By (46), the definition of η, and (26), we obtain (47). It follows from (47), (43), and (42) that it mainly remains to estimate ‖w_{k-1} − I_k^{k-1} w_{k-1}‖_{L2(Ω)}, which we do in the following. Assume that Π_{k-1} : U ⟶ W_{k-1} is the bilinear interpolation operator, where W_{k-1} is the isoparametric bilinear finite element space with respect to T_{k-1}. From the definition of I_k^{k-1}, we have I_k^{k-1} v = v, ∀v ∈ W_{k-1}. Hence, by Lemma 1, the interpolation error estimate, and the discretization error estimate, we get the required bound. Since ‖w‖_{H2(Ω)} ≤ C‖η‖_{L2(Ω)}, it is evident that this, together with (48) and Lemma 3(i), implies the desired result (44).
The proof of Lemma 6 is trivial. We refer the reader to Lemma 3.7 in [17] for more details.
Based on the above lemma, we give the approximation property, which basically relies on the estimate of I − I_k^{k-1} P_{k-1}^k.
Lemma 7 (Approximation property). There exists a constant
C > 0 such that the approximation estimate holds. Proof. Given any v ∈ U_k, take ϕ = P_{k-1}^k v, and let ψ_{k-1} ∈ U_{k-1} satisfy (57). It follows from the reproduction property I_k^{k-1} v = v on W_{k-1} and (26) that Lemma 3(ii), together with the discretization error estimate, the interpolation error estimate, and (44), yields the desired estimate. The smoothing property is mainly measured by R_k^m in the following lemma (Lemma 8); the proof is standard [17,26] and is omitted here.
Combining Lemma 7 with Lemma 8, we have the contracting property of the two-level grid algorithm.
Theorem 1.
There exists a constant C > 0 such that the contraction estimate for the two-level grid algorithm holds. A standard perturbation technique [24,26] yields the contracting property of the kth level iteration.
Theorem 2.
For any 0 < c < 1, the kth level iteration is a contraction with contraction number c, as long as m_1 is chosen to be large enough. Proof. For k = 1, MG(1, y_0, b) = A_1^{-1} b = y, so the conclusion is trivial. For k ≤ n − 1, we assume that Theorem 2 is true, and we consider the case k = n. It follows from (34) that the error can be decomposed accordingly, where q = P_{n-1}^n e_{m_1} satisfies A_{n-1} q = b and q_p, the approximation of q, is obtained by applying the (n − 1)th level iteration p times.
Numerical Example
In order to illustrate the theory developed in the previous sections, numerical results for two-and three-dimensional problems are reported in this section.
The material properties used here are Young's modulus E = 1500 and Poisson's ratio ν = 0.25. The initial mesh subdivision is shown in Figure 2, and a sequence of refinements is produced by connecting the midpoints of opposite edges, as explained in Section 2. The problem (1) is numerically solved by the combined hybrid elements CH(0), CH(P-S), CH(0-1), and CH(1) on the finest mesh levels, respectively. For the resulting discrete systems, we mainly use two different algorithms, i.e., W-cycle MGM and full MGM. The intergrid transfer operator established in Section 3 is utilized. The damped Gauss-Seidel iteration with damping factor ω = 1.5 is the smoother, and m_1 = m_2. The zero vector acts as the initial guess. The stopping criterion is based on the relative residual, where r_k^i denotes the residual after i iterations on the kth level. "Iter" is the number of iterations required to achieve the desired accuracy.
The convergence factor is denoted by q_m := (‖r_k^m‖ / ‖r_k^0‖)^{1/m} [29]. All experiments are run on an Intel Core Duo processor (CPU @ 2.20 GHz, 4 GB RAM).
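For completeness, the damped Gauss-Seidel smoother referred to above is an SOR-type sweep; the following minimal sketch (ours, with illustrative arguments, not the authors' implementation) shows one way to realize it.

    import numpy as np

    def damped_gauss_seidel(A, b, x, omega=1.5, sweeps=1):
        """Damped (SOR-type) Gauss-Seidel smoothing sweeps for A x = b.
        A is assumed square with nonzero diagonal entries."""
        n = A.shape[0]
        for _ in range(sweeps):
            for i in range(n):
                off_diag = A[i, :] @ x - A[i, i] * x[i]   # sum over j != i
                x[i] = (1.0 - omega) * x[i] + omega * (b[i] - off_diag) / A[i, i]
        return x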
The numerical tests are performed in the following aspects: (1) The performance of W-cycle MGM and full MGM is summarized in Tables 1 and 2, respectively. W-cycle MGM with i presmoothing and j postsmoothing sweeps is denoted by W(i, j).
It appears that the two methods converge when the number of smoothing steps is large enough. Moreover, it is evident that the number of iterations and the convergence rate are all independent of the mesh size, which agrees well with the conclusions of Theorems 2 and 3. (2) We test the influence of the number of nested iterations r on the convergence. As different values of r are used in full MGM for CH(0-1), Table 3 gives the number of smoothing steps required to achieve a relative error of less than 10^{-5}. It is observed that the number of smoothing steps increases as the number of nested iterations decreases. (3) We investigate the performance of V-cycle MGM. Table 4 indicates that this algorithm also has a mesh-independent convergence. (4) We apply W(2,2), preconditioned conjugate gradient iteration with diagonal preconditioner (D-PCG), and the SSOR preconditioned conjugate gradient method (SSOR-PCG) to the discrete systems arising from CH(0-1). The numerical results in Table 5 demonstrate that the number of iterations and the CPU time are both optimal for W(2,2), compared with D-PCG and SSOR-PCG.
(5) We test the effectiveness of the aforementioned multigrid method for CH(0-1) when Poisson's ratio ν ⟶ 0.5. The corresponding numerical results are listed in Table 6. The observed behavior is that as ν nears 0.5, the method diverges. Reference [30] indicates that the smoothing of PCG is less sensitive to ν than that of GS, so we adopt SSOR-PCG as the smoother. As can be seen from Table 7, the multigrid method with the SSOR-PCG smoother is efficient for the near-incompressible elasticity problem.
Finally, we apply the multigrid algorithm to the algebraic systems arising from the discretization of the cantilever beam problem by hybrid hexahedron elements.
The cantilever beam problem, as shown in Figure 3, is approximated by two 8-node hexahedron combined hybrid elements, CHH(0-1)+ and CHH(0-1), respectively. W-cycle MGM and full MGM are used for the discrete systems. The trilinear interpolation operator is utilized as the intergrid transfer operator. The damped Gauss-Seidel iteration with damping factor ω = 1.5 is the smoother, and m_1 = m_2. Tables 8 and 9 give the number of iterations (Iter) and the CPU time (Time) of the two methods. It appears that the proposed methods also have mesh-independent convergence.
Conclusions
In this paper, MGM has been applied to the discrete systems (11) arising from the combined hybrid quadrilateral or hexahedron element discretization of the linear elasticity problem (1). The intergrid transfer operator is constructed on quadrilateral or hexahedron meshes, and we prove the boundedness of the operator in the L2 norm. On the basis of this crucial property, the kth level iteration is a contraction in the L2 norm. Moreover, the approximate solution of the full MGM satisfies the same type of error estimates as the discretization error. Two numerical examples are given to illustrate the convergence and efficiency of the proposed method. Finally, we design an efficient multigrid method for the near-incompressible elasticity problem.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2022-06-23T15:12:39.566Z | 2022-06-21T00:00:00.000 | {
"year": 2022,
"sha1": "b88460afd33559ce4247f108ad820e1a50468332",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2022/9915254.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3594777ea178f403426d1d89a778a8c0e51ea06e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
16464332 | pes2o/s2orc | v3-fos-license | A Practical Way for Computing Approximate Lower and Upper Correlation Bounds
Correlations among variables are typically not free to vary between −1 and 1, with bounds determined by the marginal distributions. Computing upper and lower limits of correlations given the marginal characteristics often raises theoretical and computational challenges. We propose a simple sorting technique that is predicated upon a little-known consequence of a well-established fact from statistical distribution theory to obtain approximate correlation bounds. This approach works regardless of the data type or distribution. We believe that it has practical value in appropriately specifying the correlation structure in simulation studies.
INTRODUCTION
It is well known that correlations are not bounded between −1 and +1 in most bivariate settings as different upper and/or lower bounds may be imposed by the marginal distributions (Hoeffding 1940; Frechet 1951). These restrictions apply to discrete variables as well as continuous ones. Although the Frechet-Hoeffding bounds were established years ago, the maximal correlation and anticorrelation aspect is not always recognized within the statistics community. In particular, we feel its utility has not been fully realized in the context of teaching statistics, and more broadly in simulation specifications. In our teaching of graduate-level courses involving computing at the University of Illinois at Chicago in the Division of Epidemiology and Biostatistics, the design and execution of simulation studies is an essential component of the coursework. The steps of model building, estimation, and testing typically require simulation to assess the validity, reliability, and plausibility of the inferential techniques, to evaluate how well the implemented statistical models capture the specified true population values, and how reasonably these models respond to departures from underlying assumptions, among other things. In such simulation studies, the specification of the correlation structure among variables is inevitably required. Students often are unaware of or have difficulty in deriving the correlation bounds (which are subsequently needed in the simulation design) imposed by the marginal features of the variables. This motivates the current work. In Section 4, we propose a potentially useful and practical sorting idea to address this problem.
In our experience, students often specify the correlations using the broadest possible limits (i.e., −1 to +1) in designing simulations. It is not always straightforward to realize that some correlation specifications are simply infeasible. Failure to define the association parameters within a plausible range can easily result in suboptimal, inconsistent, and/or incorrect conclusions. Furthermore, it is a common tendency among simulation designers to try extreme situations to gauge the limits of the statistical procedures or models under consideration. For this reason, these extremes need to be carefully considered and defined. The purpose of this work is to suggest a very simple sorting approach that gives approximate lower and upper bounds for correlation. Again, the theoretical rationale of this technique, which we describe in Section 2, is well-established; however, its practical merits may not be fully appreciated in teaching and designing of simulation studies.
SOME THEORY
Let (F, G) be the set of all cumulative distribution functions (cdf's) H on R^2 having marginal cdf's F and G. Hoeffding (1940) and Frechet (1951) proved that in (F, G), there exist cdf's H_L and H_U, called the lower and upper bounds, having minimum and maximum correlation: for all (x, y) ∈ R^2, H_L(x, y) ≤ H(x, y) ≤ H_U(x, y). If δ_L, δ_U, and δ denote the Pearson correlation coefficients for H_L, H_U, and H, respectively, then δ_L ≤ δ ≤ δ_U. One can infer that if V is uniform in [0, 1], then F^{-1}(V) and G^{-1}(V) are maximally correlated; and F^{-1}(V) and G^{-1}(1 − V) are maximally anticorrelated. This can be proven by the Wasserstein distance between two probability measures, which is a natural way of comparing the probability distributions of two variables (Dobrushin 1970). Heuristically, with a slight abuse of notation, the squared Wasserstein distance can be written as W(F, G) = E(X − Y)^2 = E(X^2) + E(Y^2) − 2E(XY). Since X and Y have fixed marginals F and G, respectively, the quantity E(X^2) + E(Y^2) remains constant for H ∈ (F, G), so W(F, G) is minimized when the covariance term, E(XY), is maximized. The choice (X, Y) = (F^{-1}(V), G^{-1}(V)) does this as it ensures that X and Y covary in the same direction to the greatest possible extent. Similarly, replacing Y by −Y, we see that E(XY) is minimized by taking X = F^{-1}(V) and Y = G^{-1}(1 − V). Under this choice, the minimal covariation (and the maximal Wasserstein distance) between X and Y is reached; that is, X and Y covary in the opposite direction to the greatest possible extent. For a mathematically rigorous proof, see the book by Rachev and Ruschendorf (1998, chap. 3).
SOME CASES ENCOUNTERED IN TEACHING
Random number generation is often one of the core topics in computation-oriented courses. The specification of the correlation structure in multivariate settings needs to be done meticulously so that the prescribed correlations do not go beyond the feasible correlation range. If the computing environment alerts simulation designers of this problem (e.g., by warning or error messages), then remedial action to avoid the discrepancies between expectation and reality that arise from correlation range violations is straightforward. However, depending on the nature of the computing platforms, programs, linkers, compilers, debuggers, and/or idiosyncratic aspects of what is to be simulated, students and other users may not even realize that the specified values are infeasible. More importantly, correlation bounds are typically unavailable in closed form, hence incalculable in most cases.
Below, we provide a few scenarios for the bivariate case from our teaching experience. These consist of situations in which students are asked to generate bivariate data for different types of variables. Students sometimes recognize that there are correlation bounds, and here we indicate and illustrate these for the simpler cases. However, for more general situations, as we describe, closed-form bounds are not always obtainable and students typically have no idea about how to obtain the correlation bounds. Also, while here we focus on the bivariate case, for multivariate settings correct specification of the correlation matrices requires that the specified correlation matrix is positive semidefinite, in addition to the requirement that each pairwise correlation be within the corresponding bivariate correlation range. We further discuss the multivariate case in Section 4.2.
Binary-Normal Example
The first example considers dichotomization of one of the variables. Suppose that students are to generate a binary-normal combination with specified values of correlation and proportion for the dichotomized variable. Let X and Y follow a bivariate normal distribution with a correlation δ_XY. If X is dichotomized to produce X_D, then the resulting correlation between X_D and Y can be designated as δ_{X_D Y} (point-biserial correlation). The effect of dichotomization on δ_XY (biserial or tetrachoric correlation) is given by δ_{X_D Y} = δ_XY h / √(pq), where p and q are the proportions of X values above and below the point of dichotomization, respectively, and h is the ordinate of the normal curve at the same point (MacCallum et al. 2002).
Here, it is well known that the correlation range for δ X D Y is ∓h/ √ pq. Lower and upper bounds with respect to different values of p are shown in Figure 1. Even in the least extreme case (p = 0.5), the correlation bounds are different from −1 and +1; and when p gets closer to 0 or 1, the feasible correlation range becomes narrower, as can be seen in Figure 1.
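As an illustration (our own code, not from the article), the bound h/√(pq) plotted in Figure 1 can be computed as follows; at p = 0.5 it reproduces the ∓0.7979 value quoted in Section 4.1.

    from scipy.stats import norm

    def point_biserial_bounds(p):
        """Correlation bounds between a normal variable and a variable
        dichotomized so that the proportion above the cut point is p."""
        q = 1.0 - p
        cut = norm.ppf(q)            # point of dichotomization
        h = norm.pdf(cut)            # normal ordinate at that point
        bound = h / (p * q) ** 0.5
        return -bound, bound

    print(point_biserial_bounds(0.5))   # approximately (-0.7979, 0.7979)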
Bivariate Binary Example
The second example considers generation of correlated binary data. If students are given the task of simulating bivariate binary outcomes, Y_1 and Y_2, such that E[Y_j] = p_j where j = 1, 2 and corr(Y_1, Y_2) = δ_12, then δ_12 is bounded below by max[−√(p_1 p_2 / (q_1 q_2)), −√(q_1 q_2 / (p_1 p_2))] and above by min[√(p_1 q_2 / (q_1 p_2)), √(q_1 p_2 / (p_1 q_2))], where q_j = 1 − p_j (Emrich and Piedmonte 1991). Lower and upper bounds with respect to p_2 while fixing p_1 at 10 different values (from 0.05 to 0.95 with increments of 0.10) are shown in Figure 2. As |p_1 − p_2| becomes larger, the upper bound moves further away from +1. When the absolute difference gets smaller, the lower bound moves further away from −1, as demonstrated in Figure 2. A comprehensive table of bounds in this setting is given by Demirtas (2006).
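A direct computation of these bounds (our own helper, written to match the formula above) reproduces the ∓0.8165 limits quoted later for p_1 = 0.5 and p_2 = 0.6.

    def binary_pair_corr_bounds(p1, p2):
        """Feasible Pearson correlation range for two Bernoulli variables with
        success probabilities p1 and p2 (Emrich and Piedmonte 1991)."""
        q1, q2 = 1.0 - p1, 1.0 - p2
        lower = max(-((p1 * p2) / (q1 * q2)) ** 0.5,
                    -((q1 * q2) / (p1 * p2)) ** 0.5)
        upper = min(((p1 * q2) / (q1 * p2)) ** 0.5,
                    ((q1 * p2) / (p1 * q2)) ** 0.5)
        return lower, upper

    print(binary_pair_corr_bounds(0.5, 0.6))   # approximately (-0.8165, 0.8165)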
Bivariate Lognormal Example
Suppose Y_1 and Y_2 are jointly normally distributed with zero means, variances σ_i^2, i = 1, 2, and correlation δ_y. Let X_i = exp(Y_i) for i = 1, 2. Then, X_1 and X_2 are jointly lognormally distributed with correlation δ_X = [exp(δ_y σ_1 σ_2) − 1] / √{[exp(σ_1^2) − 1][exp(σ_2^2) − 1]}, and the lower and upper bounds for δ_X can be obtained by substituting −1 and +1 for δ_y. Lower and upper bounds with respect to σ_2 while fixing σ_1 at nine different values (from 0.1 to 0.9 with increments of 0.1) are shown in Figure 3. The most striking trend is seen when σ_2 becomes large (e.g., 2.5): the feasible correlation range is merely a small subset of the nominal (−1, +1) range. As the above three examples have illustrated, the correlation bounds can often be dramatically different from the usual (−1, +1) range; and while, as indicated, there are established formulas for the bounds, students are not always aware of these. Alternatively, the approach we describe in Section 4, which students can readily apply, yields the admissible bounds in a quick and simple manner.
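The bounds follow directly from this expression; the short helper below (ours, not the article's) reproduces both Greenland's (1996) lower limit of −0.6065 for σ_1^2 = σ_2^2 = 0.5 and the (−0.8154, 0.9763) range cited in Section 4.1 for σ_1 = 0.6, σ_2 = 0.3.

    import math

    def lognormal_corr_bounds(s1, s2):
        """Feasible correlation range for X_i = exp(Y_i), where (Y_1, Y_2) are
        zero-mean normal with standard deviations s1, s2; the limits are obtained
        by substituting -1 and +1 for the normal-scale correlation delta_y."""
        def corr(delta_y):
            num = math.exp(delta_y * s1 * s2) - 1.0
            den = math.sqrt((math.exp(s1 ** 2) - 1.0) * (math.exp(s2 ** 2) - 1.0))
            return num / den
        return corr(-1.0), corr(1.0)

    print(lognormal_corr_bounds(math.sqrt(0.5), math.sqrt(0.5)))  # lower about -0.6065
    print(lognormal_corr_bounds(0.6, 0.3))                        # about (-0.8154, 0.9763)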
Possible Consequences of Violating Correlation Bounds: Bivariate Lognormal Case
Here, we illustrate some consequences of violating the correlation bounds for the lognormal case. To do so, we employ a rather general data generation technique that is often utilized in simulating multivariate continuous data, namely, the use of Fleishman polynomials (Fleishman 1978;Vale and Maurelli 1983;Demirtas and Hedeker 2008;Headrick 2010). This approach is a moment-matching procedure where any given continuous variable in the system (i.e., set of variables that are to be generated) is expressed by the sum of linear combinations of powers of a standard normal variate. It involves solving a set of nonlinear equations via an optimization routine such as Newton-Raphson to compute the coefficients of polynomials marginally, and calculating intermediate correlations among normal variables that form a basis for desired nonnormal continuous distributions to attain the specified correlation structure (Headrick 2010).
Considering the lognormal case, one reason that we have for resorting to the Fleishman polynomials to generate such data is to elucidate a major point: in one specification (σ_1 = 0.6, σ_2 = 0.3), students would get a warning message and "unavailable" results for a subset of the specified correlation values due to range violation (see Figure 4), whereas another specification (σ_1 = 0.65, σ_2 = 0.65) can lead to false correlations (see Figure 5). [These comments pertain to programs written in R version 2.12.0 for Windows (R Development Core Team 2010), and codes for implementing multivariate Fleishman polynomials. Other computing environments (programs and/or operating systems) or applications may give rise to different consequences.]
Figure 4. Plot of the specified (true) versus empirical (computed) correlations for bivariate lognormal data with σ_1 = 0.6 and σ_2 = 0.3 generated by Fleishman polynomials. Empirical correlations cannot be computed when the specified correlation is below −0.81 and above 0.97 (i.e., the computer program gives a warning message).
Here, we specify a correlation value, generate data, and compute sample (empirical) correlations using the simulated data. If a procedure is working well, the specified (true) and empirical (computed) correlations should be close to each other. Specifically, in Figure 4, we plot specified versus empirical correlations after generating data using the first set of parameter values above, and see that correlations are not computable below and above the two thresholds with a "not available" message. That is, in these ranges, students would get an error message and data would not be generated. This is a lucky case, because students become aware of the misspecification problem. To demonstrate how bad the situation can become, we generate a similar plot (Figure 5) with the second set of parameter values, and see that the empirical correlations turn out to be badly misleading below a certain desired correlation value. For instance, when the true value is −0.8, the empirical value is way off (−0.43). As can be seen in Figure 5, when the true correlation is below −0.65, the sample correlation of the generated data is far from the truth by any acceptable standards. Here, the computer program gives no indication that there is anything wrong, hence one can easily proceed with implausibly generated data with false correlations under this second set of specifications. Considering the fact that random number generation is only the first step of a much bigger simulation scheme in most cases, there is a serious potential for obtaining incorrect results from the subsequent stages. In Section 4.1, we provide further explanation of the problematic behavior in Figures 4 and 5.
More Complicated Cases
For the above scenarios, the correlation bounds are derivable. However, this is not always the case. What if the correlation bounds are not derivable in closed form (which is certainly plausible for many real-life applications)? In these situations, how can one obtain the range of correlation values that are within the distributional limits for simulation purposes? As an example, suppose that there are two variables where one follows a normal mixture distribution, πN(μ_1, σ_1^2) + (1 − π)N(μ_2, σ_2^2) with the mixing proportion π, and the other follows a generalized lambda distribution (Ramberg et al. 1979) whose quantile function is λ_1 + [p^{λ_3} − (1 − p)^{λ_4}]/λ_2, where 0 ≤ p ≤ 1. In this particular case, the boundary values for correlation cannot be expressed analytically.
In the next section, we outline a flexible, easy-to-implement sorting algorithm to compute the empirical correlation bounds among virtually any type of variables that can be encountered in practice. We show that the empirical bounds found by this technique agree closely to the theoretical correlation bounds when these are known. We further elucidate some of the advantages and extensions of our approach in more general situations.
GENERATE, SORT, AND CORRELATE (GSC)
The GSC algorithm for the bivariate case is as follows:
1. Generate random samples from the intended distributions independently using a large number of observations (e.g., N = 100,000).
2. To estimate the lower bound, sort the two variables in opposite directions (i.e., from smallest to largest for one of the variables, and from largest to smallest for the other). Then, compute the sample correlation.
3. To estimate the upper bound, sort the two variables in the same direction, and compute the sample correlation.
Steps 2 and 3 correspond to maximal anticorrelation and correlation, respectively. In effect, this is equivalent to the inverse cdf method. If a random number generation routine is available to simulate samples from the marginal distributions of interest, then one can generate random realizations, and sort them in these two different ways, as described above, to compute the approximate correlation limits.
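A minimal implementation of these three steps might look as follows; the samplers and parameter values in the example are purely illustrative and assume only that independent draws from each marginal are available.

    import numpy as np

    def gsc_bounds(sampler1, sampler2, n=100_000, rng=None):
        """Approximate correlation bounds by generate-sort-correlate.

        sampler1/sampler2 draw independent samples of size n from the two
        marginal distributions (any data type for which sorting makes sense).
        """
        rng = np.random.default_rng() if rng is None else rng
        x = np.sort(sampler1(n, rng))
        y = np.sort(sampler2(n, rng))
        upper = np.corrcoef(x, y)[0, 1]          # sorted in the same direction
        lower = np.corrcoef(x, y[::-1])[0, 1]    # sorted in opposite directions
        return lower, upper

    # Example: normal-mixture vs. lognormal marginals (illustrative parameters).
    mix = lambda n, rng: np.where(rng.random(n) < 0.3,
                                  rng.normal(0, 1, n), rng.normal(3, 2, n))
    logn = lambda n, rng: rng.lognormal(mean=0.0, sigma=1.0, size=n)
    print(gsc_bounds(mix, logn))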
Agreement With Theoretical Correlation Bounds
Following the examples given in Section 3, we now illustrate the numerical efficiency of this approach with N = 100,000 observations. For the binary-normal case, under a standard bivariate normal distribution assumption, let one of the variables be dichotomized at the mean. In this situation, the correlation bounds are known to be ∓√(2/π) = ∓0.7979, which is exactly what the GSC approach yields on average across 1000 replications, where the empirical, central 95% confidence interval is (∓0.7967, ∓0.7991). In what follows, we report these intervals in parentheses. For the binary-binary case, if p_1 = 0.5 and p_2 = 0.6, the theoretical upper and lower limits are 0.8165 and −0.8165, respectively. Using the GSC approach, we obtain limits of 0.8164 (0.8107, 0.8220) and −0.8165 (−0.8222, −0.8108). For the lognormal case, Greenland (1996) described two scenarios where σ_1^2 = σ_2^2 = 0.5 or 3, with corresponding lower limits of −0.6065 and −0.0498, respectively (upper limits are 1). Employing the GSC approach by generating two lognormally distributed variables independently, then reverse-sorting and correlating these variables, yields values of −0.6066 (−0.6128, −0.6001) and −0.0499 (−0.0656, −0.0376), respectively, which are near-identical to the theoretical lower bounds on average. Depending on the underlying distributional shapes, some variation is naturally expected. However, in simulation practice, people vary correlation specifications with nice incremental numbers, and slight deviations from the true bounds are unlikely to cause misspecifications that arise from bound violations; one can get closer to the true bounds with a little extra effort (i.e., by running the proposed procedure sufficiently many times and then using the average), which would not be computationally expensive as it will be done only once before specifying a plausible correlation range in simulations. Now, returning to the two interesting specifications described in Section 3.4 that could lead to bad consequences: when σ_1 = 0.6 and σ_2 = 0.3, the empirical (based on the GSC algorithm) and theoretical (based on the formula in Section 3.3) approaches both yield lower and upper bounds of −0.8154 and 0.9763, respectively. This is consistent with the vertical lines that indicate the feasible range in Figure 4. Similarly, when σ_1 = σ_2 = 0.65, the lower bound is −0.6554, below which, as noted, incorrect empirical correlations would be obtained (Figure 5). Thus, using the GSC approach, students would realize that specification of a correlation below −0.6554 is not possible (and so would avoid generating data with "incorrect" correlations).
Advantages and Extensions
This method has a few key advantages. First of all, it works for many kinds of variables (e.g., continuous, binary, ordinal, count). While there may be a better measure of association depending on the types of variables, it is still good to know the limits of the Pearson correlation. Second, the variables need not come from the same distribution. As illustrated above, we obtained the correlation between a continuous and a binary variable, assuming underlying normality for both, but the two distributions could be completely different. Third, in some cases, a mathematical solution for the correlation bounds may be very difficult to obtain, or may not even exist in closed form. In such cases, the proposed approach is very useful by providing results for these intractable situations. Finally, these boundary arguments extend to the multivariate case. The overall correlation bound matrices can be formed using the bivariate correlation bounds. In this situation, one must realize that ensuring all bivariate correlations are within the feasible limits does not guarantee positive semidefiniteness, so one must still pay attention to this.
To illustrate how these bounds might be used in a multivariate setting, consider the following. What often happens in simulation practice is that one specifies equally spaced correlation values between −1 and +1 for each pair. However, when the limits are different from −1 and +1 (e.g., those given in Table 1), this naive approach will inevitably yield problems. Instead, if one uses the derived bounds from the GSC method, then the specified range could be defined accordingly to avoid such range violations. We hasten to add that closed-form solutions do not exist for many pairwise combinations (e.g., those in Table 1). Furthermore, this method is only designed to produce the approximate correlation bounds that should not be exceeded; availability of bivariate or multivariate data generation routines for a particular setting is a separate issue. Finally, as correctly pointed out by a reviewer, even when positive semidefiniteness holds and the correlation range is not violated, some correlation structures may not be achievable due to operational and/or procedural problems that are specific to the random number generation method in use [e.g., the tetrachoric correlation matrix may not be positive semidefinite even when there is no misspecification in the correlated binary data generation technique of Emrich and Piedmonte (1991)].
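To make the multivariate caveat concrete, one might screen a candidate correlation matrix as sketched below; the bounds dictionary would hold GSC-estimated (or analytical) pairwise limits, and all numbers shown are purely illustrative.

    import numpy as np

    def correlation_matrix_ok(R, pair_bounds, tol=1e-10):
        """Check that a target correlation matrix R is (numerically) positive
        semidefinite and that every off-diagonal entry lies within the
        corresponding bivariate bounds; pair_bounds[(i, j)] = (lower, upper)."""
        eigenvalues = np.linalg.eigvalsh(R)
        psd = eigenvalues.min() >= -tol
        in_range = all(lo <= R[i, j] <= hi
                       for (i, j), (lo, hi) in pair_bounds.items())
        return psd and in_range

    R = np.array([[1.0, 0.4, -0.2],
                  [0.4, 1.0,  0.3],
                  [-0.2, 0.3, 1.0]])
    bounds = {(0, 1): (-0.82, 0.82), (0, 2): (-0.61, 1.0), (1, 2): (-0.80, 0.97)}
    print(correlation_matrix_ok(R, bounds))   # True for this illustrative setup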
Another area of application of this approach is in situations where there are insufficient data available to accurately compute a correlation matrix. In such cases, the correlations are frequently derived from expert opinion, usually the combined opinions of many experts (Lurie and Goldberg 1998). With fixed marginals, it is important to check whether the expertderived correlations violate the range constraints.
Finally, as mentioned, in designing simulation studies, researchers typically identify a range for model parameters including correlations. In doing so, researchers often simulate data under relatively extreme cases. A correlation range that includes impossible values may lead to inferential biases, especially if the computational procedure that is being assessed does not give error or warning messages to alert the simulation designer. Clearly, in this situation, checking the correlation bounds before simulating the data using our simple approach could prevent problems and/or subsequent misinterpretations regarding the computational procedure.
CONCLUSION
In this article, we have described a simple approach that can be used to obtain approximate correlation bounds under a wide variety of situations. In our teaching experience in computational statistics, this approach can help keep students from making unintended errors when devising simulations involving specification of a correlation matrix. As long as one is capable of generating the univariate marginals, it is straightforward to perform the sorting operation to identify the correlation bounds regardless of data type or distribution. We believe that this simple technique is a minor but valuable addition to the teachers' toolkit. Additionally, this approach is also potentially useful to statisticians who are evaluating models or methods via simulation. | 2014-10-01T00:00:00.000Z | 2011-05-01T00:00:00.000 | {
"year": 2011,
"sha1": "475fb1f334702430553a43cf78d7fdd32e14dd6c",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/A_Practical_Way_for_Computing_Approximate_Lower_and_Upper_Correlation_Bounds/10755389/files/19266839.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "91dddfdc38f0ef4e8d846a1fbfc7a03dec160907",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
391529 | pes2o/s2orc | v3-fos-license | Immune regulation of the L5178Y murine tumor-dormant state. I. In vivo and in vitro effects of prostaglandin E2 and indomethacin on tumor cell growth.
Immunization and intraperitoneal challenge of DBA/2 mice with L5178Y lymphoma cells results in the suppression and maintenance of the L5178Y cells in a tumor-dormant state in the peritoneal cavity for many months. Cell-mediated immune responses involving lymphocytes and macrophages are involved in maintenance of the tumor-dormant state. Macrophages that have increased immunosuppressive activity and that produce increased amounts of PGE2 appear in the peritoneal cavity of tumor-dormant mice before the breakdown of the tumor-dormant state and formation of ascitic tumors. We report here that the tumor-dormant state can be terminated with formation of ascitic tumors by treatment of tumor-dormant mice with PGE2. Treatment with indomethacin results in inhibition of tumor cell growth and elimination of all recoverable tumor cells. Cultures of peritoneal cells (PC) from mice harboring L5178Y cells in a tumor-dormant state were used to analyze the PGE2 and indomethacin effects. Tumor cells did not grow out in the high-cell density PC cultures prepared from many tumor-dormant mice, but addition of PGE2 to these cultures resulted in tumor cell growth. The tumor cell growth that did occur in the PC cultures from some tumor-dormant mice was associated with PGE2 production by the associated host cells, and the addition of indomethacin to these cultures inhibited both PGE2 synthesis and tumor cell growth. Removal of plastic-adherent cells from the PC cultures eliminated the restraint on tumor cell growth. These experiments suggest that L5178Y tumor cells are maintained in a tumor-dormant state by host peritoneal cells, which are under PGE2 regulation.
Materials and Methods Animals. 8-12-wk-old female DBA/2 mice were obtained from The Jackson Laboratories, Bar Harbor, ME. All mice were fed Purina mouse chow and given acidified tap water ad libitum . They were housed in a temperature-controlled room with a cycle of 12 h of light and 12 h of dark.
Tumor Cells. The L5178Y cell line is a nonmetastatic T cell lymphoma induced in DBA/2 mice with methylcholanthrene (1) .
Partial Peritoneal Lavage (PPL). Mice received an i.p. injection of 2.5 ml of sterile pyrogen-free PBS, and the abdomen was massaged thoroughly to mix the PBS with the peritoneal contents. The mice were then lightly anesthetized with ether and restrained on a board, ventral side upward. A small area of the ventral surface was shaved, and 0.4-2 ml of the PBS was removed from the peritoneal cavity with a 5 ml syringe fitted with a 25-gauge needle. The volume removed from each sampling was recorded. The recovered peritoneal cells (PC) were pelleted and resuspended in the same volume of MEM.
Serial End-point Dilution Assay (SEPD) for Counting Tumor Cells (9). Single-cell suspensions of peritoneal cell populations containing tumor cells were prepared from tumor-dormant mice. 100 µl of MEM was added to each well of a 96-well flat-bottomed microtiter plate (Costar, Cambridge, MA). 100 µl of each cell suspension was then added to the first well of each row, and a series of twofold dilutions was made through 24 wells. Each PC suspension was titrated in quadruplicate. The plates were incubated at 37°C in a humidified 5% CO2 atmosphere and examined microscopically after 3 and 14 d incubation for tumor cell growth. Wells containing the highest dilution of each cell suspension that yielded positive tumor cell growth were identified, and the number of tumor cells in each initial cell suspension was calculated.
Classification of Tumor-dormant Mice. Immunized and challenged mice were classified as tumor-dormant if the number of tumor cells in the peritoneal cavity on the 25th day after L5178Y cell challenge, as determined by PPL and the SEPD assay, was 1-2 × 10^5. Tumor-dormant mice were classified as "tumor-emerging" when the number of tumor cells was >2 × 10^5 and <10^7. Tumor-dormant mice were classified as "tumor-emergent" when the number of tumor cells in the peritoneal cavity was >10^7. Tumor-dormant mice whose peritoneal cells yielded tumor cell growth when placed in culture at 4 × 10^5 cells/well in a 96-well microtiter plate are referred to as "in vitro tumor-progressor" tumor-dormant mice. Mice whose peritoneal cells yielded significantly fewer tumor cells after 7 d culture as compared with the inception of culture are referred to as "in vitro tumor-regressor" tumor-dormant mice.
Complete Peritoneal Lavage (CPL). Mice were killed by cervical dislocation, and the PC were removed in two successive 5-ml peritoneal washouts with PBS . The PC were pelleted by centrifugation, resuspended in 4 ml MEM, and counted by hemocytometer . This technique recovered >99% of the PC (9) .
In Vitro Tumor-dormant Peritoneal Cell System. Tumor-dormant mice were killed and CPLs were performed. The recovered PC were brought to a concentration of 4 × 10^6 cells/ml. A volume of 0.5 ml of the PC suspension from each mouse was used for quantitation of tumor cells using the SEPD assay, and the number of tumor cells per 4 × 10^5 PC was calculated. 1/10 ml of the remaining PC suspension (containing 4 × 10^5 PC) from each mouse was added to individual wells of a 96-well microtiter flat-bottomed culture plate (Costar). The wells were then divided into sets as indicated in the text, and the wells of each set received 0.1 ml of MEM, PGE2, or indomethacin, depending on the experimental design. The plates were incubated for 7 d at 37°C. The cells in the wells were then suspended and the tumor cells were quantitated by the SEPD assay and compared with the number of tumor cells calculated to be in the wells at the start of the cultures.
In Vitro PC Culture Supernatant PGE2 RIA. Peritoneal cells were removed from individual tumor-dormant mice and the number of tumor cells per 4 × 10^5 PC for each mouse was determined by SEPD assay. Cultures were then prepared as described in the previous section. After 7 d incubation at 37°C, the cell-free supernatants were removed and tested for PGE2 by RIA (125I-PGE2 RIA kit purchased from New England Nuclear, Boston, MA). This assay could measure as little as 0.13 pg/ml PGE2. The cells from the wells supplying the supernatants were then suspended, and the number of tumor cells in each suspension was quantitated by the SEPD assay. The number of tumor cells in each well at the beginning and end of culture was then compared, and each well was classified as yielding tumor cell growth or no growth.
Separation of Nonadherent PC. Nonadherent PC were separated from whole PC by adherence to plastic petri dishes.
PGE2 or Indomethacin Preparation. PGE2 (Upjohn Co., Kalamazoo, MI) and indomethacin (Sigma Chemical Co., St. Louis, MO) were dissolved in 100% ethanol at a concentration of 10 mg/ml as stock solutions and kept in a -20°C freezer until used. The working solution, 10-M, was prepared by dilution in PBS or MEM.
In Vivo Protocol to Determine Effects of PGE2 or Indomethacin on Tumor-dormant State. The protocol illustrated in Fig. 1 was used. A PPL was performed on all L5178Y cell-immunized and challenged mice 25 d after L5178Y cell challenge. The number of tumor cells in each lavage was determined by the SEPD assay, and mice were classified as tumor-dormant on the basis of the number of tumor cells in their peritoneal cavity (see section on classification of tumor-dormant mice). The tumor-dormant mice were then divided into groups, with each group having equal numbers of mice with comparable tumor burdens. The groups were then treated with PGE2 i.p. for 10 d, or with indomethacin as indicated in the text. 4 d after the end of PGE2 treatment, or as indicated in the indomethacin experiments, the mice were killed and CPLs were performed. 10% of the PC from each mouse was used to quantitate the number of tumor cells in the lavage, and the remaining PC were placed in culture flasks at a low cell/surface area ratio to minimize cell-cell contact and to permit a single tumor cell to grow if present. The tumor cell numbers in each mouse before and after treatment were then compared.
Results
Effect of PGE2 on L5178Y Tumor-dormant State. To determine whether the PGE2 produced by peritoneal macrophages in tumor-dormant mice before formation of ascitic tumors is responsible for termination of the tumor-dormant state, we administered 100 µg of PGE2 i.p. to mice daily for 10 d using the protocol shown in Fig. 1. In this protocol, the number of tumor cells in each mouse was quantitated before treatment by PPL and SEPD assay, and after treatment by CPL and SEPD assay.
Tumor Cell Growth in Cultures of Peritoneal Cells from Tumor-dormant Mice. To study the mechanisms by which PGE2 produced its tumor cell growth-enhancing effect in tumor-dormant mice, we used an in vitro system. In this assay, described previously (1), 4 × 10^5 PC from a tumor-dormant mouse are placed in individual wells of a 96-well microtiter plate. We reported previously (1) that the small number of tumor cells present in these PC do not grow out during incubation at 37°C. Fig. 2 shows the growth of tumor cells in PC cultures prepared from a large number of tumor-dormant mice. PC from 26 tumor-dormant mice were harvested, the number of tumor cells per 4 × 10^5 PC in each mouse was quantitated by the SEPD assay, and the remaining PC were placed in wells of a 96-well microtiter plate, 4 × 10^5 cells/well, three wells/mouse. The plates were incubated for 7 d at 37°C, and the number of tumor cells in each well was quantitated by the SEPD assay. The mean numbers of tumor cells in the PC at inception and termination of culture for each mouse are shown in Fig. 2. During the 7-d incubation period, tumor cells proliferated in the PC cultures prepared from some tumor-dormant mice, and the mice providing these PC are referred to as "in vitro tumor-progressor" mice. Tumor cells failed to grow out in the PC cultures prepared from other tumor-dormant mice, and the mice providing these PC are referred to as "in vitro tumor-regressor" mice. The parallel growth curves of tumor cells in the PGE2-treated and untreated wells of PC prepared from mouse 1 in the above experiment suggested that PGE2 was being produced in the untreated wells. To test this possibility, we prepared PC cultures from 17 tumor-dormant mice, one well/mouse, and after 7 d culture, we collected the cell-free supernatants from each well and measured the PGE2 in the supernatants by RIA. The PC in these wells were then suspended, and the number of tumor cells in each suspension was determined by the SEPD assay. As seen in Table II, there was an excellent correlation between the ability of tumor cells to proliferate and the amount of PGE2 in each well. However, there was no correlation between the total number of tumor cells in specific wells and the amount of PGE2 (data not shown).
To identify the cell population that produces PGE2 in the PC cultures from in vitro tumor-progressor mice, and to exclude the possibility that these cultures contained PGE2-producing L5178Y cells, we transferred the PC from in vitro tumor-progressor cultures to tissue culture flasks . The L5178Y cells proliferated in the flasks and were passaged until no host cells remained . The supernatants of these tumor cell cultures were then tested for PGE2 and none was found . We can conclude from this experiment that PGE2 is produced by host cells rather than by tumor cells.
Effect of Indomethacin on the Tumor-dormant State. The ability of PGE2 to cause formation of ascitic tumors in tumor-dormant mice and promote tumor cell growth in vitro suggested that administration of an inhibitor of PGE2 synthetase, indomethacin, might eliminate tumor cells from tumor-dormant mice and inhibit tumor cell growth in vitro. To test this possibility, indomethacin was administered into the peritoneal cavity of tumor-dormant mice from Alzet miniosmotic pumps. These pumps, implanted subcutaneously on the ventral surface with a size PE 60 catheter leading into the peritoneal cavity, delivered a continuous amount of indomethacin, 50 µg/d, or ethanol for 14 d; new pumps were not reimplanted in the 21-d-incubation mice. At selected days after implantation of the pumps, mice were killed and all PC were placed in culture, 10% of the PC in a SEPD assay, and the remaining PC in culture flasks at a low cell/culture surface area ratio to minimize cell-cell contact during the incubation period. The results are summarized in Table III. In the in vitro experiments that follow, indomethacin was added to each well in order to inhibit endogenous PGE2 production. We first determined whether indomethacin would interfere with the tumor cell growth-enhancing effects of PGE2. As seen in Fig. 5, PGE2 enhanced tumor cell growth. Indomethacin at 10^-6 M inhibited tumor cell growth, but had no inhibitory effect on the tumor cell growth-enhancing effect of 10^-6 M PGE2. Fig. 6 shows the effect of various doses of PGE2 in the presence of 10^-6 M indomethacin on tumor cell growth. The addition of 10^-5 to 10^-7 M PGE2 stimulated tumor cell growth, and lower doses of PGE2 had no effect.
Identification of Host Cells that Restrain Tumor Cell Growth in PC Cultures from
In Vitro Tumor-regressor Tumor-dormant Mice. To identify the PC population that was responsible for the restraint on L5178Y cell growth in wells prepared from in vitro tumor-regressor mice, we separated the nonadherent peritoneal cells by a plastic-adherence step, and cultured both the complete and the nonadherent PC at high cell density. Since L5178Y cells are nonadherent, they remained in the nonadherent cell population . PGE2 was added to one-half of the number of wells of each PC population, and tumor cells were quantitated 7 d later by the SEPD assay.
As seen in Fig. 7, tumor cell growth did not occur in the untreated complete PC cultures, but did occur when PGE2 was added to the cultures. In the untreated nonadherent PC cultures, tumor cells grew and PGE2 had no effect on tumor cell growth. These findings indicate that adherent cells are involved in the restraint on tumor cell growth. Yet to be determined is the role of the nonadherent PC in tumor cell growth restraint, and whether the adherent cells lyse or arrest the proliferation of tumor cells, or are required as accessory cells for nonadherent cells to restrain tumor cell proliferation.
Discussion
Prostaglandins of the E series are important regulators of many components of the immune response (11). Among its other effects on the immune system, PGE2 has been reported to (a) inhibit the production of IL-2 (12, 13), (b) stimulate the production of suppressor T lymphocytes (14-16), and (c) inhibit the clonal proliferation of B lymphocytes (17). Macrophages produce large amounts of PGE2 on appropriate stimulation (18, 19). This PGE2 can downregulate macrophage tumoricidal activity and may be an important self-regulating feedback mechanism for macrophage activation (20-22). Macrophages from tumor-bearing mice produce increased amounts of PGE2 (23), and patients with various types of cancer have either increased levels of PGE2 and PGE2 metabolites (24-27), or have impaired immune responses that can be stimulated in vitro with indomethacin (28). Administration of inhibitors of PG synthesis in mice has been found to retard or suppress tumor growth (29, 30), and to improve cell-mediated immune functions when added to their spleen cells in vitro (29, 31). However, in some tumors, such as B-16 melanoma, the major cyclooxygenase metabolite is PGD2 rather than PGE2, and inhibitors of PG synthesis produce tumor enhancement rather than tumor inhibition (32).
To date, the effects of PGE2 and indomethacin on tumor growth have been evaluated in animal models in which tumor cells have either been induced or are growing after transplantation . Animal models of tumor dormancy are clinically relevant, in that small numbers of lethal tumor cells persist for prolonged periods of time without producing disease (33,34). Such models may be analogous to those patients who are in clinical remission after treatment of a primary tumor and who will develop a recurrent tumor. Likely examples of tumor-dormant states in human cancer are adenocarcinomas of the breast, which can grow out in the surgical scar tissue of a mastectomy as long as 50 yr after surgical removal of a histologically identical adenocarcinoma (35), and malignant melanomas, which can grow out in the liver many years after removal of a uveal malignant melanoma (36,37). In this report we described the effects of PGE2 on tumor cell growth under conditions in which tumor cells were maintained in a tumor-dormant state. Previous evidence for the regulatory role of PGE2 in the L5178Y tumor-dormant system consists of our finding (6) that macrophages with increased immunosuppressive activity appear in the peritoneal cavity of tumor-dormant mice just before the appearance of ascitic tumors. These macrophages produce an immunosuppressive factor in vitro, now identified as PGE2 (our unpublished data), production of which is inhibited by indomethacin (7). Our demonstration that exogenous PGE2 stimulates formation of ascitic tumors in tumor-dormant mice provides further support for the hypothesis that PGE2, which is produced by peritoneal macrophages, terminates the tumor-dormant state.
We found that the PC cultures from in vitro tumor-progressor tumor-dormant mice (in which tumor cells proliferate progressively) produce more PGE2 than PC cultures from in vitro tumor-regressor tumor-dormant mice (in which tumor cells do not proliferate), and that the addition of PGE2 to tumor-regressor cultures converts them into tumor-progressor cultures. The variations in the ability of tumor cells to proliferate in PC cultures from tumor-dormant mice may reflect the variations in the duration of the tumor-dormant state among mice. It is possible, however, that the differences between tumor-regressor and tumor-progressor cultures may reflect ongoing cycles within an individual mouse, of macrophage activation and deactivation mediated by IFN-γ released from stimulated T cells, and by PGE2 released from activated macrophages. If this is so, then an individual mouse may go through cycles of tumor-progression and tumor-regression, the net result of which would be to maintain cytolytic or cytostatic control of tumor cell proliferation without killing all tumor cells. The demonstrations that indomethacin can both eliminate recoverable tumor cells from tumor-dormant mice and inhibit the proliferation of tumor cells in PC cultures from in vitro tumor-progressor tumor-dormant mice support the hypothesis that macrophage activation and deactivation are a feature of the tumor-dormant state, and that the inhibition of a suppressor of macrophage activation permits macrophages to kill all tumor cells.
The complete analysis of the L5178Y tumor-dormant model is limited by certain characteristics of the model. Since large numbers of peritoneal cells are required for the preparation of multiple PC cultures, it is necessary to kill the tumor-dormant mouse to obtain these cells. It is therefore not possible, in an individual mouse, to measure the changes in antitumor activity of peritoneal cell populations as that mouse proceeds through the tumor-dormant state towards termination. It is also not possible to compare one tumor-dormant mouse with another because of the large variations in the activity of the immune response and in the duration of the tumor-dormant state among mice that have been immunized and challenged with L5178Y cells at the same time.
The in vitro assay described here does provide a system in which the PC population of a single tumor-dormant mouse can be studied under controlled conditions. An important feature of this in vitro system is that it is quantitated in the same way as the in vivo model, e.g., by tumor cell proliferation. The tumor cell growth-enhancing effects of exogenous PGE2, when administered both in vivo and in vitro, and the tumor cell growth-inhibiting effects of indomethacin, when administered both in vivo and in vitro, support the validity of the in vitro system as a correlate of the in vivo model. This in vitro system, therefore, provides an opportunity to evaluate the mechanisms that maintain and terminate the tumor-dormant state in vivo. Experiments to identify the mechanisms by which PGE2 and indomethacin produce their tumor cell growth-enhancing and -inhibiting effects in tumor-dormant mice are in progress, and will constitute a subsequent paper in this series.
Our analysis of the L5178Y tumor-dormant model to date suggests that L5178Y cells are maintained in a tumor-dormant state by peritoneal macrophages and T lymphocytes. These cells are upregulated to a cytotoxic state by lymphokines released from L5178Y-stimulated helper T lymphocytes, and down-regulated to a noncytotoxic state by PGE2 released from activated macrophages. The PGE2 may downregulate macrophage cytolytic activity by acting directly on macrophages and downregulate T lymphocyte cytolytic activity either directly or indirectly by stimulating suppressor T lymphocytes. Both macrophages and cytolytic T lymphocytes may be downregulated before all tumor cells are killed, thereby permitting some tumor cells to escape lysis and persist in a tumordormant state. The tumor-dormant state may end when phenotypic variants of the L5178Y cells, which are less susceptible to immune lysis and less immunogenic than the L5178Y population used to initiate the tumor-dormant state, are selected by the immune response and become dominant in the peritoneal cavity of tumor-dormant mice (9, 10). These poorly immunogenic tumor cell variants may not stimulate helper T lymphocytes to produce the regulatory lymphokines that are needed to maintain an effective anti-tumor cell response and maintain the tumor-dormant state. The L5178Y tumor-dormant model can be used to study immunoregulatory circuits that affect tumor growth, and to test therapeutic agents that can be given to patients who are in clinical remission after treatment of a primary tumor and who have a high statistical probability of developing a recurrent tumor.
Summary
Immunization and intraperitoneal challenge of DBA/2 mice with L5178Y lymphoma cells results in the suppression and maintenance of the L5178Y cells in a tumor-dormant state in the peritoneal cavity for many months. Cell-mediated immune responses involving lymphocytes and macrophages are involved in maintenance of the tumor-dormant state. Macrophages that have increased immunosuppressive activity and that produce increased amounts of PGE2 appear in the peritoneal cavity of tumor-dormant mice before the breakdown of the tumor-dormant state and formation of ascitic tumors.
We report here that the tumor-dormant state can be terminated with formation of ascitic tumors by treatment of tumor-dormant mice with PGE2. Treatment with indomethacin results in inhibition of tumor cell growth and elimination of all recoverable tumor cells. Cultures of peritoneal cells (PC) from mice harboring L5178Y cells in a tumor-dormant state were used to analyze the PGE2 and indomethacin effects. Tumor cells did not grow out in the high-cell density PC cultures prepared from many tumor-dormant mice, but addition of PGE2 to these cultures resulted in tumor cell growth. The tumor cell growth that did occur in the PC cultures from some tumor-dormant mice was associated with PGE2 production by the associated host cells, and the addition of indomethacin to these cultures inhibited both PGE2 synthesis and tumor cell growth. Removal of plastic-adherent cells from the PC cultures eliminated the restraint on tumor cell growth. These experiments suggest that L5178Y tumor cells are maintained in a tumor-dormant state by host peritoneal cells, which are under PGE2 regulation.
We thank Karen Harris and Kathleen Gestite for their excellent technical assistance.
Received for publication 21 April 1986 and in revised form 7 July 1986. | 2014-10-01T00:00:00.000Z | 1986-10-01T00:00:00.000 | {
"year": 1986,
"sha1": "d7769fa64e492e0121624f573fd8fa798f110ceb",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/164/4/1259.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce8620805c2fb722ae4fa7b493f864ca6a1d8eda",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
236269624 | pes2o/s2orc | v3-fos-license | Compton Shifting with Scattering Angle and Masses of Order 10^-31 kg to 10^-27 kg
The objective of this work is to study the Compton shift for elementary particles with masses lying between 10^-31 kg and 10^-27 kg, for scattering angles between 0 and 3 radians. The observed Compton shift ranges from about 10^-18 m to 10^-11 m. The nature of the shift is the same for all the considered masses at the same scattering angle. This scattering treatment is therefore applicable to other elementary particles whose masses lie within the considered range. Hence, Compton shifting can be used for the scattering of other elementary particles with masses in this range, since their behaviour is similar to that of the original Compton scattering of the electron-photon interaction.
The Standard Model describes the interactions of elementary particles through the known forces: the electromagnetic force, the strong force, the weak force, and the gravitational force. For example, among the gauge bosons, the W and Z bosons mediate the weak force, gluons mediate the strong nuclear force, and the photon mediates the electromagnetic force.
Fermions are particles that follow Fermi-Dirac statistics, carry half-odd-integer spin (1/2, 3/2, 5/2, etc.), and obey the Pauli exclusion principle. There are 12 fundamental fermions, which have no known constituent particles, and they are divided into two categories: quarks and leptons.
No two identical fermions can have the same set of quantum numbers. The resulting effective repulsion for fermions, or effective attraction for bosons, is generally referred to as the exchange force [1].
The Standard Model (SM) classifies elementary particles as bosons on the basis of integer spin (including zero spin), unlike fermions. In the SM, all mesons are also considered bosons, and bosons are responsible for the electroweak force. The weak interaction is mediated by the massive W+, W-, and Z0 bosons: the charge-changing weak interactions are mediated by the W+ and W- bosons, while the neutral-current weak interaction is mediated by the Z0 boson [2].
Bosons have integer spin (including zero spin), do not obey the Pauli exclusion principle, and can occupy the same quantum state. Bosons follow Bose-Einstein statistics; examples include the intermediate vector bosons such as the photon, the gluons, and the weak bosons (the W and Z particles), as well as the mesons [3].
In the quantum mechanical description, identical bosonic particles are indistinguishable and are labelled by integer spin (0, 1, 2, etc.), whereas fermionic particles are labelled by half-odd-integer spin (1/2, 3/2, etc.). The distribution functions for fermions and bosons are n_i = 1 / (exp[β(ε_i − μ)] ± 1), where ε_i is the energy of the i-th state, β = 1/(k_B T), and μ is the chemical potential; the plus sign applies to fermions (Fermi-Dirac statistics) and the minus sign to bosons (Bose-Einstein statistics). According to the SM, all elementary particles are either bosons or fermions, and their spin identifies them [4]. Among different experiments and theories, the photoelectric effect is one that shows that energy is quantized: the absorption of energy from a photon and the emission of electrons from a material demonstrate the particle nature of light. Moreover, the Compton effect supports this picture and leads to the relation shown in equation (1), Δλ = λ′ − λ = (h/(m_0 c))(1 − cos θ), which indicates that the wavelength shift is independent of the intensity and depends on the scattering angle and the incident wavelength; here m_0 is the rest mass of the electron. The Compton effect is defined as the scattering of an incident photon by a bound or free charged particle; in this scattering, some of the photon energy is transferred to the charged particle with which the photon interacts. The derivation combines classical ideas with special relativity and quantum mechanics. This scattering helps in understanding interactions in high-energy physics in general and the interaction of electromagnetic (photon) radiation with materials, such as gamma-ray interactions, X-ray production, and Bremsstrahlung [5]. The Compton wavelength of the electron has played a central part in several areas of physics and has been used to find the rest mass of the electron, as described by Prasannakumar et al. [6]; it is indirectly employed in the relativistic wave equation of Klein and Gordon, in the Dirac equation [7], as well as in the non-relativistic Schrodinger equation [8].
The photon energy in Compton scattering ranges up to a few MeV; therefore, it is one of the best-studied processes for firmly bound atomic K-shell electrons, and some experimental data cover the range between 0.279 and 1.0002 (more detail in [9]). The interaction of matter with X-rays (energies less than 1 MeV) involves different types of scattering, such as coherent (Rayleigh) scattering, incoherent (Compton) scattering, and photoelectric absorption [10]. In Rayleigh scattering, low-energy photons are considered; therefore, the atom is neither ionized nor is an electron raised to an excited state. On the other hand, in Compton scattering the energy of the photon is high; therefore, effectively free atomic electrons interact with the incident photons [11]. Thus, Compton scattering dominates at low atomic numbers, while Rayleigh scattering dominates at higher atomic numbers [12].
Different experimental observations and theoretical developments for non-relativistic kinematics show that the physical parameters of Compton scattering are not always appropriate. The maximum change in wavelength in Compton scattering (for backscattering, θ = π) can be expressed as Δλ = λ′ − λ = 2h/(m_0 c) = 0.04852 Å. For X-rays incident with an energy of 12.41 keV (wavelength of about 1 Å), this corresponds to a fractional change in wavelength of about 4.852%. The experimental results show that if the photon energy is less than the rest energy of the electron (511 keV), the Compton effect can be treated in the non-relativistic rather than the relativistic case [13]. The contribution of Bose to quantum statistical physics also shows that light can be treated as particles, and the generalization of this concept by Einstein led to the concept of bosonic particles, which obey Bose-Einstein statistics. The dual (particle and wave) nature of the photon was extended in 1924 by de Broglie, who proposed that both natures are present in a single particle, though not at the same instant, and developed the relation λ = h/p to study the wave-like properties, where λ is the wavelength and p is the momentum; the packets of energy were later named quanta. In 1927, after the wave-like properties of the electron were demonstrated experimentally by Davisson and Germer, the dual (wave and particle) nature was accepted in the scientific world, although debate continues [14].
2. Mathematical formulation to study the Compton shift for different elementary-particle masses
Compton scattering and the Compton wavelength derivation: Considering Compton scattering as shown in Figure 1, the conservation of energy in Compton scattering is given by E + m_0 c^2 = E' + E_e, (2) where E = hf and E' = hf' are the photon energies before and after scattering, and the rest-mass energy of the electron is E_0 = m_0 c^2. From the relativistic energy-momentum relation, the energy of the recoil electron is E_e = sqrt((p_e c)^2 + (m_0 c^2)^2).
On substituting these values into equation (2), we obtain an expression for the recoil energy, E_e = E - E' + m_0 c^2. (3) Now, combining equation (3) with conservation of momentum, p_e = p - p', one can derive the relation 1/E' - 1/E = (1 - cos θ)/(m_0 c^2). (4) For E = hf = hc/λ and E' = hf' = hc/λ', rearrangement of equation (4) gives λ' - λ = (h/(m_0 c))(1 - cos θ). (5) This is the same as equation (1); writing Δλ = λ' - λ and λ_c = h/(m_0 c) = 0.00243 nm = 2.43 pm for the Compton wavelength, it becomes Δλ = λ_c (1 - cos θ). (6) Equation (6) shows that the wavelength shift depends upon the scattering angle and on the mass of the electron. For long wavelengths λ >> λ_c (i.e., hf << m_0 c^2), the scattering is closely elastic. In terms of energy, relation (6) can be extended as 1/E' = 1/E + (1 - cos θ)/(m_0 c^2), (7) where E is the energy of the incident gamma ray and E' is that of the gamma ray scattered at an angle θ. Taking E = hf and E' = hf', equation (7) is modified as E' = E / [1 + (E/(m_0 c^2))(1 - cos θ)]. (8) The energy difference E - E' denotes how much energy has been deposited to the electrons in the crystal by scattering [15]. The Compton wavelength of photons or gluons is described in the standard model (SM) of the elementary particle [16].
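Equations (5)-(8) are straightforward to evaluate numerically. A minimal Python sketch is given below; it is an illustrative re-implementation rather than the authors' MATLAB code, and the constants, function names, and example values are ours.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light in vacuum, m/s

def compton_shift(theta_rad, mass_kg):
    """Wavelength shift of equation (6): delta-lambda = (h / m c)(1 - cos theta)."""
    return (H / (mass_kg * C)) * (1.0 - np.cos(theta_rad))

def scattered_energy(E_in, theta_rad, mass_kg):
    """Scattered photon energy E' of equation (8), with E_in in joules."""
    return E_in / (1.0 + (E_in / (mass_kg * C**2)) * (1.0 - np.cos(theta_rad)))

# Example: electron mass, 90-degree scattering -> shift of one Compton wavelength (~2.43 pm)
m_electron = 9.109e-31  # kg
print(compton_shift(np.pi / 2.0, m_electron))
```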
Result and Discussion
To study the Compton shift for elementary-particle masses between 10^-31 kg and 10^-27 kg, we developed code in MATLAB. Two variables are considered in this study of the Compton shift: one is the particle mass, and the other is the scattering angle.
To study the Compton shift for the emitted particle (electron) of mass 3.11 × 10^-31 kg, with an incident photon energy higher than the threshold, the results are visualized below for different scattering angles.
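A sketch of the parameter sweep described in this section is shown below, again in Python rather than the authors' MATLAB; the grid spacings are assumptions. It evaluates equation (6) over masses between 10^-31 kg and 10^-27 kg and scattering angles between 0 and 3 rad, which reproduces the qualitative behaviour discussed in the Conclusion (the shift grows with the angle and decreases with the mass).

```python
import numpy as np

H = 6.62607015e-34  # Planck constant, J s
C = 2.99792458e8    # speed of light in vacuum, m/s

masses = np.logspace(-31, -27, 5)    # particle masses in kg (grid spacing assumed)
angles = np.linspace(0.0, 3.0, 301)  # scattering angles in rad

# delta-lambda(theta, m) = (h / m c)(1 - cos theta), broadcast over all (mass, angle) pairs
shift = (H / (masses[:, None] * C)) * (1.0 - np.cos(angles[None, :]))

for m, row in zip(masses, shift):
    print(f"m = {m:.1e} kg: max shift = {row.max():.2e} m (at theta = 3 rad)")
```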
Conclusion
The observations show that the Compton shift depends on both the elementary-particle mass and the scattering angle. The Compton shift decreases with increasing mass and increases with scattering angle. For the considered elementary-particle masses of order 10^-31 kg to 10^-27 kg, the nature of the shift is the same for all masses, and the shift ranges from a maximum of about 10^-11 m to a minimum of about 10^-18 m. The shift is largest for electron-range masses and smallest for proton- and neutron-range masses, being of order 10^-11 m for the electron mass and 10^-18 m for the proton and neutron masses. Hence, it is concluded that Compton shifting is valid for masses from 10^-31 kg to 10^-27 kg, that is, from the electron mass to the proton and neutron masses.
Declaration statement
Availability of data and materials: Data were not taken from any external sources. The data were generated using MATLAB software for this work, based on the equations derived above; the plots above were drawn using MATLAB code.
Funding: No funding was received.
Authors' contribution: The authors contributed equally. | 2021-07-26T00:05:20.737Z | 2021-06-17T00:00:00.000 | {
"year": 2021,
"sha1": "bf39625a0a3cc3c33538708ef06f59f4973bbdba",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-611926/v1.pdf?c=1631885282000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "2725048ff2262dccdd7c81e56fe87d5b12c332b6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
27761750 | pes2o/s2orc | v3-fos-license | Role of fibroblast growth factors in bone regeneration
Bone is a metabolically active organ that undergoes continuous remodeling throughout life. However, many complex skeletal defects such as large traumatic bone defects or extensive bone loss after tumor resection may cause failure of bone healing. Effective therapies for these conditions typically employ combinations of cells, scaffolds, and bioactive factors. In this review, we pay attention to one of the three factors required for regeneration of bone, bioactive factors, especially the fibroblast growth factor (FGF) family. This family is composed of 22 members and associated with various biological functions including skeletal formation. Based on the phenotypes of genetically modified mice and spatio-temporal expression levels during bone fracture healing, FGF2, FGF9, and FGF18 are regarded as possible candidates useful for bone regeneration. The role of these candidate FGFs in bone regeneration is also discussed in this review.
Background
Tissue engineering is an interdisciplinary field of research and clinical applications, which focuses on restoration of impaired function and morphology of tissues and organs by repair, replacement, or regeneration. It uses a combination of several technological approaches beyond traditional transplantation and replacement therapies. The key components of these approaches are using of cells, scaffolds, and bioactive factors.
Bone is a specialized connective tissue that is being continuously remodeled throughout life. However, many complex clinical conditions such as large traumatic bone defects, osteomyelitis, tumor resection, or skeletal abnormalities can impair normal bone healing. Bone tissue engineering is required for regenerating tissue from these conditions. Studies on the mechanisms of physiological, pathological skeletal development and fracture healing have provided a wealth of information towards potential methods for regulating osteoblast proliferation and differentiation to regenerate bone.
Here, we focus on one of the main components of tissue engineering, bioactive factors, especially fibroblast growth factors (FGFs) and their roles in bone regeneration.
FGF signaling in skeletal formation has been demonstrated by identification of gain-of-function mutations in human FGF receptor (FGFR) genes in craniosynostosis and dwarfism patients and skeletal phenotypes in genetically modified mice for FGFs and FGFRs [1]. FGFRs are transmembrane tyrosine kinase receptors that belong to the immunoglobulin (Ig) superfamily consisting of extracellular, transmembrane, and intracellular tyrosine kinase domains. Binding of FGFs to FGFRs activates intracellular downstream signaling pathways such as RAS-MAP and PI3K-AKT [2]. The FGFR family consists of four members, FGFR1 to FGFR4. Among the four FGFRs, skeletal mutations have been found in FGFRs1-3 expressed in the osteoblast cell lineage. Most of the mutations are point mutations, and distinct mutation sites result in different syndromes [1]. Some of the mutations have been introduced into mice and confirmed to affect skeletal development.
FGFs and bone regeneration
The mammalian FGF family contains 22 members. Some of them are intracellular FGFs (iFGFs), which are expected to function without binding to FGFRs. FGF19 (FGF15 for mice), FGF21, and FGF23 are hormone-like FGFs which act in an endocrine manner in postnatal life. All other FGFs have high affinity to heparin and act in a paracrine manner by binding to the four receptors with different levels of affinities [3][4][5]. The roles of various FGFs are compiled in Table 1. Skeletal phenotypes after deletion of FGFs in mice are found in FGFs 2,8,9,10,18,and 23 [6], which confirm the indispensable function of FGF/FGFR signaling in the process of osteogenesis. It is of note that FGF/FGFR signaling does not directly induce osteoblast differentiation but is known to modulate osteoblast differentiation. However, the exact mechanism of FGF/FGFR signaling in bone healing or regeneration has not been elucidated. Schmid et al. [7] reported expression levels of different FGFs by reverse transcriptase polymerase chain reaction (RT-PCR) during normal healing of tibial fracture in mice. Throughout the healing process, FGFs 2, 5, and 6 were upregulated with different levels. FGF9 was highly expressed at the early stage of healing. FGFs 16 and 18 were transcribed at the late stage. Upregulation of FGFs 1 and 17 was delayed after callus formation. This study also identified concordance between the expression of the particular FGFs and their known receptors during different stages of fracture repair. Among three FGFRs expressed in the osteoblast cell lineage, FGFR3 showed the greatest change in expression levels. This study provided the idea of how FGFs work at the different stages of healing, which could be applied to bone regenerative therapy.
Considering clinical applications, studies involving modification of FGF signaling by ligands are more practical compared to those involving modulating FGFRs. Animal studies revealed that the expression of FGFs 8 and 10 is required for the early stage of limb development, which suggests that they are not directly involved in osteogenesis. Among FGFs which change their expression levels during bone fracture healing, FGF1 protects the osteoblast cell lineage from cell death [8]. FGF5 is associated with the hair follicle cycle. FGF6 is involved in muscle regeneration and those events that occur during the healing process. Therefore, in this review, we chose FGFs 2, 9, and 18 to discuss about their properties and applications for bone regeneration.
FGF2
FGF2 is the most common FGF ligand that is being used in the regenerative medicine field including bone regeneration. It has been well known that FGF2 is a critical component of maintenance of many kinds of stem cell cultures [9]. Stabilization of FGF2 levels in a culture medium using polyesters of glycolic and lactic acid (PLGA) microspheres as a FGF2 release controller successfully improved the expression of stem cell markers, increased stem cell numbers, and decreased spontaneous differentiation [10]. FGF2-deleted mice showed a significant decrease in bone mass and bone formation without gross abnormalities. Bone marrow stromal cells (BMSCs) from the FGF2 −/− mice demonstrated decreased osteoblast differentiation, which can be partially rescued by addition of exogenous FGF2 in vitro [11]. Furthermore, FGF2 −/− BMSC-derived osteoblasts displayed a marked reduction in inactive phosphorylated glycogen synthase kinase-3 (GSK-3) as well as a significant decrease in Dkk2 mRNA, which plays important roles in osteoblast differentiation. These results suggested that FGF2 is an endogenous, positive regulator of bone mass [12]. In contrast, non-specific overexpression of FGF2 (Tg-FGF2) in mice exhibits a dwarf phenotype with impaired bone mineralization and osteopenia [13]. Addition of FGF2 into a culture medium of a mouse osteoblast-like cell line, MC3T3-E1, activated cell proliferation and suppressed mineralization [14]. In this study, treatment of the cells with FGF10 as an experimental control did not show any effects. These observations suggested that FGF2 could work in both directions for osteogenesis promotion and inhibition. It is important to elucidate conditions for positive and negative osteogeneses.
FGF9
FGF9 −/− mice showed disproportionate shortening of the proximal skeletal elements (rhizomelia), which suggests that FGF9 promotes chondrocyte hypertrophy and vascularization of the cartilage anlagen [15]. A missense mutation of FGF9 in mice resulted in decreased heparin binding, which caused elbow-knee synostosis [16]. A similar mutation was also found in humans [17]. FGF9 +/− mice did not seem to have a particular phenotype. However, bone healing of a 1-mm unicortical defect was impaired with decreased levels of neovascularization and osteoclast recruitment. This condition was rescued by exogenous addition of FGF9 (2 μg) with collagen sponge but not by exogenous FGF2 application [18]. These reports elucidated the specific functions of FGF9 in bone healing.
Bone healing of a 1-mm unicortical defect in diabetic model mice (db/db) was significantly delayed with decreased levels of osteogenesis marker expressions. Treatment of FGF9 with collagen sponge to the defect in the db/db mice induced better bone healing [19]. Treatment with FGF9-soaked collagen sponge to mouse circular calvarial bone defects of a diameter of 2 mm showed sufficient bone regeneration in postnatal day 7 (P7) mice but not in postnatal day 60 (P60) mice [20]. Addition of FGF9 at various concentrations into dexamethasone-containing media for inducing osteogenesis of BMSCs and dental pulp stem cells resulted in stimulation of proliferation but not differentiation [21].
FGF18
Deletion of FGF18 in mice resulted in delayed suture formation, reduced osteoblast lineage cell proliferation, delayed osteoblast differentiation, and perinatal death. The long bones of FGF18 −/− mice showed reduced osteoblast differentiation but increased chondrocyte proliferation and differentiation. These results suggested that FGF18 demonstrated a positive effect on osteogenesis by enhancing cell proliferation and differentiation but a negative effect on chondrogenesis [22,23]. However, it was also proposed that FGF18 transduced the signal through FGFR3 to enhance cartilage formation [24].
In vitro analysis on mesenchymal stem cells (MSCs) derived from the bone marrow suggested that FGF18 enhanced osteoblast differentiation by activation of FGFR1 or FGFR2 signaling [25]. They also showed that overexpression of FGF18 by lentiviral infection or direct addition of FGF18 into the culture medium could induce the expression of osteoblast marker genes in C3H10T1/2 fibroblastic cells. Treatment of FGF18 on rat-derived MSCs under a differentiation-inducing condition showed elevated expression of osteoblast differentiation markers and mineralization [26]. Low-dose FGF18 treatment with bone morphogenetic protein 2 (BMP2)-dependent osteogenic induction of MC3T3-E1 cells enhanced mineralization whereas high-dose treatment inhibited the process (unpublished observation of Sachiko Iseki). FGF18-soaked heparin-coated acrylic beads accelerated osteoblast differentiation in mouse fetuses by upregulating the expression of BMP2 in osteoblast cell lineage cells [27]. In accordance with the above reports, FGF18 application with BMP2 in cholesteryl group- and acryloyl group-bearing pullulan (CHPOA) nanogels stabilized BMP2-dependent bone regeneration of critical-sized bone defects on mouse calvarium [28].
Application of FGFs in bone regeneration
The above discussions suggest that although FGFs do not have osteoinductive property, they function as an accelerator of osteogenesis under the appropriate conditions. It is possible that FGF2 and FGF9 work on proliferation of osteoblast cell lineage as well as induction of angiogenesis, and FGF18 functions in promotion of osteoblast differentiation. Tables 2 and 3 show some of the in vivo experiments in which FGFs were applied to non-critical-and critical-sized bone defects for bone healing, respectively. Further applications of FGFs have been elaborated by Du et al. and Gothard et al. [29,30].
Systemic or subcutaneous injections of FGF2 could enhance osteogenesis. However, it was shown that systemic injections of FGF2 caused adverse extraskeletal effects [31]. Therefore, local administration has been chosen as a more preferable method for applying bioactive factors.
FGF2 has been used for inducing angiogenesis and enhancing osteogenesis in non-critical-sized bone defects by activating proliferation of osteoblast cell lineages. FGF9 is also suggested to be involved in angiogenesis by controlling VEGFa expression [18]. As long as osteogenesis is taking place to recover the bone defect, FGF2 can support or even enhance the healing. Recent studies suggest that high-dose FGF2 inhibits progression of osteoblast differentiation [20,32,33] (also unpublished observation of Sachiko Iseki) and low concentration of FGF2 enhanced osteogenesis [33,34]. In contrast, it is likely that high-dose FGF18 can promote osteoblast differentiation in vivo [20,28], while FGF18 treatment in vitro inhibits mineralization [14]. Kang et al. developed a sequential delivery system with fiber scaffolds in which FGF2 was released first and then FGF18 [35]. Applying this scaffold to rat calvarial critical-sized bone defects resulted in better bone volume and density, although the amount of FGFs applied to the defect was not clear. This study suggested that it is critical to control the amount or release speed of soluble factors for the bone regeneration process.
Table 2. The various growth factors and their combinations used for regeneration of non-critical-sized defects, their dose, the site of application, the carrier used for the application, and the investigations through which the effects of bone healing have been studied (i.p., intraperitoneal injection).
Table 3. The various growth factors and their combinations used for regeneration of critical-sized defects, their dose, the site of application, the carrier used for the application, and the investigations through which the effects of bone healing have been studied.
Conclusions
FGFs play an important role in the development and regeneration of various tissues. In this article, we have summarized the prime functions of all FGFs, and further, we have discussed elaborately about FGFs 2, 9 and 18, which play a major role in bone regeneration. We have also discussed about different carrier systems for FGF delivery in different animal models for bone regeneration. With the ongoing advancements in the field of cellular and molecular biology, we could expect that more detailed functioning of FGF/FGFR will be elucidated. Further, with the advent of novel carriers and protein delivery systems, it could be possible that the spatio-temporal release of FGFs can be controlled precisely as needed. This would improve our understanding and help us to clinically translate the use of FGFs to achieve effective bone regeneration. | 2017-08-01T05:07:37.660Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "65771f10376feee2b50121f706f1de50dc3ed7d2",
"oa_license": "CCBY",
"oa_url": "https://inflammregen.biomedcentral.com/track/pdf/10.1186/s41232-017-0043-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65771f10376feee2b50121f706f1de50dc3ed7d2",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
116052753 | pes2o/s2orc | v3-fos-license | Numerical study of a VM type multi-bypass pulse tube cryocooler operating at 4K
The VM cryocooler is one kind of Stirling-type cryocooler working at low frequency. At present, we have obtained liquid helium temperature by using a two-stage VM/pulse tube hybrid cryocooler. As a new kind of 4 K cryocooler, there are many aspects that need to be studied and optimized in detail. In order to reduce the vibration and improve the stability of this cryocooler, a pulse tube cryocooler was designed to get rid of the displacer in the first stage. This paper presents a detailed numerical investigation of this pulse tube cryocooler using the SAGE software. Low-temperature phase shifters were adopted in this cryocooler, namely a low-temperature gas reservoir, a low-temperature double-inlet, and a multi-bypass. After optimization, the structural parameters and the best diameters of the orifice, multi-bypass, and double-inlet were obtained. With a pressure ratio of about 1.6 and an operating frequency of 2 Hz, this cryocooler can supply more than 40 mW of cooling power at 4.2 K, and the total input power needed is no more than 60 W at 77 K. Based on the highest-efficiency, high-capacity 77 K cryocooler, the overall efficiency of this VM-type pulse tube cryocooler is above 0.5% of the Carnot efficiency.
Introduction
Small-scale, high-efficiency 4 K cryocoolers have many important applications in the fields of superconductive electronics, aerospace exploration, medical science, etc. The GM cryocooler and the GM-type pulse tube cryocooler (GPTC) are the only commercial 4 K cryocoolers, but the efficiency of this type of cryocooler is relatively low due to the low efficiency of its GM compressor. The large bulk and the oil filtering system of the GM compressor also limit its applications in many fields. In recent years, the multi-stage 4 K Stirling pulse tube cryocooler (SPTC) [1] [2] [3] has developed rapidly because it is compact, long-lived, and has low vibration. This type of cryocooler is driven by a linear compressor, which normally works at frequencies above 20 Hz. However, rare-earth materials, such as Er3Ni and HoCu2, need to be used to increase the volumetric heat capacity of the regenerator below 10 K. For mechanical reasons, this type of material is difficult to process into screen meshes, so spheres are normally used instead. Compared with low-frequency cryocoolers, the high-frequency oscillating flow results in a higher pressure drop in the regenerator and a lower regenerator efficiency. Therefore, it is necessary to develop a high-efficiency 4 K SPTC operating at low frequency. The Vuilleumier (VM) cryocooler is one kind of closed-cycle Stirling-type cryocooler driven by a thermal compressor. The VM cryocooler normally works at low frequency, so it has the potential to work with high efficiency at 4 K. Combining the thermal compressor with a pulse tube gives a new type of high-efficiency, compact, long-life, low-vibration VM-type 4 K cryocooler (VM-PTC). Dai et al. [4] introduced the concept of the VM-PTC and built a cryocooler of this type, which obtained a lowest temperature of 3.5 K by using 77 K and 20 K pre-coolers. This work also demonstrated its capacity for obtaining liquid helium temperature. Matsubara et al. [5] also presented the concept of a thermally actuated He-3 PTC. In this cryocooler, the thermal compressor worked between the temperatures of 300 K and 40 K. Recently, Wang et al. [6] developed a 15 K single-stage VM-PTC by using an 80 K Stirling-type PTC as a pre-cooler.
In our previous works [7] [8], a VM/PT hybrid cryocooler was built and successfully obtained temperatures below 4 K. In order to reduce the vibration and improve the stability of this cryocooler, a pulse tube cryocooler was designed to get rid of the displacer in the first stage. This paper builds a numerical model of the VM-PTC by using the SAGE software. The double-inlet, multi-bypass, and cold gas reservoir were used as its phase-relationship shifters. After optimization, this cryocooler can supply more than 40 mW of cooling power at 4.2 K, and the total input power needed is no more than 60 W at 77 K. In this cryocooler, the thermal compressor is used to generate a pressure wave. To obtain liquid helium temperature, a multi-bypass configuration was adopted in this VM-PTC. The multi-bypass was invented by Zhou et al. [9] and has been experimentally proven to be an effective way to improve the performance of a PTC. Figure 1 also shows the structural diagram of the multi-bypass. For practical reasons, a coaxial arrangement is adopted in our design.
Physical and mathematical model
In our design, the orifice, cold gas reservoir, and cold double-inlet were used as the phase-relationship shifters of the PTC. The phase shifters of the PTC and the cold cavity of the thermal compressor were located at the low-temperature (77 K) stage. For the regenerators of the PTC, a layered structure was adopted: stainless steel screen was used at the 77 K end, and lead spheres, Er3Ni, and HoCu2 were used at the cold end. The detailed structural parameters are listed in Table 1.
Mathematical model
The numerical simulation was conducted by using the SAGE software, a 1D simulation program based on thermal and fluid dynamic equations [10]. The software provides a series of modules to simulate the different components of a cryocooler. All the modules are connected by energy flow, mass flow, and pressure wave, so it is convenient to connect different modules to build different cryocoolers. Besides, this software also has an optimization function, so the structural parameters can be optimized automatically. Figure 2 shows the numerical model in the SAGE software. The radial heat transfer of the pulse tube and regenerator was considered in order to simulate the coaxial-structure PTC. The double-inlet, multi-bypass, and orifice were calculated by using the capillary-tube module. The double-inlet, orifice, and gas reservoir were kept at a constant temperature of 77 K. The cold end was kept at 4.2 K to calculate its cooling power. Figure 3 shows how the pressure ratio at the cold end and the PV power in the middle cavity change with the opening of the double-inlet. With the opening of the double-inlet, the pressure ratio at the cold end increases, and the PV power in the middle cavity decreases. This shows that the double-inlet will increase the cooling capacity of the cold end and decrease the power consumption; an appropriate double-inlet would therefore increase the efficiency of the cryocooler. With increasing diameter of the double-inlet, the cooling capability of the cold end increases and the cooling power therefore increases. But with the opening of the double-inlet, the DC flow rate increases at the same time. The DC flow passes directly through the regenerator, which adds to the thermal loss in the regenerator, so when the double-inlet is opened too far, the cooling power decreases because of the DC flow. Figure 4 also shows the influence of the multi-bypass on the cooling power. For each multi-bypass setting, there is a corresponding optimal double-inlet. For this case, the optimal multi-bypass is about 0.4 mm, and the corresponding optimal double-inlet is about 0.95 mm; the cooling power is about 42 mW at 4.2 K under this condition. Figure 5 shows the influence of the double-inlet on T1 and the DC flow. With the opening of the double-inlet, the DC flow rate increases, as discussed in the last paragraph. The double-inlet makes T1 decrease at first; when the DC flow rate becomes too large, T1 then increases. Fig. 6: The distribution of acoustic power. Figure 6 shows the distribution of acoustic power in the regenerator and pulse tube. The axial structure was adopted in the simulation, so the directions of the regenerator and pulse tube are opposite; negative acoustic power in the pulse tube means its direction is negative. The acoustic power was about 13 W at the inlet of the regenerator, and about 7 W was consumed in regenerator Ⅰ. Through the multi-bypass, a little acoustic power flowed into the pulse tube. At the inlet of regenerator Ⅱ, the acoustic power was about 5 W, and about 4 W was consumed in regenerator Ⅱ. The pulse tube barely consumed any acoustic power because the pressure drop in the pulse tube was very small. At the hot end of the pulse tube, the acoustic power increased sharply because of the double-inlet: about 1 W of acoustic power flowed through the double-inlet and was consumed in the orifice.
Acoustic power and thermal efficiency
For this case, the largest cooling power at 4.2 K was about 42 mW. The PV power consumption in the middle cavity was about 25 W, and the heat loss (including the shuttle loss, heat conduction loss, gas leakage loss, etc.) was about 35 W, so the total heat consumption at 77 K was about 60 W. For the highest-efficiency 77 K cryocooler, the relative Carnot efficiency is above 28% [11], so its input power can be kept below about 600 W to supply 60 W of cooling power at 77 K. The overall efficiency of this VM-type pulse tube cryocooler is therefore above 0.5% of the Carnot efficiency.
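The relative Carnot efficiency quoted here can be checked with a short calculation, sketched below in Python. The 300 K ambient temperature and the variable names are our assumptions; the heat loads and the 28%-of-Carnot precooler figure are taken from the text, and the result comes out close to the quoted 0.5%.

```python
# Rough check of the overall relative Carnot efficiency of the VM-PTC (assumed ambient: 300 K)
T_amb, T_pre, T_cold = 300.0, 77.0, 4.2   # temperatures in K
Q_pre, Q_cold = 60.0, 0.042               # heat lifted at 77 K and at 4.2 K, in W

# Electrical input implied by a 77 K precooler running at 28% of Carnot efficiency
cop_carnot_pre = T_pre / (T_amb - T_pre)       # ideal COP at 77 K, ~0.35
W_input = Q_pre / (0.28 * cop_carnot_pre)      # ~620 W, i.e. of order 600 W

# Overall relative Carnot efficiency of the 4.2 K stage, referred to that electrical input
cop_carnot_cold = T_cold / (T_amb - T_cold)    # ideal COP at 4.2 K, ~0.014
relative_carnot = (Q_cold / W_input) / cop_carnot_cold
print(f"input ~ {W_input:.0f} W, relative Carnot efficiency ~ {100.0 * relative_carnot:.2f} %")
```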
Preliminary experimental result
Based on the simulation results, an experimental system has been built. Figure 7 shows the preliminary result of this VM-type pulse tube cryocooler. At present, the opening of the multi-bypass is 0.38 mm, and the opening of the double-inlet is 0.75 mm. The lowest temperature of the cold end was 6.1 K, and the temperature at the multi-bypass was 48.9 K. The operating frequency was 2 Hz, and the average pressure was 1.8 MPa. The multi-bypass and double-inlet should be optimized in future work.
Conclusion
This paper studied a new type of 4 K cryocooler, the VM-type multi-bypass pulse tube cryocooler, by using a numerical method. By using the optimization function of the Sage software, the structural parameters of this cryocooler were obtained. The influence of the double-inlet on the pressure ratio, PV work consumption, cooling power, and DC flow rate was studied. The acoustic power and thermal efficiency of this cryocooler were also analyzed. Finally, an experiment based on the simulation results was built, and a preliminary temperature of 6.1 K was obtained.
"year": 2017,
"sha1": "c85da4c27acd451c0153c85d318f45a837405371",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/278/1/012048/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "cbb29f1ab41610527c13d7b655c1bc38091686a2",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
7453260 | pes2o/s2orc | v3-fos-license | Nutrition and prevention of Alzheimer’s dementia
A nutritional approach to prevent, slow, or halt the progression of disease is a promising strategy that has been widely investigated. Much epidemiologic data suggests that nutritional intake may influence the development and progression of Alzheimer’s dementia (AD). Modifiable, environmental causes of AD include potential metabolic derangements caused by dietary insufficiency and or excess that may be corrected by nutritional supplementation and or dietary modification. Many nutritional supplements contain a myriad of health promoting constituents (anti-oxidants, vitamins, trace minerals, flavonoids, lipids, …etc.) that may have novel mechanisms of action affecting cellular health and regeneration, the aging process itself, or may specifically disrupt pathogenic pathways in the development of AD. Nutritional modifications have the advantage of being cost effective, easy to implement, socially acceptable and generally safe and devoid of significant adverse events in most cases. Many nutritional interventions have been studied and continue to be evaluated in hopes of finding a successful agent, combination of agents, or dietary modifications that can be used for the prevention and or treatment of AD. The current review focuses on several key nutritional compounds and dietary modifications that have been studied in humans, and further discusses the rationale underlying their potential utility for the prevention and treatment of AD.
OVERVIEW
Alzheimer's dementia (AD) is the most commonly recognized cause of dementia in the aging population (Brookmeyer et al., 2007). First described by the German psychiatrist and neuropathologist, Alois Alzheimer, in 1906, the eponymous syndrome is recognized worldwide as a major cause of morbidity and mortality in the aging population contributing to a major burden on healthcare systems (Brookmeyer et al., 2007). Amyloid plaque deposition in the brain, the development of neurofibrillary pathology, neuronal loss and dysfunction of neuroanatomic systems, including those affecting cholinergic transmission, are among the commonly described findings in persons with AD (Jicha and Carr, 2010). In addition to these core pathological features of AD, much evidence has accumulated that increased oxidative stress, defects in mitochondrial dysfunction and cellular energy production, and chronic inflammatory mechanisms contribute to the degenerative cascade in this complex disease (Rao and Balachandran, 2002;Cunnane et al., 2011;Rubio-Perez and Morillas-Ruiz, 2012). A cure for the condition is not known and current treatment approaches include maximizing transmission of acetylcholine and other neurotransmitters while also using other medications and multidisciplinary strategies to treat associated comorbidities focused on improving quality of life and alleviating symptom burden (Jicha and Carr, 2010).
A recent NIH scientific roundtable concluded that there is no definitive evidence for benefit of any preventive strategy for AD, however, this working group also concluded that preventative strategies focused on modifying environmental risk factors deserves further scientific exploration and study (Daviglus et al., 2011). In response to this consensus statement, there was a broadbased push back by many members of the Alzheimer's research community (particularly from the animal model and epidemiological research folks whose data sets were marginalized and patronized as being of "poor scientific quality"). Researchers in the field of AD were also underrepresented in the consensus group leading to further debate as to the validity of the consensus statement. As such it is clear that the field is far from consensus on the debate of whether viable interventions for the prevention of AD exist. While the search for a cure remains elusive, there is much hope that preventative strategies, such as dietary modification and nutritional supplementation, may reduce the global burden of AD.
A nutritional approach to prevent, slow, or halt the progression of disease is a promising strategy that has been widely investigated. Much epidemiologic data suggests that nutritional intake may influence the development and progression of AD (Gillette-Guyonnet et al., 2013). Modifiable, environmental causes of AD include potential metabolic derangements caused by dietary insufficiency (Knopman et al., 2001; Kamphuis and Scheltens, 2010; Cunnane et al., 2011; Cardoso et al., 2013; Hu et al., 2013; Lopes da Silva et al., 2014). In addition, many nutritional supplements and dietary modifications may directly influence the pathological contributions of increased oxidative stress, defects in mitochondrial dysfunction and cellular energy production, chronic inflammatory mechanisms, and even direct pathways to amyloid accumulation and neurofibrillary degeneration that contribute to the degenerative cascade in AD (Figure 1; Rao and Balachandran, 2002; Lau et al., 2007; Weih et al., 2007; Pasinetti and Eberstein, 2008; Kamphuis and Scheltens, 2010; Cunnane et al., 2011; Pocernich; Solfrizzi et al., 2011; Rubio-Perez and Morillas-Ruiz, 2012; Thaipisuttikul and Galvin, 2012; Gillette-Guyonnet et al., 2013; Hu et al., 2013; Shah, 2013). Nutritional modifications have the advantage of being cost effective, easy to implement, socially acceptable, and generally safe and devoid of significant adverse events in most cases. Many nutritional interventions have been studied and continue to be evaluated in hopes of finding a successful compound that can be used for the prevention and/or treatment of AD (Thaipisuttikul and Galvin, 2012; Hu et al., 2013).
FIGURE 1 | Diagram of multiple influences of dietary constituents on cellular pathways and processes linked to neurodegeneration in AD. Antioxidants, trace minerals, flavonoids, metabolic substrates and modulators, vitamins, and omega-3 fatty acids, among others, have all been shown to downregulate the many pathological processes linked to the development of AD, including aging, amyloid deposition, neurofibrillary degeneration, synapse loss, inflammation, metabolic compromise, loss of vascular integrity, and neuronal injury and loss. Note: specific dietary factors may have more than one potential mechanism of action on the pathogenic processes contributing to neurodegeneration in AD. Links between pathological processes implicated in the development of AD may not be linear, but rather additive, and are shown in a circular fashion without implication for specific linkages or temporal associations between such processes.
As a discussion of the full complexity of dietary and nutritional interventions for the prevention of AD could fill a several-volume book series, for the sake of presenting concise, determinate clinical data, the current review focuses on several key nutritional compounds that have been studied in humans, and further discusses the rationale behind their use. Three fundamentally important considerations in interpreting the available human data on nutritional supplementation for the prevention of AD include: (1) Despite our understanding that regulation of metabolic processes is complex and may require multiple nutritional influences to correct aberrant processes leading to AD, traditional experimental design and study to date has largely focused on the use of single agents. Studies of combination therapies and interventions are still in their infancy, but may be extremely important for the identification of truly effective nutritional strategies for the prevention of AD; (2) Considerations of dietary excess and the negative consequences of some dietary constituents (especially those characterizing the current "Western" diet) may interfere with the potentially positive effects of specific nutritional interventions. This has been clearly shown in regard to the benefits of omega-3 fatty acids, which are neutralized by high dietary intake of omega-6 and saturated fats; (3) Our current understanding of AD includes a prodromal or preclinical phase ranging from 10 to 20 years in which preventative strategies may need to be maintained in order to see their effects. At best, available prevention data are derived from trials with short intervention periods typically extending to 18 months, with few studies extending to a 3-year period (Table 1). In addition, there is a relative paucity of "gold-standard," randomized, double-blind, placebo-controlled clinical trials investigating the impact of nutritional supplementation in AD. Despite these caveats, researchers and clinicians remain actively engaged in testing the hypothesis that nutritional approaches may prove beneficial for the treatment of AD, while conceding that continued work in this regard is imperative and must remain a high priority for funding authorities across the globe.
ANTIOXIDANTS
[Table 1, continued. Representative entry: mild to moderate AD, n = 409; treatment was effective in reducing homocysteine levels (p < 0.001), but had no effect on rate of change in ADAS-cog score; adverse events: none.]
Increased oxidative stress has been postulated to be a major initiating factor or contributor to the neurodegeneration seen in AD (Filipcik et al., 2006). There is much debate as to the type of antioxidant that may afford the most protection. Some antioxidants preferentially target cytosolic oxidative stress pathways whereas others preferentially serve as mitochondrial cofactors that may reduce intrinsic oxidative stress mechanisms (Filipcik et al., 2006). Antioxidants function in many ways, reducing oxidized membrane lipids, preventing carbonylation of proteins, limiting nucleic acid damage, and influencing stress kinase pathways. Such pleiotropic mechanisms of action inherent in the properties of any given antioxidant preclude precise determination of the specific cellular injury or pathway modulation that might be responsible for the beneficial effects of antioxidants in vitro and in animal models of AD. Irrespective of the many potential sites of action and influences on complex cellular pathways influenced by antioxidants, the hypothesis that antioxidant therapy may prevent or slow the development of AD has given rise to many human clinical studies that have provided much data on the potential risks and benefits of antioxidant therapy in the treatment of AD. Laboratory research has demonstrated that cytosolic antioxidants, such as vitamin E (α-tocopherol), can prevent AD-like changes in the brains of genetic AD mouse models (Nishida et al., 2006). Other antioxidants such as selegiline, a monoamine oxidase-β inhibitor that also inhibits oxidative deamination, have also shown neuroprotective properties in animal models of degenerative disease, although definitive human data are lacking in this regard (Tatton, 1993). The Alzheimer's Disease Cooperative Study Group (ADCS) evaluated these findings in a pivotal clinical trial that examined whether treatment with selegiline or α-tocopherol slowed the progression of disease in patients with moderately severe impairment from AD. Treatment with selegiline (15 mg twice daily) and α-tocopherol (1000 IU twice daily) increased median survival by 215 and 230 days over the placebo, respectively (Sano et al., 1997). These findings led to widespread use of a daily dose of 2000 IU α-tocopherol to slow progression in AD. Widespread use of α-tocopherol supplementation at this dose permeated the field until a meta-analysis suggesting an absolute increase in all-cause mortality of 39/10,000 subjects in the pooled data (p = 0.035; absolute risk ratio = 0.0039; number needed to harm = 256) from doses higher than 400 IU per day was published by Miller et al. (2005). The debate on whether vitamin E is protective or deleterious has raged on, with many cautious of the potential adverse effects of such high-dose supplementation, until 2013, when the ADCS study results were replicated in a large-scale "gold-standard" clinical trial, again supporting the use of α-tocopherol at doses of 2,000 IU per day for the treatment of AD (Dysken et al., 2014). Vitamin E should be taken in conjunction with vitamin C as a recharging antioxidant that maximizes the dose of vitamin E. Wheat germ, sunflower, and safflower oils, leafy green vegetables, and asparagus are among the best food sources of vitamin E.
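The mortality figures quoted from that meta-analysis are internally consistent, as the short illustrative calculation below shows; the 10,000-subject denominator and the excess-death count are taken from the sentence above, and everything else is ours.

```python
# Absolute risk increase and number needed to harm from the figures quoted above
excess_deaths = 39          # additional deaths per 10,000 treated subjects
denominator = 10_000

arr = excess_deaths / denominator   # 0.0039, the reported absolute risk figure
nnh = 1.0 / arr                     # ~256 subjects treated per one additional death
print(f"ARR = {arr:.4f}, NNH = {nnh:.0f}")
```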
In addition to its effects in combination with vitamin E, vitamin C (ascorbic acid) has been widely studied for the prevention and treatment of AD (Bowman, 2012). Vitamin C is an essential vitamin that cannot be produced by humans from glucose or other substrates. Fortunately, dietary sources of vitamin C are common and include citrus fruits, berries, and many vegetables that are a common part of most human diets worldwide. Several epidemiologic and cohort studies have investigated the association of vitamin C dietary intake and supplementation with AD and cognitive function in elderly humans. The data from well-characterized cohorts and epidemiologic populations comprise 33,252 subjects across 8 studies, and only 3 of these 8 studies have suggested a benefit of vitamin C intake (reviewed in Bowman, 2012). Further work examining plasma levels of vitamin C in seven studies (n = 1951 combined) showed a significant inverse association of vitamin C with AD and cognitive impairment that was universal across all cohorts studied. Four small studies have looked directly at CSF levels of vitamin C in relation to cognitive performance and AD (n = 122 combined), with 3 of the 4 studies showing a positive relationship. These data suggest that dietary intake per se may be less important than the biological levels attained through dietary intake of vitamin C (reviewed in Bowman, 2012). Unfortunately, "gold-standard," large, multi-center, prospective, randomized, placebo-controlled trials have not been conducted to date. Such studies are needed to determine if prospective supplementation with vitamin C can modify the pathological process responsible for AD. Nonetheless, risks associated with vitamin C supplementation have not been seen in any human studies to date, and the cumulative available data from cross-sectional, retrospective cohort, and epidemiological studies suggest that a potential benefit may exist.
Other studies have shown some benefit from mitochondrial cofactors that may function to reduce oxidative stress, such as coenzyme Q10 (CoQ10), in reducing amyloid plaque deposition in mouse models of AD (Yang et al., 2010). CoQ10 plays a role in the mitochondrial pathways that support aerobic respiration and the disposal of cellular breakdown products, a process whose failure is thought to contribute to amyloid plaque formation and accumulation. CoQ10 is easily available and has not been associated with any significant adverse events in numerous studies across disparate medical conditions (Hidaka et al., 2008). CSF penetration of CoQ10 is limited, however, with maximal availability in the heart, liver, and kidneys (Bhagavan and Chopra, 2006). Nevertheless, it remained a promising agent in the array of antioxidants being used in AD until a head-to-head comparison of cytosolic (α-tocopherol) and mitochondrial (CoQ10) antioxidants was recently published in patients with mild to moderate AD (Galasko et al., 2012). The results of this short 4-month study demonstrated a benefit of cytosolic antioxidants in reducing isoprostane levels in cerebrospinal fluid, whereas CoQ10 showed no biological benefit in the central nervous system (CNS; Galasko et al., 2012).
Other molecules of recent interest include latrepirdine (Dimebon), a repurposed antihistamine from Russia that led the field in interest and speculation as a new potential agent for the treatment of AD in the earlier part of this decade (Doody, 2009). The mechanism of action of latrepirdine was an enigma until a wealth of data was produced suggesting that it acts by blocking the mitochondrial membrane pore complex, thereby protecting mitochondria from the adverse effects of oxidative stress (Bharadwaj et al., 2013). Unfortunately, the intrigue and excitement were short-lived, as latrepirdine soon proved to be ineffective in modifying AD progression in several global Phase-III studies (Bharadwaj et al., 2013). The question as to whether other mitochondrial antioxidants may have utility in the prevention and treatment of AD remains unanswered. Organic selenium is another antioxidant that is currently being investigated in human clinical studies (Zhang et al., 2010). It is available in several forms, with varied biological actions and efficacy in laboratory paradigms. While inorganic selenium is poorly absorbed, selenomethionine and yeast-selenium are much more bioavailable after supplementation (Zhang et al., 2010). Animal studies suggest profound effects on brain health and the ability to significantly reduce AD pathology in genetic mouse models (Xiong et al., 2007). These findings have led to the use of antioxidant therapy including vitamins C and E, and selenium, in the largest prevention trial of AD ever conducted, the PreADViSe trial (Prevention of AD by Vitamin E and Selenium; Kryscio et al., 2013). This trial remains ongoing, and currently no data exist on the benefits of selenium supplementation in the prevention or treatment of AD in humans.
Lipoic acid, β-carotene, and other bioflavonoids are just a few of the other antioxidants that have been evaluated for their actions on mitochondrial enzymes like superoxide dismutase and α-ketoglutarate dehydrogenase. Results from clinical trials with these agents have not been sufficient to warrant changes in recommendations for practice. Clearly, further work, including early stage prevention trials, is needed before the utility or lack thereof of antioxidant therapy in the prevention and treatment of AD can be established.
OMEGA-3 FATTY ACIDS
Fatty acids are found in all the body's cells as part of the cell membranes and play a major role in cell membrane stability, fluidity, and synaptic connectivity (Jicha and Markesbery, 2010). Fatty acid oxidation by free radicals results in cell membrane damage and has been postulated to contribute to the pathogenesis of AD, although other roles of unsaturated fatty acids, such as lipid raft formation and maintenance of synaptic integrity and function, have taken on a primary role in the hypothesis that supplementation with such agents may prevent or slow AD progression (Jicha and Markesbery, 2010). Docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) are the major omega-3 fatty acids that have been studied in human clinical trials to date (Jicha and Markesbery, 2010). DHA is the most abundant omega-3 fatty acid in the brain. Polyunsaturated fatty acids (PUFAs) like DHA and EPA can absorb the oxidative stress of free radicals across their unsaturated moieties and increase the cell membrane fluidity necessary for lipid raft formation and the creation of effective synaptic contacts (Jicha and Markesbery, 2010). It is currently unclear what the best source and/or combination of specific omega-3 fatty acids might be for the prevention or treatment of AD, and so many different sources and formulations have been tested in human clinical trials.
Fish oil, rich in omega-3 fatty acids, when consumed regularly, has been found to lower the incidence of AD in epidemiologic studies (Jicha and Markesbery, 2010). Several of these studies have identified confounders, such as apolipoprotein E (ApoE) allele status and omega-6 intake, that have limited the reliability of studies across cohorts (Jicha and Markesbery, 2010). Several studies have demonstrated a lack of benefit from omega-3 supplementation on a background of a diet high in omega-6 fatty acids, which act similarly to saturated fatty acids, decreasing membrane fluidity in opposition to the effects of omega-3 fatty acids (Jicha and Markesbery, 2010). These data suggest that nutritional benefit may be overcome or canceled in the presence of negative dietary contributions. It is intriguing that the dramatic increase in the prevalence of AD over the last century not only parallels the increase in average lifespan, but also an increase in the ratio of omega-6 to omega-3 PUFAs in the average Western diet from 2 to more than 20 (Yehuda et al., 2005). ApoE status may be a further modulator of omega-3 benefit and is discussed further below.
Nonetheless, the epidemiologic and scientific data have led to several large scale clinical trials of omega-3 fatty acids in AD and in normal aging. The MIDAS study found that daily dietary supplementation with 900 mg of DHA produced an improvement in cognition equivalent to approximately 7 years of younger cognitive age over just 24 weeks as compared with placebo in elderly patients with cognitive decline. One large scale trial of fish oil supplementation in Europe showed potential benefit only for those subjects with Folstein Mini-Mental Status scores greater than or equal to 27 (Freund-Levi et al., 2006). Another large scale, multicenter, double-blind, randomized, placebo-controlled trial of high dose DHA supplementation failed to meet primary endpoints for success, but in a preplanned secondary analysis a benefit was seen for subjects with mild to moderate AD who were negative for the ApoE ε4 allele (Quinn et al., 2010). The mechanism by which ApoE ε4 negates the beneficial effects of omega-3 fatty acids is poorly understood, but the effect has been seen in both epidemiologic studies and prospective clinical trials. Carriers of this allele appear to show no benefit from DHA supplementation. This could be due to differences in lipid transport mechanisms or to direct effects on β-amyloid production and clearance. The potential for prevention of AD with omega-3 fatty acid supplementation in non-carriers of the ApoE ε4 allele deserves further study, as this population represents nearly 60% of those affected by AD (Jicha and Carr, 2010). Prevention or slowing of the disease process in even this subgroup of sporadic AD cases would have a major impact on the prevalence and health care cost burden associated with AD worldwide. While further work is clearly needed, the current data demonstrate that DHA is well tolerated and has not shown any significant adverse events in these studies, and further large scale trials in both normal aging and AD are currently underway.
In addition to potential direct effects on neurodegeneration in AD, the benefits of omega-3 fatty acids for reducing cerebrovascular disease are widely recognized and may provide additional benefit in those suffering from AD, in whom cerebrovascular disease may be the most common comorbidity (Jicha and Carr, 2010). Omega-3 fatty acids reduce circulating cholesterol levels, inhibit systemic inflammation in the circulatory system and vasculature, and inhibit platelet aggregation (Jicha and Markesbery, 2010). The potency of such effects in limiting cognitive decline in AD patients with comorbid cerebrovascular disease may be fundamental for the benefit of omega-3 fatty acids seen in human studies. Further studies aimed at distinguishing direct effects on AD pathogenesis from those associated with the abrogation of cerebrovascular disease are needed to resolve this fundamentally important issue.
B VITAMINS AND FOLATE
Several studies have shown that elevated serum homocysteine levels and lower vitamin B12 levels are associated with an increased risk of AD (Seshadri et al., 2002;Morris, 2003, 2012;Malaguarnera et al., 2004). Serum folate levels have not shown the same association with AD (Morris, 2002). However, the association of folate with homocysteine metabolism is well known. Most of the B complex vitamins have been shown to be directly or indirectly associated with neuronal health due to their involvement in neuronal metabolic pathways. Essential functions of the B vitamins include their roles in energy metabolism and as methyl donors. Deficiency of such vitamins has been directly linked to the development of specific neurological disorders, such as subacute combined degeneration (B12 deficiency), pellagra (niacin, vitamin B3, deficiency), and Wernicke-Korsakoff syndrome (B1 deficiency). Current clinical recommendations include routine laboratory analysis of such vitamin levels, and repletion if found to be deficient, in the setting of cognitive impairment or decline (Knopman et al., 2001). Only limited evidence is available on the effect of supplementation with these vitamins in replete states, although one study found a beneficial effect of B complex vitamin supplementation in slowing the loss of cerebral gray matter associated with the onset or progression of AD, although more data analysis is needed to elucidate the exact benefits of these vitamins (Douaud et al., 2013).
Cerefolin NAC is a product containing methylcobalamin, methylfolate and acetylcysteine and is approved for treating vitamin deficiencies associated with memory loss (Thaipisuttikul and Galvin, 2012). Evidence for its use is limited and research is ongoing, despite wide media and consumer appeal. Other clinical trials investigating folic acid and vitamin B supplementation have failed to show convincing results in neurologic improvement despite universal correction of deficiencies and downstream homocysteine levels, and as such the role of high dose B vitamin and folic acid supplementation in the prevention of AD has yet to be established (Aisen et al., 2008;Dangour et al., 2010;Daviglus et al., 2010;Morris, 2012).
MEDIUM CHAIN TRIGLYCERIDES (AXONA AND COCONUT OIL)
While glucose acts as the primary metabolic substrate in the brain, ketone bodies derived from medium chain triglycerides (MCTs) can serve as an alternative energy source in the brain. Much research has suggested that a localized, brain-specific insulin resistance develops in AD, leading to neuronal dysfunction and death (Craft et al., 2013;de la Monte and Tong, 2014). These data have prompted some to propose ketogenic supplements as a possible strategy for the treatment of AD (Kashiwaya et al., 2013;Zhang et al., 2013;Zilberter et al., 2013).
Patients with memory impairment showed significant improvement in memory after receiving oral supplementation with MCTs that produced higher ketone levels in the blood (Reger et al., 2004). Higher plasma levels of ketones like β-hydroxybutyrate were directly correlated with higher performance on memory scores during this study (Reger et al., 2004). Interestingly, the benefits of higher ketone levels were only seen in ApoE ε4-negative patients during this study, similar to the selective benefits of omega-3 fatty acid supplementation seen in ApoE ε4 non-carriers described above.
Widespread media attention has recently been paid to the notion that coconut oil may be a treatment for AD (DeDea, 2012). This was prompted by the posting of a short video that went "viral" on YouTube, showing dramatic improvements in a patient with AD following such dietary modification. The video included commentary from the patient's wife, a pediatrician, who was familiar with the ketogenic diet used for intractable seizures in the pediatric population. The video and the case itself have not been published scientifically, and it is uncertain whether the account represents true events or is fictitious in nature. In addition, the claim of efficacy for coconut oil use in AD has not been substantiated in any case report in the medical and scientific literature. The postulated mechanism for the effect was the content of caprylic acid, an MCT, which restored brain function as portrayed in this video (DeDea, 2012). Definitive scientific and/or clinical evidence for an effect of coconut oil in the prevention or treatment of AD remains to be seen, as no clinical trial data are as yet available to substantiate or refute these claims.
A nutritional supplement, labeled as a medical food (Axona®), containing octanoic (caprylic) acid has been studied for its potential benefits in AD (Henderson et al., 2009). Supplementation with Axona® in AD patients has been shown to produce improvement in cognition when measured at 45 and 90 days of supplementation (Henderson et al., 2009). Again, the benefits are seen only in ApoE ε4 allele-negative patients and are short-lived (Henderson et al., 2009). While the benefit remains uncertain, the risks appear minimal, and this strategy is taking root in the armamentarium of treatments many clinicians offer their patients with AD. The most common reasons for discontinuation of Axona® were adverse events like diarrhea, flatulence, and dyspepsia (Henderson et al., 2009). Caution should be exercised before prescribing Axona® or other supplements containing MCTs in patients with diabetes, gastrointestinal inflammation, metabolic syndrome, and renal disturbances, as these conditions may amplify the risk of adverse events.
COMBINATION MEDICAL FOODS (Souvenaid ® )
Nutritional approaches to influencing the course and progression of AD are among the newer strategies being implemented by scientists. Specially designed medical foods in various combinations are being used to target the underlying process in AD. The rationale behind this approach is that AD results from or causes biochemical alterations in metabolic pathways that are complex and dependent on multiple nutritional compounds and co-factors. Targeting such complex pathways may require combination therapy for maximal success and the use of single agent supplementation may simply be ineffective outside of the rare patient that may have a specific nutritional deficiency. Current innovative approaches include supplying such nutrient combinations in an artificial form to enhance synaptic health and neuronal stability.
Souvenaid ® is a medical food designed to contain just such a nutrient combination. Also known as Fortasyn Connect ® , this medical food is designed to promote synaptic formation and function (Kamphuis et al., 2011;Shah et al., 2013). The product includes precursors (uridine monophosphate; choline; phospholipids; EPA; DHA) and cofactors (vitamins E, C, B12, and B6; folic acid; selenium) necessary for the formation of synapses, integrity of neuronal membranes, resistance to oxidative stress, and optimal metabolic activity in the brain (Kamphuis et al., 2011;Shah et al., 2013).
The existing data have shown mixed results for this approach. One study did not find significant benefits from using Souvenaid ® while another showed that patients in the early stage of AD with higher baseline function showed cognitive improvement with higher doses of Souvenaid ® (Kamphuis et al., 2011;Shah et al., 2013). Additional trials with refined patient populations and optimized outcome measures may be needed to establish the benefit of this approach.
DIETARY INFLUENCES ON AD AND DIETARY MODULATION FOR THE PREVENTION OF AD
While the available scientific research and clinical trial data on nutritional interventions in AD have focused on single agents or simple combination therapies, a more holistic approach involves a broader examination of complex dietary influences that may not be easily adapted to scientific study and rigorous clinical examination. Evolutionary Discordance Theory suggests that chronic disease conditions are caused by evolutionary changes in diet (Cordain et al., 2005). This theory has some basis in observational data and in the consideration of the primal hominid diet of the hunter-gatherer Paleolithic era as essentially consisting of minimally processed, wild plant and animal foods (Cordain et al., 2005). The rise of many chronic health conditions has paralleled the development of our agrarian culture, which now consumes a diet rich in highly processed carbohydrates, refined sugars, alcohols, saturated vegetable oils, dairy products, and fatty domestic meats (Cordain et al., 2005). In light of this theory, it is intriguing that an increased risk for AD has been linked to all of these evolutionary dietary changes, which include an increase in the glycemic index and fatty acid composition of the diet, alterations in macro- and micronutrient composition, an increase in sodium intake, and a decrease in fiber content (Rao and Balachandran, 2002;Lau et al., 2007;Weih et al., 2007;Pasinetti and Eberstein, 2008;Kamphuis and Scheltens, 2010;Cunnane et al., 2011;Pocernich et al., 2011;Solfrizzi et al., 2011;Rubio-Perez and Morillas-Ruiz, 2012;Thaipisuttikul and Galvin, 2012;Gillette-Guyonnet et al., 2013;Hu et al., 2013;Shah, 2013). Such dietary changes have been linked to the rise in prevalence of metabolic syndrome, which has in turn been associated with the rise in AD prevalence in western society.
Metabolic syndrome is defined by the presence of at least three of the following: abdominal or central obesity (waist circumference >102 cm for men and >88 cm for women); elevated plasma triglycerides (TGs; ≥150 mg/dl); low high-density lipoprotein (HDL) cholesterol (<40 mg/dl for men and <50 mg/dl for women); high BP (≥130/≥85 mm Hg) or normotensive on antihypertensive treatment; and high fasting plasma glucose (≥110 mg/dl) or euglycemic on antidiabetic treatment (Grundy et al., 2005). This combination of central obesity, hyperlipidemia, hypertension, and insulin resistance or type II diabetes has been strongly linked with increased risk for the development of AD. The metabolic syndrome has been clearly associated with increased risk for cerebrovascular disease, which is a substantial comorbidity and confounding contributor to the development of AD in community-based cohorts, but it may also directly increase systemic and CNS inflammatory processes linked to AD through induction of adipokines that modulate these processes (Panza et al., 2010;Solfrizzi et al., 2010). While the components of metabolic syndrome are highly dependent on diet, contributions of other lifestyle factors leading to the metabolic syndrome, such as reduced exercise, should not be minimized (Grundy et al., 2005). While several studies examining pharmacologic manipulation of hypertension and hyperlipidemia have demonstrated cognitive benefit in the aging population, no interventional studies focused on nutritional correction of the complete metabolic syndrome for the prevention of AD exist in the literature today (Tzourio et al., 2003;Masse et al., 2005;Rockwood, 2006). Despite such positive results, large scale, multi-center, randomized, placebo-controlled human clinical trials of statins to reduce cognitive decline associated with AD have failed to date, leaving many to question the potential impact of modulating a single or isolated component of the metabolic syndrome in light of the complex, multifaceted pathways involved in the development of AD (Sano et al., 2011).
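For readers who want the diagnostic rule above in operational form, the following minimal sketch encodes the "at least three of five" criterion using the thresholds quoted from Grundy et al. (2005); the function name, field names, and example values are illustrative assumptions and are not part of the original definition:

```python
def meets_metabolic_syndrome_criteria(sex, waist_cm, tg_mg_dl, hdl_mg_dl,
                                      systolic_bp, diastolic_bp, on_bp_treatment,
                                      fasting_glucose_mg_dl, on_diabetes_treatment):
    """Return True if at least three of the five components quoted above are present."""
    components = [
        waist_cm > (102 if sex == "male" else 88),                      # abdominal/central obesity
        tg_mg_dl >= 150,                                                # elevated triglycerides
        hdl_mg_dl < (40 if sex == "male" else 50),                      # low HDL cholesterol
        systolic_bp >= 130 or diastolic_bp >= 85 or on_bp_treatment,    # high BP or treated hypertension
        fasting_glucose_mg_dl >= 110 or on_diabetes_treatment,          # high fasting glucose or treated diabetes
    ]
    return sum(components) >= 3


# Hypothetical example: obesity, triglyceride, HDL and glucose components are present (four of five).
print(meets_metabolic_syndrome_criteria(
    sex="female", waist_cm=95, tg_mg_dl=180, hdl_mg_dl=45,
    systolic_bp=128, diastolic_bp=82, on_bp_treatment=False,
    fasting_glucose_mg_dl=112, on_diabetes_treatment=False))  # -> True
```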
Caloric restriction may counter the negative consequences of metabolic syndrome; it has been investigated widely and has shown cross-species benefits in slowing the aging process and extending expected lifespan (Gillette-Guyonnet and Vellas, 2008). The brain benefits of caloric restriction may be attributable to induction of neurogenesis and enhancement of synaptic plasticity, potentially protecting the brain from age-related senescence and allowing increased recovery and compensation following neuronal injury or in the face of degenerative processes such as AD (Gillette-Guyonnet and Vellas, 2008). While the animal data are robust, little human data have been acquired to date on the role of caloric restriction in preventing AD.
People living in the Mediterranean region, especially Naples and the surrounding parts of Italy, have been found to have the highest life expectancies in the world and have also been found to have a lower incidence of AD. This observation has led many authorities to attribute these findings to dietary habits. The Mediterranean diet is extremely well known and popularly recommended as a preventive measure for AD and cognitive impairment (Feart et al., 2010;Sofi et al., 2010). Although there is no fixed recipe for this diet, generally speaking, most variations include generous servings of fruits, vegetables, whole grains, beans, nuts, and seeds, with low to moderate amounts of fish, poultry, and dairy products, and low amounts of red meat. Such diets have been shown to modulate inflammatory, oxidative stress-induced, regenerative, and cellular health processes that may play critical roles in modulating risk for AD. Olive oil forms an important component of this diet and is rich in monounsaturated fatty acids (MUFAs) and oleocanthal, the latter having been shown to inhibit fibrillization of tau protein, thus conferring potentially protective properties in AD (Monti et al., 2011). Epidemiologic studies have shown that higher adherence to a Mediterranean-style diet results in reduced mortality in patients with AD in a dose-responsive manner, with greater compliance being associated with greater benefit (Scarmeas et al., 2007). Research has also shown that greater consumption of, and higher compliance with, a Mediterranean diet was associated with a lower risk of AD (primary prevention) and slower progression of mild cognitive decline into AD (secondary prevention) (Scarmeas et al., 2006;Solfrizzi et al., 2011). The benefits of the Mediterranean diet have been attributed to the presence of MUFAs and PUFAs, in addition to the right mix of micro- and macronutrients that promotes neuronal health and minimizes AD risk. "Gold-standard" clinical trial data are not yet available for such dietary interventions, given the complexity of practical implementation. Additional considerations not answered by the epidemiologic data include possible effects of genetic background (ApoE status, as seen for other nutritional approaches), the duration of dietary modification needed before benefit may be evident, and confounding lifestyle factors that may influence the response to such dietary approaches or influence the outcome disease state and processes. Despite these caveats, the adage "You are what you eat" may hold true, and the health benefits of a Mediterranean-style diet are well accepted across multiple disciplines of medicine with no adverse potential or limitations for the general population. Only limited studies of other diets have been published to date, with no clear evidence emerging.
Unfortunately, as clinical interventions of total dietary intake and composition are limited by the complexity of the human diet, the best available data on dietary influences in the prevention of AD come from retrospective cohort and epidemiologic studies rather than from prospective clinical trials. This is in large part due to the many confounds inherent in the total dietary regulation required to establish causality and the potential benefit of prospective dietary modification. With these caveats in mind, it is still worthwhile to consider the emerging evidence that modification of dietary patterns, including caloric restriction and avoiding the current "Western" diet, may prove to be an effective strategy for the prevention of AD.
OTHER NUTRITIONAL SUPPLEMENTS EVALUATED IN HUMAN CLINICAL TRIALS
Apart from the nutritional substances mentioned above, a myriad of other natural supplements have potential to modify the development and progression of AD, although definitive evidence for efficacy is still lacking. Many of these agents are commonly used in traditional medicine culture in India, China, and other areas of Eastern Asia and have generated a great deal of interest in their potential therapeutic benefit for AD in Western society.
Huperzine A, derived from a traditional Chinese herb, has pharmacological actions as an acetylcholinesterase inhibitor (AChEI), among other properties, and has been evaluated in several small clinical trials that have suggested potential benefit in AD patients (Yang et al., 2013). In the laboratory, huperzine A has been shown to reduce amyloid plaque formation and abrogate cell death by altering neuronal iron content in animal models of AD (Huang et al., 2014). A recent multicenter clinical trial of huperzine A failed to meet primary endpoints for success, suggesting its potential benefits may be limited in human disease; however, the study suffered from several limitations, including the concomitant use of pharmacological AChEIs that may have diluted effects of the herb that would otherwise have been seen in drug-naïve populations (Rafii et al., 2011).
Ginkgo biloba, another Chinese herb, has also been studied for potential benefit in AD. It contains ginkgolide B, a platelet activating factor (PAF) antagonist, and has been used in stroke trials due to this property. A large scale primary and secondary prevention study analyzed the efficacy of Ginkgo biloba in preventing the development of, or slowing the progression of, mild cognitive impairment into AD and found no benefit (DeKosky et al., 2008;Snitz et al., 2009). Despite this failure, it continues to be a popular treatment for a variety of medical conditions and may yet hold promise for the treatment of vascular cognitive impairment and dementia related to other non-AD causes.
Resveratrol is a polycyclic aromatic compound (a natural plant polyphenol) found in varying concentrations in the skins of grapes, raspberries, and mulberries, and has been found to have powerful antioxidant properties (Pervaiz and Holme, 2009). Laboratory studies have shown multiple mechanisms of action, including function as a powerful antioxidant, decreased amyloid plaque deposition in animal models of AD, and inhibition of intracellular inflammatory pathways that predispose to cellular breakdown and cell death (Anekonda, 2006;Pervaiz and Holme, 2009). Perhaps the most interesting data of all link resveratrol to sirtuin pathways that are key mediators of cellular aging processes, suggesting it may promote longevity (Anekonda and Reddy, 2006). Animal studies have demonstrated that resveratrol improves the health and survival of mice fed a high calorie diet (Baur et al., 2006). Thus, resveratrol may not only influence AD pathways, but may also enhance metabolic activity, thereby reducing risks for obesity and cerebrovascular disease that can contribute to the development of dementia, in addition to reducing cellular aging processes (Baur et al., 2006). As age remains the major risk factor for AD, resveratrol is currently being tested in a multicenter clinical trial in mild to moderate AD patients, in addition to a widespread range of other age-related diseases 1 . Available safety data suggest that supplementation with resveratrol is relatively benign, although no current recommendations as to its usefulness can be made without more human data, including the results of the ongoing trial.
Turmeric is a spice used commonly in eastern cuisines and has been associated with antioxidant properties that have sparked interest in its role in treating or preventing cancer, aging, and AD. Curcumin, the active ingredient of turmeric, has been shown to have antioxidant scavenging properties, to increase glial fibrillary acidic protein expression in the hippocampi of mice, and to improve spatial memory (Fang et al., 2014). Curcumin exhibits powerful anti-inflammatory effects through its downregulation of proinflammatory transcription factors including nuclear factor-kappa B, signal transducer and activator of transcription-3, and Wnt/beta-catenin (Aggarwal, 2010). Curcumin has additionally been shown to directly reduce amyloid plaque formation and associated inflammation in the brains of AD mouse models. Two human trials have published or presented data to date, both showing excellent safety profiles of curcumin, but neither showing benefit in regards to clinical efficacy or influences on traditional AD biomarkers (Baum et al., 2008;Ringman et al., 2008). Several other human trials are currently underway 2 ; however, definitive data are not yet available on the influence of this nutritional agent in the prevention or treatment of AD in humans.
SUMMARY
A nutritional approach to preventing AD appears to be an innovative and safe strategy that may be extremely cost effective, allow ease of administration, and, importantly, serve as a socially acceptable intervention or adjunctive approach in the prevention and treatment of AD. Despite years of scientific, medical, and clinical advances in this area, much remains to be discovered and proven in terms of specific nutritional interventions for the prevention of AD. Promising agents such as vitamins, energy substrates, flavonoids, lipids, and modified diets functioning as anti-oxidants, metabolic enhancers, immune modulators, and direct disease-modifying agents await further investigation. To date, no definitive evidence for disease modification outside of animal and in vitro experiments exists, and yet human clinical data are beginning to suggest that such interventions deserve further study (Table 1).
While it is possible, given the wealth of retrospective and prospective human data available, that nutritional interventions for the prevention of AD may be effective, it is equally possible that they may serve only to supplement direct disease-modifying treatment and are, in themselves, ineffective at modulating the AD disease state. Human evolutionary dietary data do not take into account the extended lifespan seen in modern-day humans, and so evidence supporting an evolutionary reversion of diet may be misleading. Evidence from studies of omega-3 fatty acids like DHA and energy substrates like Axona® suggests that the disease-modulating effects of nutritional intervention may be genetically modulated (Reger et al., 2004;Quinn et al., 2010). It is possible that dietary modulation and nutritional supplementation serve only to bolster normal health mechanisms that are a natural deterrent of chronic health conditions such as AD, without possessing any discrete disease specificity. Future investigations in the area of nutritional and dietary prevention of AD will have to consider these potential confounds and overcome them if nutritional and dietary supplementation and modification are ever to become part of the clinical care paradigm for the prevention and treatment of AD.
The field of interventional nutrition in AD is expanding at an exciting pace, and the latest developments are being closely followed by researchers, clinicians, the public, and the lay media. Nevertheless, definitive clinical trials are lacking, secondary to limited funding opportunities, adherence to traditional empiric clinical methodologies that seek to investigate single mechanisms of disease pathogenesis and intervention, and the need to develop strategies that overcome basic issues of control populations and incorporate the often complex, multi-faceted interventional approaches that may be required to move the investigation of nutritional interventions for the prevention of AD forward.
Given the complexity of the field with regard to nutritional interventions for the prevention of AD, and the considerations and limitations of the data available to date, it is important to draft recommendations for advancing the field. Recommendations stemming from this review include: (1) Expanding research funding opportunities beyond those available from the National Center for Complementary and Alternative Medicine (NCCAM), to include funding from other NIH centers and to encourage state-of-the-art and "gold-standard" research from industry and private organizations; (2) Design of trial methodology and data analysis techniques that account for complexities in dietary patterns that may influence investigations of single nutraceutical agents or simple combinations of such agents, so that the desired goal of rigorous prospective scientific investigation of comprehensive dietary alterations in the prevention of AD becomes attainable; (3) Incorporation of longer trial periods reflecting our increased understanding of the extensive (10-20 year) prodromal period of AD where prevention may be most effective; and (4) Broad scientific discovery from biological samples derived from prospective clinical interventions focused on investigating the basic mechanisms underlying the potential beneficial effects of nutritional interventional strategies, including metabolomic, antioxidant, immunologic, neuroprotective, and cellular regenerative discoveries. The development of a fundamental framework incorporating such features is necessary for advancing the field of nutritional interventional strategies for the prevention and treatment of AD.
"year": 2014,
"sha1": "8fc549a0e739e790c29c27e3b3642e7d476d425b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2014.00282/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf1d30785f6fcd365e7438b1d5f5a3429d928d9b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
National health information systems for achieving the Sustainable Development Goals
ABSTRACT
Objectives Achieving the Sustainable Development Goals will require data-driven public health action. There are limited publications on national health information systems that continuously generate health data. Given the need to develop these systems, we summarised their current status in low-income and middle-income countries.
Setting The survey team jointly developed a questionnaire covering policy, planning, legislation and organisation of case reporting, patient monitoring and civil registration and vital statistics (CRVS) systems. From January until May 2017, we administered the questionnaire to key informants in 51 Centers for Disease Control country offices. Countries were aggregated for descriptive analyses in Microsoft Excel.
Results Key informants in 15 countries responded to the questionnaire. Several key informants did not answer all questions, leading to different denominators across questions. The Ministry of Health coordinated case reporting, patient monitoring and CRVS systems in 93% (14/15), 93% (13/14) and 53% (8/15) of responding countries, respectively. Domestic financing supported case reporting, patient monitoring and CRVS systems in 86% (12/14), 75% (9/12) and 92% (11/12) of responding countries, respectively. The most common uses for system-generated data were to guide programme response in 100% (15/15) of countries for case reporting, to calculate service coverage in 92% (12/13) of countries for patient monitoring and to estimate the national burden of disease in 83% (10/12) of countries for CRVS. Systems with an electronic component were being used for case reporting, patient monitoring, birth registration and death registration in 87% (13/15), 92% (11/12), 77% (10/13) and 64% (7/11) of responding countries, respectively.
Conclusions Most responding countries have a solid foundation for policy, planning, legislation and organisation of health information systems. Further evaluation is needed to assess the quality of data generated from systems. Periodic evaluations may be useful in monitoring progress in strengthening and harmonising these systems over time.
Strengths and limitations of this study
► To our knowledge, this is the first detailed multicountry assessment of national case reporting, patient monitoring and vital statistics systems.
► Given that this survey was administered electronically, there may have been differences in how respondents interpreted question and answer choices.
► Knowledge and experience of respondents may have varied from office to office.
► Given that the survey represents 15 countries globally, the results may not be globally representative.
► Given that survey respondents did not answer all questions, there are differences in the denominator across questions.

INTRODUCTION
Data should guide governments as they plan, budget, and act for health. The Sustainable Development Goal (SDG) for health, ensure healthy lives and promote well-being for all at all ages, requires data on disease transmission, service coverage and outcomes and causes of death (table 1). 1 These data can come from various sources including surveys, longitudinal studies and data systems. Given that surveys and longitudinal studies often are time-limited, require external resources and take time to design and administer, the role of systems in generating population disaggregated, geographically specific and timely data is becoming more important. 2 WHO has specified that key data sources for health information systems include individual records (such as case reports and disease registries), service records from health providers, civil registration and vital statistics, among others. 3 For the purposes of this survey, we honed in on three core systems used for disease identification, service provision and vital status monitoring. These include: (1) communicable disease case reporting from individual records, (2) patient monitoring from service records and (3) vital statistics derived from civil registration systems. Communicable disease case reporting is traditionally used to monitor trends in disease transmission across different geographic settings and among different populations as part of routine surveillance. 4 Patient monitoring can be used to monitor health service coverage, such as treatment for HIV, tuberculosis, childhood immunisations, among others as part of universal healthcare coverage. 5 Well-functioning civil registration and vital statistics (CRVS) systems produce data on registered births, deaths (including cause of death) as well as marriages, adoptions and divorces; public health authorities primarily focus on registration of births, deaths and causes of deaths for decision making. 6 For case reporting, many of the global norms and standards trace back to disease-specific reporting requirements, the Integrated Disease Surveillance and Response (IDSR) framework and to the International Health Regulations. 7 8 Patient monitoring, and other health information systems, are transitioning from paper-based to electronic-based systems. 9 The Statistical Commission of the United Nations provides comprehensive principles and recommendations for CRVS systems to achieve universal coverage, continuity, confidentiality and regular dissemination in order to be a dependable and primary data source for vital statistics. 10 Although WHO collates global health data in its Global Health Observatory, 11 to our knowledge there are few publications evaluating contributing systems in detail. 12
The objective of this article is to summarise the status of case reporting, patient monitoring and CRVS systems among a sample of low-income and middle-income countries.
METHODS
Survey design
The survey team, comprising global experts in informatics, surveillance and programmes, jointly developed a survey covering policy, planning, legislation and organisation of case reporting, patient monitoring and CRVS systems. This survey was primarily designed to assess the state of information systems that could potentially be leveraged for HIV-related clinical surveillance, monitoring progress towards meeting national and global goals and improving national responses. 13 The survey was piloted prior to full implementation through review by system-specific experts and staff working in country offices for content and usability of the survey tool. The survey was administered through a tool developed in Excel (Microsoft, Seattle, Washington, USA). The tool consisted of multiple-choice questions and text boxes through which respondents could elaborate on their selections (online supplementary tables S1-S3).
Definitions
For the purposes of establishing a common framework for administration of this tool, we developed definitions for case reporting, patient monitoring and CRVS systems:
► A functioning case reporting system routinely collects information on diagnosed disease-specific cases. These cases may be reported from health facilities or providers to a central level. At subnational and national levels, these data can be used to track epidemics and quantify the burden of disease in order to inform public health programmes. For example, some countries may use IDSR to report individual and aggregated newly diagnosed cases of communicable disease.
► Patient monitoring systems collect routine data from health facilities related to clinical patient management. Patient monitoring systems are often used to measure service coverage and quality. Data are often used to assess the health sector response from the facility to the national level.

The questionnaire was administered to key informants in 51 Centers for Disease Control (CDC) country offices (online supplementary table S4). CDC country staff overseeing strategic information (encompassing health information systems, surveillance and monitoring and evaluation) were selected as key informants and were contacted by email to complete the tool. One staff member was contacted per country. Respondents were encouraged to liaise with their national government counterparts for questions to which they did not know the answer. Questions that the counterpart did not know, and for which they were unable to liaise with their counterpart, were left blank. We administered the questionnaire via email in January 2017. Up to three follow-up emails were sent to non-respondents from February to May 2017. The results were then reviewed with government counterparts for validity.
Data management and analysis
Country key informants entered their responses directly into the Excel tool. All country files were cleaned and merged into a Stata database (StataCorp, College Station, Texas, USA). The Stata database was then exported to Excel (Microsoft, Redmond, Washington, USA) for analysis. Any response that was left blank or indicated 'not applicable' was excluded from the denominator when percentages were calculated. With countries acting as our unit of measure, we had limited statistical power and chose not to conduct statistical tests but rather to describe the results of the survey using proportions. Since key informants left different questions blank or indicated 'not applicable', most of the descriptive analyses have different denominators. Tableau (Tableau, Seattle, Washington, USA) was used for creating maps with OpenStreetMap images while Excel was used to create descriptive tables. The United Nations Human Development Index was used to summarise life expectancy, mean years of schooling and gross national income per capita. 14 World Bank thresholds were used to classify countries as low, lower-middle or upper-middle income. 15

Patient and public involvement
This survey included countries rather than patients as a unit of measure. Patients and the public were not involved in the design or planning of the study.

RESULTS
Case reporting systems
(70%) with a small proportion being linked to CRVS systems (10%). These findings and others are presented in table 3. Key informants from 13 of 15 responding countries (87%) reported an electronic component in the country's case reporting system, and 8 of these 13 (62%) countries collect data on individual cases (figure 1). Eleven of the 13 (85%) responding countries reported that the coverage of the case reporting system exceeded 75% (figure 1).
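The shifting denominators reported throughout the results reflect the exclusion rule described under data management and analysis; the short sketch below illustrates that convention with hypothetical responses (the variable names and data are illustrative, not the study's dataset):

```python
def summarise(responses):
    """Percentage of 'yes' answers among valid responses; blanks and 'not applicable'
    are excluded from the denominator, so each question can have a different denominator."""
    valid = [r for r in responses if r not in (None, "not applicable")]
    if not valid:
        return None
    yes = sum(r == "yes" for r in valid)
    return round(100 * yes / len(valid)), yes, len(valid)

# Hypothetical responses from 15 countries to two questions.
moh_coordinates_case_reporting = ["yes"] * 14 + ["no"]
domestic_funding_case_reporting = ["yes"] * 12 + ["no"] * 2 + [None]   # one informant left this blank

print(summarise(moh_coordinates_case_reporting))   # (93, 14, 15), i.e. 93% (14/15)
print(summarise(domestic_funding_case_reporting))  # (86, 12, 14), i.e. 86% (12/14)
```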
Patient monitoring systems
Key informants from 13 of 14 (93%) countries that responded to the patient monitoring systems section of the survey indicated that the Ministry of Health was responsible for patient monitoring. The primary use of patient monitoring data was to monitor service coverage (reported by 12 of 13 countries, 92%); however, 8 of 13 (62%) and 10 of 13 (77%) reported using patient monitoring data for service quality improvement and commodity forecasting, respectively. Multilateral (9 of 12 countries, 75%) and bilateral (9 of 12 countries, 75%) financial support was more common for patient monitoring compared with case reporting. In 5 of 12 countries (42%), the same patient monitoring system was used in both the private and public health sectors. Two of 14 (14%) countries used patient monitoring for social health insurance reimbursement. Patient monitoring systems were linked to case reporting systems (43%), laboratory information systems (71%), vital statistics (43%) and health insurance systems (14%). These findings and others are presented in table 4. Key informants from 11 of 12 (92%) responding countries reported an electronic component in the country's patient monitoring system, and 7 of these 11 (64%) countries collect data on individual patients (figure 2). Seven of the 11 (64%) responding countries reported that the coverage of the patient monitoring system exceeded 75% (figure 2).
CRVS systems
Key informants from 8 of 15 (53%) countries that responded to the CRVS systems section of the survey indicated that the Ministry of Health was responsible for CRVS, in 7 of 15 responding countries (47%) the Ministry of Interior (or similar) was responsible for CRVS, and in 4 of 15 countries (27%) the Ministry of Justice was responsible for CRVS. There were some countries in which multiple Ministries were responsible for CRVS. There was legislation mandating birth and death registration in 13 of 14 (93%) countries. Birth and death data were used to quantify service need (7 of 12 countries, 58%), analyse cost-effectiveness (6 of 12 countries, 50%), measure impact of disease programmes (7 of 12 countries, 58%) and to measure the national burden of disease (10 of 12 countries, 83%). Birth and death registration was required to access government services in all 15 responding countries (100%). These findings and others are presented in table 5. Key informants from 10 of 13 responding countries (77%) reported an electronic component for birth registration, and 9 of these 10 (90%) countries collect data on individual births (figure 3). Key informants from 7 of 11 responding countries (64%) reported an electronic component for death registration, and 6 of these 7 (85%) countries collect data on individual deaths (figure 4). Key informants from 8 of 12 (67%) reported that the country used the tenth revision of the International Classification of Disease (ICD-10) for reporting the cause of death while 2 of 12 (17%) responding countries indicated that the vital statistics system used verbal autopsy to ascertain the cause of death (figure 5). Eight of 15 (53%) and 7 of 15 (47%) responding countries reported that the coverage of the vital statistics system registering births and deaths, respectively, exceeded 75% (figures 4 and 5, respectively).
DISCUSSION
Case reporting, patient monitoring and CRVS systems were widely implemented and used in responding countries. These systems generate critical data for public health planning, budgeting and action. There was funding for these systems from national budgets, bilateral arrangements and multilateral mechanisms, suggesting some level of political commitment for their development and implementation. Many countries also reported use of electronic and individual-level data, suggesting that more granular and accessible data may be available for end-users. Overall, these are encouraging trends which will hopefully continue in order to accelerate progress towards meeting the SDGs. Importantly, these results are indicative of systems interpreted by key informants as meeting the survey definitions and do not speak to the breadth of coverage relative to specific diseases or interoperability. The majority of responding countries had >75% geographic coverage of their case reporting system. Moreover, most responding countries had an electronic component to their system. Electronic systems could help store increased volumes of data over time, store more detailed data prospectively and provide more rapid access to such data compared with paper-based systems. 16 Understanding the number of diagnosed cases of diseases can directly inform programme response to contain transmission. 8 All responding countries used case reporting data to achieve this. Future qualitative studies may help understand the ways in which case reporting data are used to contain disease transmission. For example, in Uganda a command centre was created to house an interdisciplinary rapid response team to receive, evaluate and distribute information as the centre of communication and coordination of response operations. 17 Many diseases require their own diagnostic commodities as part of national diagnostic algorithms. For example, HIV requires combinations of two or three rapid tests to diagnose each case. 18 Approximately half of responding countries used case reporting data for commodity forecasting. As observed with medicines, central procurement, informed by case reporting data, could provide cost savings and increase availability of diagnostics at service delivery sites. 19 The primary use of data from patient monitoring systems by responding countries was to monitor coverage of services. This is likely due to the importance of monitoring the coverage of key health sector interventions for reproductive health, communicable diseases and national immunisation schedules. 5 Countries may also have disease-specific patient monitoring systems. Many countries are embarking on the development of national health insurance schemes as part of universal healthcare coverage. 20 Given the wide geographic scale, and use of individual-level electronic data in many settings, there may be an opportunity to leverage these systems for processing claims and co-payments for services rendered. 21 Based on this survey, some countries are using the same system for social health insurance while others have linked the patient monitoring system to the health insurance system. Lessons learnt from each of these scenarios should be further examined and documented.
Overall, more countries reported systems for registering birth events relative to deaths. This is consistent with globally available data suggesting that birth registration rates are higher than death registration rates. 6 ICD-10 remains the global norm for classifying the cause of death within the health sector. 22 In this survey, the majority of responding countries reported use of ICD-10 for classifying the cause of death. Death registration, and methods to ascertain the cause of death, are more heterogeneous in communities. Verbal autopsy has shown promise as an option to incorporate within CRVS systems when medical certification of cause of death is not possible 23 and many countries reported using this approach. Vital statistics were required for a wide range of government services. The most common government service requiring birth registration was school enrolment; this requirement has been shown to be associated with higher coverage of national birth registration rates. 24 25 The most common requirement for death registration was the need for a burial permit. This requirement may also be important in improving national death registration rates. 26 27
There were several cross-cutting issues relevant to case reporting, patient monitoring and vital statistics systems. For example, there were a range of approaches for identification of people in systems. These included using national identification, health identification and system-generated identification. Across all systems, national identification was used most often. Given the global momentum behind achieving SDG target 16.9, achieving free and universal legal identity by 2030, use of national identification may increase further with time. 28
Security measures to protect data from unauthorised use have emerged as a critical issue in light of the transition to electronic data systems. 29 In this survey, physical barriers, software barriers, legal barriers, encryption and use of unique identifiers were the security measures used. Software and physical barriers were most common, suggesting opportunities for using encryption, legal protection measures and unique identifiers. Unique identifiers can offer complementary protections by limiting the number of locations, both paper and electronic, where names are used, but they do carry additional risks, such as re-identification of an identity from an available data source that uses the same unique identifier.
Linking different information systems can provide improved inferences for patients longitudinally over their life course. 30 The majority of case reporting systems were linked to patient monitoring and laboratory information systems, with a small proportion being linked to vital statistics. The majority of patient monitoring systems were linked to case reporting and laboratory information systems, with a minority linked to vital statistics and health insurance systems. Linking systems with health insurance may have implications for improved data quality, since the data will directly affect staff remuneration for services rendered. 31
One of the major limitations of this survey was the low response rate. Specifically, there were limited responses from the Americas, Central Asia and Eastern Europe. These regions comprise middle-income countries that may have a different health information system context. Reducing the number of questions and administering the survey later in the year may help improve the number of respondents in the future. We relied on the knowledge and experience of participating staff members, which may vary from office to office. Although attempts were made to extract missing information, and to verify provided information with government counterparts, there were still questions without answers from some respondents. This may have been because they had less developed systems or because they did not know the answer at the time they completed the survey. Requiring all questions to be answered could improve our confidence in the final estimates. Moreover, since we conducted this survey electronically, there may have been differences in the way questions were interpreted across different key informants. This could have affected their answer selection. For example, linkage could be interpreted as interoperability across different systems or as producing summary information for the same location and time, while coverage could have used health facilities, regions or other measures as a denominator. Including more definitions in the survey tool could establish common terminology during future iterations of this survey. The electronic format of the survey also meant that there were limited opportunities to qualify answers.
For example, although we collected information on whether individual or aggregated data were available in electronic systems, we did not describe pathways of data flow. In the future, use cases, success stories and lessons learnt may be based on specific answers during subsequent qualitative interviews of stakeholders. During the implementation of this survey, CDC placed additional field staff in countries through its Division of Global Health Protection. In the future, it may be worth reaching out to key informants in CDC countries irrespective of their programme focus to have the widest reach. Some important aspects of health information systems, such as interoperability, standards and required workforce competencies, were not covered in this survey and may merit further exploration. Since some countries may manage civil registration and vital statistics separately there is potential for confusion from key informants on how to respond to questions encompassing CRVS holistically. Finally, evaluating the quality of data generated from systems requires different methods that should be evaluated as part of future assessments.
To our knowledge, this is the first detailed assessment of national case reporting, patient monitoring and vital statistics systems. Most responding countries have a solid foundation for policy, planning, legislation and organisation of health information systems. There are opportunities to link systems, strengthen security measures for electronic data and use data more effectively. Periodic evaluations may help understand progress in strengthening and harmonising these systems over time to achieve the SDGs.
Map disclaimer The depiction of boundaries on the map(s) in this article does not imply the expression of any opinion whatsoever on the part of BMJ (or any member of its group) concerning the legal status of any country, territory, jurisdiction or area or of its authorities. The map(s) are provided without any warranty of any kind, either express or implied.
Competing interests None declared.
Patient consent for publication Not required.
Ethics approval The Office of Science from the Center for Global Health at CDC deemed this survey to not require CDC Institutional Review Board review and approved the survey protocol for implementation.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement Requests for de-identified data should be addressed to the corresponding author.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http:// creativecommons. org/ licenses/ by-nc/ 4. 0/. | 2019-05-19T13:02:53.867Z | 2019-05-16T00:00:00.000 | {
"year": 2019,
"sha1": "b75ec6c083e2b25bff3fa495f83c13f6d5c04920",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/9/5/e027689.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b75ec6c083e2b25bff3fa495f83c13f6d5c04920",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
102487773 | pes2o/s2orc | v3-fos-license | Spatio-Temporal Analysis of Vegetation Dynamics as a Response to Climate Variability and Drought Patterns in the Semiarid Region, Eritrea
There is a growing concern over change in vegetation dynamics and drought patterns with the increasing climate variability and warming trends in Africa, particularly in the semiarid regions of East Africa. Here, several geospatial techniques and datasets were used to analyze the spatio-temporal vegetation dynamics in response to climate (precipitation and temperature) and drought in Eritrea from 2000 to 2017. A pixel-based trend analysis was performed, and a Pearson correlation coefficient was computed between vegetation indices and climate variables. In addition, vegetation condition index (VCI) and standard precipitation index (SPI) classifications were used to assess drought patterns in the country. The results demonstrated that there was a decreasing NDVI (Normalized Difference Vegetation Index) slope at both annual and seasonal time scales. In the study area, 57.1% of the pixels showed a decreasing annual NDVI trend, while the significance was higher in South-Western Eritrea. In most of the agro-ecological zones, the shrublands and croplands showed decreasing NDVI trends. About 87.16% of the study area had a positive correlation between growing season NDVI and precipitation (39.34%, p < 0.05). The Gash Barka region of the country showed the strongest and most significant correlations between NDVI and precipitation values. The specific drought assessments based on VCI and SPI summarized that Eritrea had been exposed to recurrent droughts of moderate to extreme conditions during the last 18 years. Based on the correlation analysis and drought patterns, this study confirms that low precipitation was mainly attributed to the slowly declining vegetation trends and increased drought conditions in the semi-arid region. Therefore,
Introduction
Climate is a prime driver of vegetation dynamics and it dictates the distributions of plant species and vegetation [1]. Vegetation change is affected by precipitation and temperature, and by recurrent droughts, which can be impacts of climate change and global warming [2]. With increases in climate variability and a warming trend in Sub-Saharan Africa, and particularly in East Africa, there have been changes towards extreme rainfall events and an increase in seasonal mean temperature [3]. Such trends can bring about an impact on vegetation change and agricultural production, especially in countries like Eritrea, where the majority of the population depends on rain-fed agriculture. There is a need to develop proper drought monitoring methods in order to mitigate and adapt to climate change impacts. Drought monitoring usually has many aspects, such as hydrological, meteorological, and agricultural aspects, and analyzing vegetation dynamics and climate variables together can lead to a better understanding of interconnected and correlated drought-related parameters [4].
The use of remote sensing and geographic information systems (GIS) technology is increasing globally in the identification of patterns in spatial data and vegetation dynamics based on time series of satellite imagery [5,6]. The traditional methods of mapping large areas, mainly based on field surveys, are not cost-effective for tracking vegetation changes, but geospatial technology provides better means of studying vegetation dynamics [7]. The available remotely sensed time series of vegetation indices have been used to analyze the trends of vegetation change and for drought assessment [5].
Since the 1980s, many researchers have utilized the Normalized Difference Vegetation Index (NDVI) as an indicator of vegetation growth and status, and to identify the spatial density and phenology of vegetation. Moreover, NDVI has been used to explore the relationship between NDVI and climate factors in different geographic areas and ecosystems [8]. One of the most commonly used NDVI datasets for vegetation dynamics analysis and drought monitoring is the Moderate Resolution Imaging Spectro-radiometer (MODIS) products, which can give continuous and relatively high-resolution images as time series [9,10].
Different indices have been developed to quantify drought in relation to vegetation dynamics. The MODIS-based Standard Vegetation Index (SVI) has recently been used to support drought monitoring, while the Standardized Precipitation Index (SPI) is commonly used to characterize meteorological drought over a range of time scales [11,12]. The Vegetation Condition Index (VCI), which can be derived from NDVI or the Enhanced Vegetation Index (EVI), is also one of the most utilized indices for drought monitoring [13]. However, most studies using such indices for vegetation dynamics and drought trend analysis are carried out at global and regional scales, which is not an appropriate scale for decision makers to take proper mitigation measures. The integration of remotely sensed data with climate datasets, e.g., Climate Engine powered by Google Earth Engine, has provided very valuable spatial datasets to researchers and a convenient platform to further reprocess and extract useful spatial information at convenient scales. Indicators such as the SPI and NDVI can generate the best correlation-anticipation relationships for early signs of drought impacts and for drought monitoring purposes, such as identifying hot spots at local scales [14,15].
Eritrea is located in the Horn of Africa in the eastern part of the Sahel zone. It is climatologically under the influence of the Sahara Desert and the Arabian Desert. Climate variability in Eritrea is enormous [3], and there is a high rate of vulnerability to the effects of climate change because the population depends mainly on rain-fed agriculture [16]. Eritrea has been vulnerable to climate change, especially over the past 60 years, as the temperature has risen by approximately 1.7 °C. This has had a tremendous impact on biodiversity and the overall loss of resilience of the ecosystem [17]. Moreover, the country is prone to climate variability and recurrent droughts; almost 50% of the country receives less than 300 mm of precipitation annually, and the amount varies from year to year [17]. However, there are no studies carried out to investigate how intensively climate variability has affected the terrestrial vegetation and ecosystem services.
Natural vegetation in Eritrea is sparse and is characterized by acacia trees (Acacia spp.) and bushes. Most of the vegetation types are semi-arid, with either cultivated annual plants or deciduous perennials produced during the short rainy seasons. The country was endowed with a variety of indigenous tree species which have been devastated due to the recent war, land use changes, and recurrent droughts. Deforestation is one of the most serious environmental consequences caused by land use change and climate change, due to prolonged drought periods and erratic rainfall patterns [18]. Different adaptation and intervention measures, such as soil and water conservation, are practiced, but as stated in the report of the Ministry of Land, Water and Environment (MoLWE, 2015), mitigation and adaptation measures are subject to continuous scrutiny. In addition, research applying time series data is required to quantify the outcomes of the interventions. Agricultural production in Eritrea is low, which can be greatly attributed to the fragile ecosystems with frequent droughts and seasonal temperature increases [19]. The United Nations Office for Disaster Risk Reduction (EM-DAT) stated that drought events covered 50% of all natural hazards between 1990 and 2014 in Eritrea [20]; drought is thus the main natural hazard. The drought conditions had been exacerbated by war and political conflict, as the country experienced thirty years of prolonged war for independence and two years of border conflict. Population growth and re-settlement programs in forested areas of the Anseba and Gash Barka regions could have increased pressure on land cover changes in the country.
Apart from basic vegetation cover maps of the Afri-Cover Project (1999) and land cover maps of the Food and Agricultural Organization (2009), the availability of spatial data is limited. Thus, there is a great demand for spatial data, which can provide time series of vegetative trends and drought vulnerability maps for decision makers to properly manage natural resources, to monitor drought impacts, and to plan sustainable ecosystem functioning. Moreover, there are plant species that are on the verge of extinction because of climate change and invasive exotic plants like Prosopis (Temri mussa). The occurrence of droughts needs to be verified through spatial analysis of historical datasets of vegetation changes and climatic variables.
The overall objective of this research is to identify the causes of and the relationships between vegetation dynamics, climate variability, and drought conditions in Eritrea. The specific objectives are: (i) to analyze spatio-temporal trends of vegetation dynamics in the last 18 years based on climate and remote sensing data; (ii) to identify the main causes of vegetation change in relation to climatic factors (temperature and precipitation); and (iii) to assess drought patterns using vegetation condition and standard precipitation indices.
Study Area
Eritrea consists of three major terrestrial geographic regions: the central highlands, the eastern lowlands, and the western lowlands. The dominant eco-geographical regions of global vegetation classifications are the Sahelian Acacia Savanna and the Ethiopian Xeric grassland and shrublands (Figure 1a). The World Food and Agricultural Organization (FAO) in 1997 identified six agro-ecological zones in Eritrea: Semi-Deserts, Arid Lowlands, Moist Lowlands, Moist Highlands, Arid Highlands, and Sub-Humid Highlands (Figure 1b). A large part of the country falls under the category of Arid Lowlands and Semi-Desert zones.
Figure 2a shows the administrative regions of the country and Figure 2b represents the land cover types extracted from the European Space Agency (2016) imagery dataset. The most dominant land cover types are Bare Areas and Cropland, accounting for 38% and 18% of the land surface, respectively. The highland plateau of Eritrea descends gently to the western lowlands, but quite rapidly to the eastern lowlands, and many places are characterized by undulating hills and steep slopes. The central highlands reach up to 3000 m above sea level, while the Danakil depression found in the Southern Red Sea region is more than 100 m below sea level, being one of the deepest and hottest places in the world (Figure 3). The topographic variation affects the rainfall and temperature regimes of the country, which in turn influences the spatio-temporal variability of vegetation dynamics.
Eritrea's climate is classified as Hot Semi-Arid Climate (BSh) or Hot Desert Climate (BWh) in Köppen's climate classification. Mean temperature ranges from 18 °C in the highlands to 35 °C in the lowlands and is the main factor in defining the Eritrean agro-ecological zones [21]. Rainfall in most parts of Eritrea is low and highly variable; according to the MoLWE report [17], it ranges from 50 mm along the coastal area to 1000 mm in the eastern escarpment, while 40% of the country received precipitation between 300-600 mm in 2015. The rainfall is inadequate to support agricultural production, especially during drought years. Based on the Climate Hazards Group InfraRed Precipitation with Station data of daily satellite extractions (CHIRPS daily), the average annual precipitation between 2000 and 2017 in Eritrea is 334 mm, while almost 50% of the country receives less than 300 mm.
Datasets
An 18-year long time series of NDVI and EVI data at 250-m resolution from the Moderate Resolution Imaging Spectro-radiometer (MODIS) Terra 16-day data (LPDAAC NASA products) was exploited in our study. Six products from the MOD13Q1 version were used for quantifying the Vegetation Condition Index (VCI) and for trend analysis. The products provide detailed measurements of pixel values at 16-day time steps, like the MODIS NDVI, while the EVI is a supplement, as it can detect the reflected light in areas of high biomass and corrects distortions of background scattering, which is applicable in drought-related studies [22]. VCI for the multi-annual growing seasons was also generated from the NDVI dataset for drought monitoring purposes. VCI was calculated in ArcGIS using local cell statistics of the Spatial Analyst tool for each main growing season in a year (please refer to Section 2.3.3 for details).
We applied the Climate Engine data source, powered by the Google Earth Engine, as the main source of both remote sensing and climate datasets. The data were filtered and spatially processed into easily downloadable file formats, such as GeoTIFF and comma-separated values, through the web browser (http://climateengine.org/app/). Climate Engine has revolutionized the service of providing processed spatial datasets and eased computational barriers by using Google's parallel cloud computing platform [23].
The datasets used from the Climate Engine include the MODIS datasets, SPI, and precipitation datasets of CHIRPS daily at 0.05° resolution. CHIRPS data were helpful in extracting multi-annual and growing season averages of the total precipitation for the time series trend analysis and drought monitoring purposes. It is recommended to use precipitation datasets, such as CHIRPS, when working with drought assessment and monitoring activities [24].
Multi-annual and seasonal mean temperatures from the Climate Forecast System Reanalysis (CFSR) datasets (1/5 and 3/10 degrees) of the National Centers for Environmental Prediction were also accessed and re-processed. Moreover, to report and aggregate the results of both the MODIS vegetation indices and the climatic datasets, land cover datasets at 20-m resolution from the European Space Agency (ESA, 2016) were re-sampled, processed, and used. Shapefiles of administrative units and agro-ecological zones of Eritrea were also used to integrate data and to summarize outputs as zonal statistics tables.
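For readers who want to reproduce the pre-processing outside ArcGIS, the sketch below shows one possible way to stack yearly Climate Engine GeoTIFF exports into an array and derive the multi-annual mean NDVI per pixel. The file names and directory layout are illustrative assumptions, not the authors' actual workflow.

```python
# Minimal sketch (not the authors' workflow): stack yearly growing-season NDVI
# GeoTIFFs exported from Climate Engine into a (years, rows, cols) array and
# compute the multi-annual mean per pixel. File names and paths are assumptions.
import glob

import numpy as np
import rasterio

ndvi_files = sorted(glob.glob("data/ndvi_growing_season_*.tif"))  # hypothetical layout, one file per year

layers = []
for path in ndvi_files:
    with rasterio.open(path) as src:
        band = src.read(1).astype("float32")
        if src.nodata is not None:
            band[band == src.nodata] = np.nan  # mask fill values
        layers.append(band)

ndvi_stack = np.stack(layers)                  # shape: (n_years, rows, cols)
multi_annual_mean = np.nanmean(ndvi_stack, axis=0)
print("years:", ndvi_stack.shape[0], "country-wide mean NDVI:", float(np.nanmean(multi_annual_mean)))
```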
Linear Regression Model
A linear regression model was applied to detect trends in vegetation greenness change using the main growing season and annual datasets from Climate Engine from 2000 to 2017. A pixel-based linear regression model (LRM) was applied to analyze the NDVI spatial trend during the study period. The slope was calculated from mean values of NDVI as the dependent variable and time as the independent variable. In order to analyze MODIS NDVI change through time, a pixel-based ordinary least-squares linear regression slope, as proposed and used by previous authors, was applied, as given in Equation (1) [25-27]:
Slope = \frac{n \sum_{i=1}^{n} X_i Y_i - \left(\sum_{i=1}^{n} X_i\right)\left(\sum_{i=1}^{n} Y_i\right)}{n \sum_{i=1}^{n} X_i^2 - \left(\sum_{i=1}^{n} X_i\right)^2}    (1)
where Slope represents the changing trend in MODIS NDVI values, n is the sample number (18 years), Y_i is the value of the dependent variable, and X_i is the time as the independent variable, both representing the ith year.
Slope values close to zero mean there was not much variation in vegetation greenness through time, slope values greater than zero show an increasing vegetation trend over time, and the trend is decreasing if the slope values are less than zero. The significance of the trend was tested using an F-test (coeff function) and a p-value of less than 0.05 (95% confidence level) in MATLAB R2016a software for each pixel. As originally stated by Weatherhead [28] and later used by Eckert et al. and Jiang et al. [26,29], an equivalent NDVI change based on trend values and p-value thresholds was used to re-categorize the pixel values into significant decrease, non-significant decrease, significant increase, and non-significant increase.
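As a hedged illustration of the per-pixel trend step, the following sketch fits the least-squares slope of Equation (1) at each pixel and re-categorizes the result with the 0.05 significance threshold. The paper performed this step in MATLAB; this NumPy/SciPy version is only an assumed equivalent, not the authors' code.

```python
# Sketch of the per-pixel NDVI trend (Equation 1): ordinary least-squares slope of
# NDVI against year plus a two-sided p-value for the slope, then the four trend
# categories used in the paper. Illustrative only; not the original MATLAB routine.
import numpy as np
from scipy import stats

def ndvi_trend(ndvi_stack):
    """ndvi_stack: (n_years, rows, cols) array; returns (slope, p_value) per pixel."""
    n_years, rows, cols = ndvi_stack.shape
    slope = np.full((rows, cols), np.nan)
    p_value = np.full((rows, cols), np.nan)
    years = np.arange(n_years)
    for r in range(rows):
        for c in range(cols):
            y = ndvi_stack[:, r, c]
            if np.isnan(y).any():
                continue
            res = stats.linregress(years, y)   # least-squares fit per pixel
            slope[r, c] = res.slope
            p_value[r, c] = res.pvalue
    return slope, p_value

def categorise(slope, p_value, alpha=0.05):
    """Re-categorise trends: significant/non-significant decreases and increases."""
    cats = np.full(slope.shape, np.nan)
    cats[(slope < 0) & (p_value < alpha)] = 1    # significant decrease
    cats[(slope < 0) & (p_value >= alpha)] = 2   # non-significant decrease
    cats[(slope > 0) & (p_value >= alpha)] = 3   # non-significant increase
    cats[(slope > 0) & (p_value < alpha)] = 4    # significant increase
    return cats
```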
Pearson Correlation Coefficient
In order to give a broader picture of the correlation and spatial patterns between vegetation indices and climate factors, a pair-wise Pearson Correlation Coefficient (PCC) was first computed using R software to measure the linear association among the variables, including NDVI, EVI, temperature, and precipitation, for both annual and growing season months. PCC plots were generated from the mean values of the vegetation indices and climatic variables to show distributional maps using R software; the NDVI gave higher output values than EVI in the study area, and as such NDVI values were used to calculate the trends and correlations.
To understand in detail the role of climatic factors in vegetation dynamics and to depict spatial relationships through time, a pixel-based PCC between MODIS NDVI and the climate variables (precipitation and temperature) was computed using the commonly used approach, as shown in Equation (2) [25,30,31]:
r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{X})(y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{X})^2 \sum_{i=1}^{n} (y_i - \bar{Y})^2}}    (2)
where r_{xy} is the PCC of the x and y variables, y_i is the dependent variable representing the mean NDVI of the ith year, and x_i is the independent variable representing either the average total precipitation or the mean temperature of the ith year. \bar{Y} is the average NDVI for the study period (18 years), and \bar{X} is the mean total precipitation or temperature for the study period. The values of the PCC extend from negative to positive correlation (−1 to +1). Time series spatial patterns for the study period were computed and correlation coefficient maps were generated in MATLAB and further processed in ArcGIS software. The PCC was evaluated using an F-test (CORRCOEF function) at each pixel, and a p-value less than 0.05 was considered statistically significant. The climatic raster datasets were resampled using the bilinear method and reprocessed to match the 250-m resolution of the MODIS vegetation indices. Some pixels with extremely low precipitation values were marked as no-data values when extracting the correlation coefficients in both periods, as these areas represent part of the Danakil depression.
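The per-pixel correlation step can be illustrated in the same spirit. The sketch below computes r and its p-value between an NDVI stack and a resampled climate stack, skipping the no-data pixels mentioned above; it is an assumed Python equivalent of the MATLAB CORRCOEF computation, not the authors' code.

```python
# Sketch of the per-pixel Pearson correlation (Equation 2) between an NDVI stack and
# a climate-variable stack (precipitation or temperature), both already resampled to
# the same 250-m grid. Illustrative equivalent of the paper's MATLAB step.
import numpy as np
from scipy import stats

def pixel_correlation(ndvi_stack, climate_stack):
    """Both stacks: (n_years, rows, cols); returns (r, p) arrays per pixel."""
    assert ndvi_stack.shape == climate_stack.shape
    _, rows, cols = ndvi_stack.shape
    r_map = np.full((rows, cols), np.nan)
    p_map = np.full((rows, cols), np.nan)
    for r in range(rows):
        for c in range(cols):
            y = ndvi_stack[:, r, c]
            x = climate_stack[:, r, c]
            if np.isnan(y).any() or np.isnan(x).any():
                continue                      # skip no-data pixels (e.g. the Danakil mask)
            if x.std() == 0 or y.std() == 0:
                continue                      # correlation undefined for constant series
            r_map[r, c], p_map[r, c] = stats.pearsonr(x, y)
    return r_map, p_map
```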
Vegetation Condition Index and Standard Precipitation Index
VCI is calculated based on inputs like NDVI and EVI, and different authors have used either NDVI or EVI as a base for calculating VCI in Equation (3) [32-35]. VCI is a pixel-based analysis for finding the vegetation condition at the specific location of the pixel by considering the mean of the multi-annual variability and the minimum and maximum variability of the vegetation index.
VCI = \frac{NDVI_i - NDVI_{min}}{NDVI_{max} - NDVI_{min}} \times 100    (3)
where NDVI_i represents the mean vegetation index value for the main growing season in a certain year, and NDVI_{min} and NDVI_{max} are the multiple-year minimum and maximum NDVI values calculated for each pixel of the main growing season from 2000 to 2017. As proposed by many authors [33-35], we applied a threshold of 35% to classify the VCI. NDVI drought classifications of different levels for the growing season, from a maximum of 35% to a minimum of 0%, were extracted for each year of the observation period (Table 1). In order to specifically understand the drought conditions and vegetation development, more emphasis was given to the main season of 2017, which was mapped in detail and in comparison to the average drought conditions of the last 17 years based on MODIS NDVI datasets. Moreover, time series VCI maps of all the years were generated and interpreted to identify areas of drought vulnerability at a pixel level of analysis.
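A minimal sketch of the VCI computation and classification is given below. The exact class boundaries of Table 1 are not reproduced in this text, so the bins used here below the 35% threshold are assumptions, flagged in the comments; only the formula and the 35% cut-off follow the paper.

```python
# Sketch of the Vegetation Condition Index (Equation 3) and a drought classification
# below the 35% threshold. The sub-35% class boundaries are assumptions, since Table 1
# is not reproduced in this text.
import numpy as np

def vci(ndvi_stack):
    """ndvi_stack: (n_years, rows, cols) growing-season NDVI; returns VCI in percent."""
    ndvi_min = np.nanmin(ndvi_stack, axis=0)
    ndvi_max = np.nanmax(ndvi_stack, axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        return 100.0 * (ndvi_stack - ndvi_min) / (ndvi_max - ndvi_min)

def drought_class(vci_year):
    """Map one year's VCI layer to drought severity classes (assumed bins)."""
    classes = np.full(vci_year.shape, np.nan)
    classes[vci_year >= 35] = 0                        # no drought (paper's 35% threshold)
    classes[(vci_year >= 20) & (vci_year < 35)] = 1    # moderate drought (assumed bin)
    classes[(vci_year >= 10) & (vci_year < 20)] = 2    # severe drought (assumed bin)
    classes[vci_year < 10] = 3                         # extreme drought (assumed bin)
    return classes
```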
SPI is used to determine meteorological drought, but it can be helpful for finding parallel drought patterns with VCI. It is one of the most commonly used indices for characterizing drought dynamics. We applied similar drought categories for five classes, with a modification in the lower limit, as in a previous study [36]. An aggregated SPI dataset is an index recommended by the World Meteorological Organization (WMO) for agricultural drought purposes [37].
SPI quantifies, in units of standard deviation, how far observed values depart from the historical mean, where positive and negative values show deviation from the median precipitation. We aggregated and used four-month mean SPI values (SPI-4) from the CHIRPS datasets for the same growing season of all years in order to make a pixel-based analysis, along with their spatial extents. The values below zero, ranging from mild to extreme drought conditions, were compared and classified for the growing season of each year (Table 2).
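The SPI-4 classification step can be sketched in the same way. Since Table 2 is not reproduced here, the class boundaries below follow the conventional SPI drought categories and should be read as assumptions rather than the paper's exact limits.

```python
# Sketch of classifying a growing-season SPI-4 layer into drought categories. The SPI
# grids come pre-computed from Climate Engine; only values below zero are classified,
# as in the paper. Class boundaries follow the conventional SPI scheme (assumed here).
import numpy as np

def spi_drought_class(spi):
    """spi: 2-D array of growing-season SPI-4 values; returns drought category codes."""
    classes = np.full(spi.shape, np.nan)
    classes[spi >= 0] = 0                        # no drought
    classes[(spi < 0) & (spi >= -1.0)] = 1       # mild drought (assumed bin)
    classes[(spi < -1.0) & (spi >= -1.5)] = 2    # moderate drought (assumed bin)
    classes[(spi < -1.5) & (spi >= -2.0)] = 3    # severe drought (assumed bin)
    classes[spi < -2.0] = 4                      # extreme drought (assumed bin)
    return classes
```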
Spatial and Temporal Trends in Vegetation Cover
NDVI and EVI datasets for both the annual and main growing season months of the study period were aggregated to generate spatial distribution maps. Figure 4 shows aggregated pixel-based distribution maps of multi-annual mean NDVI, EVI, temperature, and average total precipitation for the study period (18 years). The mean annual NDVI and EVI from 2000 to 2017 revealed that there were high values in the southwest and in the Central Highlands, while the values decreased towards the lowlands in the east and northwestern parts. The highest values of the vegetation indices (Figure 4) were dominant in the escarpment between the central highlands and the eastern lowlands. These areas are still rich in forests due to steep slopes and fog deposits, similarly to the Saudi Arabian mountains [38].
A pixel-based NDVI trend analysis was carried out to understand the specific trend and changes in vegetation cover during the study period. Figure 5 shows slope changes of the annual and growing season mean NDVI over 18 years, including the significance of the trends. Even though there were areas of both increasing and decreasing NDVI trends, in both maps the overall changing trend per pixel was predominantly decreasing over time; thus, the change through the years was substantial. The annual and growing season trend values ranged from −0.02483 to 0.02854 and from −0.03418 to 0.03547 (NDVI per year), respectively. In both periods, the lowest values appeared in the south-western parts of Eritrea, while the highest values were in the north-central parts of the country. In the entire country, 57.15% of all pixels showed a decreasing trend for the annual mean NDVI, with significant decreases in many places. Similarly, 58.74% of all pixels reflected a decreasing trend for the main growing season.
The annual and growing season mean NDVI trends were overlaid with the agro-ecological zones (Figure 1b). Zonal statistics were extracted based on soil type with a detailed spatial sub-unit. All mean NDVI trend values for the Moist Highland and Moist Lowland agro-ecological zones turned out to be of decreasing vegetation productivity. The Moist Lowland zone, followed by the Semi-Desert zone, showed the highest decrease in NDVI trends for the annual and main growing season periods. The Arid Lowland and Humid zones showed some increases in the trends, especially in the north-central part of the country.
Zonal statistics table extractions [39] based on the administrative regions of Eritrea (Figure 2a) summarized that the Gash Barka region, followed by the Debub region, showed the biggest decrease in NDVI trends in both the annual and main growing season. The Northern Red Sea and Maekel regions were mainly characterized by a non-significant decrease of NDVI trends in both periods. The Anseba region recorded the highest increase in NDVI trends for the two periods. Integration of the mean annual and growing season NDVI with the ESA land cover (Figure 2b) also showed that shrubs and croplands had the greatest decreasing mean NDVI trend values, whereas sparse vegetation and tree cover areas showed an increasing trend. Shrubs are the most dominant vegetation cover in the Central Highland and Eastern Region of Eritrea [40].
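As an illustration of the zonal statistics extraction described above, the sketch below summarizes a trend raster by polygon using the rasterstats package; the paper performed this step with ArcGIS zonal statistics, and the file names here are assumptions for illustration only.

```python
# Sketch of the zonal-statistics step: summarising the per-pixel NDVI trend raster by
# administrative region (or agro-ecological zone) polygons. The paper used ArcGIS;
# rasterstats is an assumed substitute, and both file paths are hypothetical.
from rasterstats import zonal_stats

region_stats = zonal_stats(
    "shapefiles/eritrea_admin_regions.shp",   # assumed path to the admin-region polygons
    "outputs/annual_ndvi_trend.tif",          # assumed path to the slope raster
    stats=["mean", "min", "max"],
)
for stats_row in region_stats:
    print(stats_row)   # one dict of summary statistics per polygon
```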
Time-Series Correlation Between NDVI and Climatic Factors
PCC was computed in order to generalize and summarize the spatial relationship and effect of the major climatic factors, such as mean total precipitation and mean temperature, with the MODIS vegetation indices for the annual and main growing season months. As can be observed in Figure 4, there was a decreasing pattern in mean values of the annual NDVI and EVI when compared visually with precipitation from the central highlands towards the eastern and western lowlands of the country, and there were much higher values in the south-western parts and the eastern escarpment. However, NDVI and EVI values were inversely correlated with temperature.
A pair-wise PCC matrix in Figure 6 represents the correlations among the major climatic variables and the MODIS vegetation indices. In the linear relationship between mean annual total precipitation and the mean vegetation index values, there was a strong positive correlation between NDVI and precipitation, with an average PCC value of 0.84 at each pixel, and the same correlation was also observed between EVI and precipitation. In the linear relationship between annual mean temperature and NDVI, there was a weak negative correlation, with a PCC value of −0.44 (see the orientation of the plots). The same was also true between mean annual EVI and temperature, with a PCC value of −0.4036.
Spatial and temporal patterns of vegetation dynamics in relation to climate factors are best investigated and correlated at the pixel level of extraction [25,31]. In this study, a pixel-based correlation between NDVI and the climatic factors was computed and mapped from 2000 to 2017. Figure 7 shows the results of the pixel-based PCC over 18 years for annual mean NDVI versus annual total precipitation and mean temperature, along with the correlation significance outputs. The results showed that there was a positive correlation between NDVI and total precipitation in 76.95% of the entire study area (28.96% was significant). However, 68.46% of the study area was negatively correlated with mean annual temperature (20.31% was significant). Based on zonal statistics extractions using the administrative regions of Eritrea (Figure 2a), the Gash Barka region, followed by the Debub region, showed the strongest positive and most significant correlations between annual mean NDVI and annual total precipitation. On the other hand, the Northern and Southern Red Sea regions resulted in the strongest negative correlations and the highest significant correlations between annual mean NDVI and temperature.
Figure 8 shows the PCC maps extracted for the main growing season months between mean NDVI and the climatic factors during 2000-2017, along with the correlation significance results. The growing season mean NDVI had a much stronger positive correlation with average total precipitation than the mean annual values; moreover, it also had a higher negative correlation with mean temperature. The percentage of pixels where the growing season NDVI and average total precipitation showed a positive correlation was 87.16% (39.34% was significant), but 70.1% was negatively correlated with mean temperature (20.22% was significant). The Gash Barka region showed the strongest positive and the most significant correlations between the growing season NDVI and precipitation. On the other side, the Anseba and Northern Red Sea regions were characterized by strong negative correlations, with some significant correlations between the growing season NDVI and temperature.
VCI and Drought Patterns
VCI maps were extracted based on the main growing season and the multi-year minimum and maximum NDVI values calculated for every pixel of the main growing season from 2000 to 2017. VCI was used to assess drought categories of different levels for the growing season, as shown in Table 1. More focus was given to the areas where the vegetation condition was deteriorating, and the relevant drought events were then mapped over time. This was linked back to the events of recurrent droughts in the region [17]. Figure 9 shows the temporal and spatial distributions of pixel-based mean VCI over the main growing season, which demonstrate the agricultural drought patterns in the country.
As depicted in the VCI maps of Figure 9, the years characterized by extensive drought in the growing season period, with mean VCI of less than 35%, were 2000, 2008, 2009, 2011, 2012, and 2015. Zonal statistics analysis of all the administrative regions (Figure 2) reveals that the regions which experienced extreme recurrent droughts throughout the observation period include the Northern Red Sea, Anseba, and Southern Red Sea. Specifically, the south-west parts of the Anseba region and the south-east portion of the Northern Red Sea region were most vulnerable to recurrent drought, with mean VCI values of less than 10%.
Figure 10 shows the mean VCI classes of the drought classification during the main growing season of 2017. Areas of extreme agricultural drought were widely distributed in the Gash Barka, Anseba, and Northern Red Sea regions, with mean VCI values of 21.71, 24.82, and 39.62, respectively. Integration of the results with the ESA land cover dataset revealed that sparse vegetation, grassland, and cropland covers, with mean VCI values of 30.28, 32.24, and 33.76, respectively, were highly exposed to moderate to extreme levels of drought conditions.
SPI and Drought Patterns
SPI is helpful in depicting the meteorological drought patterns spatially and temporally. In Eritrea, almost 80% of the population practice rain-fed agriculture, which depends on the long rains during the main growing season. Four-month average SPI values for the growing season were aggregated at the pixel level, and all SPI values less than 0 were extracted to depict the drought patterns in each year of the observation period.
Figure 11 shows the four-month aggregated SPI results, identifying the years characterized by mild to moderate drought with mean SPI values of less than 0.
Discussion
We analyzed the trend of the vegetation dynamics over the last 18 years in Eritrea. We found that the overall pattern showed a decreasing NDVI trend, and variation through the years was considerable. The pixel-based percentage of change in the NDVI slope indicated that more than 57% of the study area showed a decreasing NDVI trend through time at both the annual and main season periods, with significant changes (Figure 5). The short-term fluctuations in the NDVI values from time to time can be an important indicator of the drivers of the change. The significant negative annual NDVI trends (Figure 5a), especially in the south-western part of the country, can be associated with the impact of agricultural drought conditions on vegetation cover and crops (Figure 8), as the shrubs and croplands are the most affected land cover types, with decreasing mean NDVI trend values. This finding matches the result of Jamali et al. [41], a long-term study of annual NDVI from the Global Inventory Modeling and Mapping Studies (GIMMS NDVI), which showed a decreasing trend for Eritrea as one of the sites in Africa. A specific study based on climate change scenarios for 2012 confirmed that a large part of the south-western area was under potential loss of yield in rain-fed sorghum due to climate change [42]. A recent study by Tesfaye [43] in Ethiopia, with similar agro-ecological characteristics to Eritrea, also identified a general decline in NDVI trend and an effect of net loss in greenness.
Most of the agro-ecological zones of Eritrea, except for the Sub-Humid zone, showed decreasing mean NDVI trend values. The Moist Lowland and Semi-Desert zones showed the highest decrease in NDVI trends (Figures 1b and 5b); this could be mainly due to the increasing aridity or recurrent droughts as the response to climate variables and deforestation activities. The Sub-Humid agro-ecological zone located in the Eastern Escarpment showed an increasing mean NDVI trend, evidently because the escarpment has been protected as a national park since 2006. In the park, reforestation programs are commonly practiced. In addition to rains between June and September, the escarpment receives precipitation as light rains and fog deposits in November and December [44,45].
Precipitation dictates the vegetation dynamics of Eritrea. Growth and regeneration of vegetation in most parts of the country largely depend on the amount of rainfall received every year, which is reflected in the PCC and significance outputs of the main growing season (Figure 8a,b). The positive significant correlations between precipitation and NDVI, and by implication with the VCI, indicate vegetation productivity changes and drought impacts in the region; this can be helpful for agricultural drought monitoring [32]. The results correspond with the findings of Iyob [46], which stated that change in vegetation values in Eritrea can be explained mostly by precipitation. Furthermore, Ghebrezgabher et al. [40] concluded that the main factors for increased deforestation and desertification in Eritrea are rainfall variability and drought.
The strong negative correlations between the growing season mean temperatures and NDVI values (Figure 8c) show that slight fluctuations in temperature might affect the decline and growth of plants. This is a great concern given the increasing temperature of the country. MoLWE states that temperature has risen by about 1.7 °C in the past 60 years in Eritrea, with a large impact on biodiversity loss and overall loss of resilience of the ecosystem [17]. Moreover, Abera et al. [47] identified that the seasonality of daytime land surface temperature was negatively correlated to vegetation greenness patterns across ecoregions in the Horn of Africa.
The agricultural and meteorological drought patterns throughout the study period were correlated temporally and spatially with the VCI and SPI-4 outputs (Figures 9 and 11); this indicates that the meteorological drought signals were followed by agricultural drought occurrences for most of the years and places. The findings for the years 2000 and 2004 matched well with the meteorological drought analysis from the high-resolution SPOT image and the records of the National Food Information System (NFIS) of the Ministry of Agriculture of Eritrea. The NFIS report for the year 2004 stated that the main season was characterized by below-normal precipitation over the 24 selected stations, while rainfall at 16 stations was lower than the long-term average, at between 100 to 306 mm [48].
Gholinejad, Farajollahi, and Pouzesh [49] and Winkler [33] revealed that there were observations of severe drought signals, and satellite extractions by Ghebrezgabher et al. [50] also confirmed that drought was very severe in 2004 and 2009. The main driver of the drought was poor rainfall during the main season; the 2015 event can be associated with El Niño (El Niño Southern Oscillation), because drought was of concern in large parts of East Africa [33]. Moreover, the analysis of the VCI during 2009 and 2015 matches with an agriculturally relevant drought registered in the international EM-DAT (Emergency Events Database). The year 2009 was recorded as a typical drought year in Eritrea by EM-DAT, which is in line with the outputs of our VCI and SPI-4 analysis.
The outputs of both VCI and SPI-4 on the ESA land cover dataset showed that the sparsely vegetated areas, grassland, and cropland areas were highly vulnerable to drought conditions. Thus, several indigenous tree species are endangered in their habitats, and rain-fed agriculture may be threatened by crop failure and production fluctuations. Farmers in Eritrea are typically dependent on grasslands for grazing livestock, and hence droughts on grasslands can be devastating. Poor rainfall areas such as the Gash Barka, Northern Red Sea, and Anseba regions were the most vulnerable regions to both meteorological and agricultural drought conditions; moreover, rainfall is becoming erratic and below average because of climate variability [51]. Sea-surface effects due to adjacency to the Red Sea, an overall increase in temperature and albedo, desertification, and deforestation have all contributed to the effect of the drought on crop production and anomalies in vegetation greenness [40,47,52]. Thus, there is a further need to understand and characterize the dynamics of vegetation and land cover changes in relation to drought patterns, to build a base for local drought predictions in association with climate models and downscaled multi-model projections [53].
Conclusions
In this study, precipitation was found to be the main driving factor determining the decreasing vegetation greenness dynamics in Eritrea, as about 76.95% of the entire study area (28.96% significant) showed a positive correlation between mean annual NDVI and total precipitation. The correlation was even higher between the growing season NDVI and precipitation, with a positive correlation in 87.16% of the study area (39.34% significant). Spatial variability and fluctuations in rainfall contributed to an overall limited vegetative cover, and moderate to extreme drought conditions appeared in sparsely vegetated grassland and cropland areas. The overall decreasing NDVI trends and vegetation greenness, especially in the southwestern part of the country, correspond to increasing aridity and recurrent drought patterns, mainly due to poor rains and rising seasonal temperatures. Impacts of the droughts and climate variability are evidenced by shifts in agro-ecological zones and changes in land cover, and these changes can exacerbate poverty in the country. This implies that immediate action is needed to minimize such impacts on vegetation and ecosystem services, for example through integrated watershed management and climate-smart agriculture, afforestation and proper monitoring activities, compliance with international environment-related conventions, and the use of spatial-temporal information to maintain a sustainable environment.
The lowest values were around the southwest part of the Southern Red Sea region, close to the Denakil depression areas. Those areas with low NDVI and EVI values reflect low vegetation coverage and aridity. The mean annual values of the vegetation indices were low due to variations in climatic factors and recurrent droughts affecting the distribution and terrestrial plant growth. The multi-annual mean NDVI and EVI for the whole country were 0.158 and 0.1, respectively. Compared with the annual values, the mean NDVI and EVI increased slightly during the main growing season, with values of 0.184 and 0.12, respectively.
Figure 5. Normalized Difference Vegetation Index (NDVI) trends and their significance at annual and growing season time scales during 2000–2017. (a,b) Annual NDVI trend and its significance; (c,d) main growing season NDVI trend and its significance.
Figure 6. Pair-wise Pearson Correlation Coefficient (PCC) of mean annual NDVI (X1), EVI (X2), temperature, and average annual total precipitation during 2000–2017. The numbers represent average PCC values; strong and weak correlations are denoted by large and small font sizes, respectively.
Figure 7. Correlation coefficient between annual mean NDVI versus average total precipitation (a) and mean annual temperature (c) during 2000–2017 in Eritrea; respective correlation significances (b,d).
Figure 8. The correlation relationship between the growing season mean values of NDVI and the growing season total precipitation (a) and mean temperature (c) during 2000–2017; correlation significance levels are shown in (b) and (d).
3.3. Spatial and Temporal Drought Patterns Based on VCI and SPI
3.3.1. VCI and Drought Patterns
VCI maps were extracted based on the main growing season and the multi-year minimum and maximum NDVI values calculated for every pixel of the main growing season from 2000 to 2017. VCI was used to assess drought categories of different levels for the growing season, as shown in Table 1.
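The VCI used here is the standard per-pixel rescaling of NDVI between its multi-year minimum and maximum. The sketch below, in Python/NumPy, assumes the growing-season NDVI composites are stacked into a (years × rows × cols) array; the class thresholds are illustrative placeholders rather than the exact breakpoints of Table 1.

```python
import numpy as np

def vegetation_condition_index(ndvi_stack):
    """Per-pixel VCI (in percent) for each year of a growing-season NDVI stack.

    ndvi_stack: array of shape (years, rows, cols) holding the growing-season
    NDVI composite of each year.
    """
    ndvi_min = np.nanmin(ndvi_stack, axis=0)   # multi-year minimum per pixel
    ndvi_max = np.nanmax(ndvi_stack, axis=0)   # multi-year maximum per pixel
    span = ndvi_max - ndvi_min
    span = np.where(span == 0, np.nan, span)   # avoid division by zero
    return 100.0 * (ndvi_stack - ndvi_min) / span

def classify_vci(vci):
    """Map VCI values to coarse drought classes (illustrative thresholds:
    <10 extreme, 10-20 severe, 20-30 moderate, 30-40 mild, >=40 no drought)."""
    edges = [10, 20, 30, 40]
    return np.digitize(vci, edges)             # 0 = extreme ... 4 = no drought
```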
Figure 10. Mean Vegetation Condition Index (VCI) drought classification for the main growing season of 2017.
5 were the years of 2004, 2011, 2013, 2014, and 2015. However, values from the summary statistics show that 2002, 2008, 2011, and 2013 had a median value of less than −1; thus, moderate to extreme growing season drought conditions were dominant characteristics of these years. The most extreme median values of SPI-4 were observed in 2004 and 2014; the highest 1st quartile range was identified in 2004 and 2017, and 2015 recorded the highest 3rd quartile range of SPI-4.
Figure 11. Aggregated SPI-4 drought conditions in Eritrea during 2000–2017.
The zonal statistics report and visual analysis of the SPI-4-based drought signals show that areas such as the Southern Red Sea region, the Gash Barka area, and the northern part of the Southern Red Sea region were more prone to drought, mainly in the 2004, 2009, 2011, and 2015 growing seasons, with mean SPI-4 values ranging between −2 and −3. The intensity of the drought was more severe from 2009 onwards, as the difference can be easily observed in the time-series images (Figure 11).
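For reference, SPI-4 can be computed by accumulating precipitation over 4-month windows, fitting a gamma distribution to the accumulated totals, and mapping the cumulative probabilities onto the standard normal distribution. The sketch below is a simplified single-series version assuming monthly CHIRPS totals; a full implementation fits the distribution separately for each calendar month and pixel.

```python
import numpy as np
from scipy import stats

def spi(precip_monthly, window=4):
    """Standardized Precipitation Index for one pixel or station.

    precip_monthly: 1-D array of monthly precipitation totals (mm).
    window: accumulation period in months (4 for SPI-4).
    """
    # Rolling totals over the accumulation window
    accum = np.convolve(precip_monthly, np.ones(window), mode="valid")

    # Fit a two-parameter gamma distribution to the non-zero totals
    # (zeros handled with a mixed distribution, as is standard for SPI).
    positive = accum[accum > 0]
    shape, _, scale = stats.gamma.fit(positive, floc=0)
    prob_zero = np.mean(accum == 0)

    # Equiprobability transform of the cumulative probability to N(0, 1)
    cdf = prob_zero + (1 - prob_zero) * stats.gamma.cdf(accum, shape, loc=0, scale=scale)
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)
    return stats.norm.ppf(cdf)
```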
Figure 12 shows the drought classification based on SPI-4 extractions in the main growing season of 2017. Areas of extreme meteorological drought were mainly concentrated in the Northern Red Sea and Maekel regions, and some parts of the Anseba region, with mean SPI-4 values of −0.95, −1.26, and −0.7, respectively. Merging the results with the ESA land cover dataset shows that built-up areas, sparse vegetation, grassland, and cropland covers, with mean SPI-4 values of −0.82, −0.77, −0.67, and −0.61, respectively, were exposed to different levels of drought conditions in descending order.
Figure 12. SPI-4 drought classification for the main growing season of 2017.
Table 1. Drought classification levels using the Vegetation Condition Index (VCI) extracted from the Moderate Resolution Imaging Spectroradiometer Normalized Difference Vegetation Index (NDVI).
Table 2. Drought classification using SPI based on the 4-month CHIRPS daily aggregate dataset. | 2019-04-10T13:02:24.510Z | 2019-03-26T00:00:00.000 | {
"year": 2019,
"sha1": "a98bac091d969fe7c697e2f5a801b55be0687d4d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/11/6/724/pdf?version=1553590678",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7ca815d97a10a835f32a5a8ce1ba9f4f8f2fcd31",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Environmental Science"
]
} |
18384344 | pes2o/s2orc | v3-fos-license | Sentential Paraphrase Generation for Agglutinative Languages Using SVM with a String Kernel
Paraphrase generation is widely used for various natural language processing (NLP) applications such as question answering, multi-document summarization, and machine translation. In this study, we identify the problems occurring in the process of applying existing probabilistic model-based methods to agglutinative languages, and provide solutions by reflecting the inherent characteristics of agglutinative languages. More specifically, we propose and evaluate a sentential paraphrase generation (SPG) method for the Korean language using Support Vector Machines (SVM) with a string kernel. The quality of generated paraphrases is evaluated using three criteria: (1) meaning preservation, (2) grammaticality, and (3) equivalence. Our experiment shows that the proposed method outperformed a probabilistic model-based method by 12%, 16%, and 17%, respectively, with respect to the three criteria.
Introduction
Paraphrase generation (PG) is a useful technique in various natural language processing (NLP) applications, where it expands natural language expressions. In question answering systems, PG can be utilized to generate semantically equivalent questions. It can solve word mismatch problems when searching for answers (Lin and Pantel, 2001;Riezler et al., 2007). For multi-document summarization, it also helps to generate a summary sentence by identifying repeated information among semantically similar sentences (McKeown et al., 2002). In addition, for machine translation, paraphrasing can mitigate the scarcity of training data by expanding the reference translations (Callison-Burch, 2006).
In this study, we focus on a paraphrase generation approach, namely, sentential paraphrase generation (SPG), which takes a whole sentence as an input and generates a paraphrased output sentence that has the same meaning. Figure 1 shows an overview of the SPG process in general. For example, let us assume that we would like to generate a paraphrased sentence using bilingual parallel corpora for a Korean input sentence "삼성본사는 서울에 위치하고 있다 (The headquarters of Samsung is located in Seoul)." For simplicity, in the examples used in this paper, we assume that an input sentence has only one source phrase to be substituted/paraphrased. In our sample sentence, the source phrase is "위치하고 있다 (is located)." Currently, popular methods for SPG use phrase-based statistical machine translation (PBSMT) techniques (Bannard and Callison-Burch, 2005; Callison-Burch, 2008; Zhao et al., 2009; Wubben et al., 2010) with phrase-based paraphrase sets extracted from bilingual or monolingual parallel corpora. Such methods based on PBSMT use probabilistic models (e.g., a paraphrase model (PM) and a language model (LM)) to select the best phrase for substitution from a paraphrase set, which contains phrases that share the same meaning, to produce a paraphrased sentence. Probabilistic methods improve as the size of the corpora increases with increased frequency of the phrases. However, these methods tend to encounter two problems when applied to agglutinative languages (e.g., Korean, Japanese, and Turkish), which are morphologically rich languages.
The first problem is that it is very difficult to obtain a reliable probability distribution in agglutinative languages. Isolating (e.g., Chinese) and inflectional (e.g., Latin and German) languages employ fewer lexical variants to represent diverse grammatical functions or categories, whereas in agglutinative languages this process leads to an enormous number of possible inflected variants of a word. This is because a word is formed by combining at least one root, which represents a meaning, with various function or bound morphemes (e.g., postpositional particles and affixes). Furthermore, agglutinative languages suffer from the problem of resource scarcity (Wang et al., 2013). This problem becomes even more severe when obtaining an appropriate probability distribution for each variant, because the frequency of each phrase in a paraphrase set is less than in other languages, given the same quantity of corpora. In this study, therefore, we propose to use Support Vector Machines (SVM) for classification, which select the best paraphrase without employing probability information.
The second problem in using previously proposed probabilistic-based methods with agglutinative languages is that these methods lead to lower grammaticality because these methods do not consider the internal structure of a source phrase and the internal structures of dependent words of the source phrase. These methods take into account only the surface form distribution. It is very difficult to identify grammatically correct candidates in the paraphrase sets. This problem appears to be much more severe in agglutinative languages than in isolating or inflectional languages.
For this reason, in this study, we propose to utilize the similarity of syntactic categories, grammatical categories, and contextual information between the source phrase and its candidate paraphrases, when selecting the best paraphrase.
In this paper, we propose a novel SPG method that deals with the two problems mentioned above, for the Korean language, which is an agglutinative language. In the remainder of this paper, we review background literatures for our method on paraphrase generation in section 2; describe our proposed method in section 3, explain the experimental settings and results in section 4, and conclude in section 5.
Probabilistic Model-Based Paraphrase Generation
An SPG process begins with paraphrase phrases extraction from monolingual or bilingual parallel corpora. In this section, we review a popular paraphrasing method introduced by Bannard and Callison-Burch (2005). Since this is one of the very first studies to be conducted using bilingual parallel corpora and is a fundamental method in research on paraphrasing with bilingual parallel corpora, we used it as the baseline for our comparative experiment in this paper.
The method assumes that phrases sharing commonly aligned foreign phrases are likely to be paraphrases of each other. For example, English phrases e1 and e2 that share commonly aligned foreign phrases f can be regarded as paraphrases of each other, and their "paraphrase probability" is expressed as follows:

p(e2 | e1) = Σ_f p(f | e1) p(e2 | f)

Given a source phrase e1 in a new input sentence, the best paraphrase ê2 is chosen from the candidate phrases e2 as expressed in the equation below:

ê2 = argmax_{e2 ≠ e1} p(e2 | e1) p(w-2 w-1 e2 w+1 w+2)

Since we use a trigram LM to decide how acceptable each e2 is for a given input, w-1 and w-2 are the two words preceding e1 in the input sentence, and w+1 and w+2 are the two words following e1. The source phrase e1 is substituted with the best paraphrase ê2, which has the highest probability.
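A minimal sketch of this pivot-based selection is given below. It assumes that phrase-alignment co-occurrence counts are available as nested dictionaries (count_ef[e][f]), that count_e[e] holds the total count of each English phrase, and that lm_logprob is a user-supplied trigram language-model scorer; these names are illustrative rather than taken from any particular toolkit.

```python
from math import log

def paraphrase_prob(e1, e2, count_ef, count_e):
    """p(e2 | e1) = sum_f p(f | e1) * p(e2 | f), estimated from alignment counts."""
    total = 0.0
    for f, c_e1f in count_ef.get(e1, {}).items():
        p_f_given_e1 = c_e1f / count_e[e1]
        c_f = sum(counts.get(f, 0) for counts in count_ef.values())   # marginal count of f
        p_e2_given_f = count_ef.get(e2, {}).get(f, 0) / c_f if c_f else 0.0
        total += p_f_given_e1 * p_e2_given_f
    return total

def best_paraphrase(e1, candidates, context, count_ef, count_e, lm_logprob):
    """Pick the candidate maximizing paraphrase probability times the trigram-LM
    score of the candidate placed between the two preceding and two following words."""
    left, right = context                      # e.g. (w-2, w-1) and (w+1, w+2)
    def score(e2):
        pp = paraphrase_prob(e1, e2, count_ef, count_e)
        lm = lm_logprob(left + tuple(e2.split()) + right)
        return (log(pp) if pp > 0 else float("-inf")) + lm
    return max((c for c in candidates if c != e1), key=score)
```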
SVM with a String Kernel
In this study, we propose an SPG method using SVM, instead of using the probabilistic-based model, which is used in the approach described in section 2.1.
An SVM is a linear classifier that finds a linear hyper-plane that separates positive and negative instances of labeled samples with the largest margin. This classifier is designed to reduce the generalization error rate, which is the ratio of incorrectly predicted classes to the novel inputs, because it is less overfitted to the training data set than other methods (Kozareva and Montoyo, 2006). With reference to sparseness of each lexical variant in agglutinative languages, it is also more tolerant than probabilistic-based models because it does not largely depend on the frequency of instances.
For problems that are not linearly separable, SVM uses a kernel function that implicitly transforms a non-linear problem into a higher-dimension space and makes the problem into a linearly separable one. A kernel is a similarity function between a pair of instances. In particular, since string kernels are useful for measuring the similarity of non-fixed-size feature vectors (e.g., text documents, the dependency tree of a sentence, and syntax trees) (Erkan et al., 2007), we used a string kernel given that the features that consider the morphological structures of words are variable in length. More specifically, we use the edit distance kernel function (Erkan et al., 2007) as follows:

K(x_i, x_j) = exp(−γ · edit_distance(x_i, x_j))

Here, edit_distance is defined as the Levenshtein distance between strings x_i and x_j, i.e., the minimum number of edits (deletions, insertions, or substitutions at the word level) required to transform one string into another. One of the advantages of using this kernel is that it takes into account the order of the strings in the structured data (e.g., a dependency path tree), as opposed to other string kernels (e.g., the cosine similarity kernel), which consider only the common terms when measuring similarity.
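A minimal sketch of this kernel follows, treating each feature "string" as a sequence of tokens and computing the Levenshtein distance at the word level as described; γ = 0.1 matches the value reported in section 4.1, and the function names are illustrative.

```python
import numpy as np

def edit_distance(x, y):
    """Word-level Levenshtein distance between two token sequences."""
    m, n = len(x), len(y)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,          # deletion
                          d[i, j - 1] + 1,          # insertion
                          d[i - 1, j - 1] + cost)   # substitution
    return int(d[m, n])

def edit_distance_kernel(x, y, gamma=0.1):
    """K(x, y) = exp(-gamma * edit_distance(x, y))."""
    return float(np.exp(-gamma * edit_distance(x, y)))
```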
In our study, the class for SVM is each phrase in a paraphrase set, which contains a source phrase. In addition, the SPG used in our study can be regarded as a multiclass classification problem. Thus, to solve this problem with a binary classifier, we adopted a "one-against-one" approach (Chang and Lin, 2011). This approach constructs k(k-1)/2 classifiers, where k is the number of classes, with a training data set from two classes. A new data point is allocated to the class with the most votes during each binary classification.
Paraphrase Generation Using SVM with a String Kernel
This section describes our proposed SPG method, which uses an SVM with a string kernel. Figure 2 shows an overview of this method.
Training Phase
In the training phase, phrase alignment is first conducted manually using training sentences, which are composed of bilingual parallel corpora of the Korean and English languages, as shown in Figure 1. Phrase alignment can be automatically conducted using the GIZA++ toolkit (Och and Ney, 2003) and phrase alignment heuristics (Koehn et al., 2003). However, in this study, we evaluate the performance of our generation method independently from the quality of automatic word or phrase alignment algorithms. Therefore, we conduct manual alignment, which is much more accurate than automatic alignment. In addition, applying these alignment tools to the Korean language is not appropriate because they only obtain a few correct results.
Figure 2: Overview of the proposed SPG method.
We align Korean phrases that range from unigrams to trigrams with English. For example, "뉴욕에 위치하고 있다 = is located in New York City" in the bilingual parallel corpora shown in Figure 1 can be aligned as follows:
• 뉴욕에 (unigram) = in New York City
• 뉴욕에 위치하고 (bigram) = located in New York City
• 뉴욕에 위치하고 있다 (trigram) = is located in New York City
With these aligned phrases, we extract paraphrase sets by grouping phrases that have common foreign phrases (i.e., English) because they are likely to have the same meaning (e.g., is located = 위치하고 있다, 자리잡은 장소는, 자리잡고 있다 in Figure 1).
Next, for feature extraction, three types of features are generated for the training phrases in paraphrase sets, referring to the sentences that the phrases are originally contained in: syntactic categories (SC), grammatical categories (GC), and contextual information (CI). For each type of feature, characteristics of the training phrase as well as the dependent words that precede and follow the training phrase in a Korean training sentence are extracted.
These three features help enhance the paraphrasing method using the agglutinative languages. Using the SC and GC features helps maintain the grammaticality of the source phrase. However, given the high variation in postpositional particles or affixes in agglutinative languages, there is a low probability of matching the SC and GC features in the source and training phrases. Therefore, by considering the SC and GC features of the dependent words in addition to the features of the source phrase, our method considers the context of the source phrase in terms of grammaticality to find the best candidate for paraphrasing. The CI features have a similar purpose in that they consider the context of the word sense of neighboring words.
• Syntactic Categories (SC): This feature helps to select a phrase with an acceptable syntactic type based on the structure of a given sentence. Morphological analysis is conducted for three phrases: the training phrase as well as the two dependent words preceding and following the training phrase. Based on the result of the morphological analysis, features are extracted such as phrase type (e.g., noun phrase (NP), verb phrase (VP)), case (e.g., subject (SBJ), and object (OBJ)), and tags of morphemes (e.g., pronoun (np) and case particle (jc)) for the three phrases.
• Grammatical Categories (GC): This feature helps to select a phrase that preserves the grammatical categories of a source phrase. Grammatical categories are extracted for the training phrase as well as the dependent words preceding and following the training phrase. They are extracted by considering the affixes of each phrase or word. Sample features for GC include the sentence type (e.g., interrogative sentence (INT), declarative sentence (DEC)), voice (e.g., passive (PAS), active (ACT)), and tense (past (PAST), present (PRES), and future (FUTU)). This feature is labeled as "N/A" if a corresponding feature does not exist.
• Contextual Information (CI):
This feature helps to select a phrase that has the same word sense as a source phrase. Contextual information is extracted by taking the roots for the preceding and following dependent words.
The features are represented as a string instead of a numerical feature vector since a string kernel is used. Finally, phrases in a paraphrase set with identical meanings and corresponding features are trained together using the SVM to generate a paraphrase model. This model is used in the paraphrase generation phase, as described in section 3.2. Figure 3 illustrates the three types of features used in our model for one sentence from the bilingual parallel corpora example shown in Figure 1.
Paraphrase Generation
In the paraphrase generation step, a source phrase in the input sentence is replaced by a candidate phrase in its corresponding paraphrase set, and as a result, a paraphrased sentence is produced. This step starts by locating a source phrase in an input sentence. For the input sentence "삼성 본사는 서울에 위치하고 있다. (The headquarters of Samsung is located in Seoul.)," as shown in Figure 1, our method first selects a source phrase. If multiple candidates appear, one with the maximum length of words is selected. If both phrases, "위치하고 있다 (is located)" and "위치하고 (located)" are possible candidates, for instance, the longer phrase "위치하고 있다 (is located)" will be selected. Next, dependent words preceding and following the source phrase are used together with the source phrase to obtain the three types of features described in section 3.1. Next, the SVM classifier is used to identify the best phrase in the paraphrase set for the source phrase that was built during the training phase.
Finally, the source phrase is substituted with the selected best paraphrase. Although in this example, we assumed the input sentence to have only one source phrase for simplicity, in our actual implementation the paraphrase generation process was repeated for multiple source phrases in the input sentence as shown in Table 1.
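This generation procedure can be sketched as follows, assuming paraphrase_sets maps a surface phrase to the identifier of its paraphrase set, extract_features wraps the morphological/dependency analysis that produces the SC/GC/CI feature string, and classify wraps the trained one-against-one SVM; these names are illustrative rather than taken from the paper's implementation.

```python
def find_source_phrases(tokens, paraphrase_sets, max_n=3):
    """Greedily locate the longest n-grams (up to trigrams) present in any paraphrase set."""
    spans, i = [], 0
    while i < len(tokens):
        for n in range(min(max_n, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in paraphrase_sets:          # phrase -> id of its paraphrase set
                spans.append((i, n, phrase))
                i += n
                break
        else:
            i += 1
    return spans

def generate_paraphrase(tokens, paraphrase_sets, extract_features, classify):
    """Replace each source phrase with the best paraphrase chosen by the trained SVM."""
    out, last = [], 0
    for i, n, phrase in find_source_phrases(tokens, paraphrase_sets):
        out.extend(tokens[last:i])
        features = extract_features(tokens, i, n)              # SC/GC/CI feature string
        out.append(classify(paraphrase_sets[phrase], features))  # best phrase in the set
        last = i + n
    out.extend(tokens[last:])
    return " ".join(out)
```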
Evaluation
We evaluated our proposed method by comparing it with the popular method proposed by Bannard and Callison-Burch (2005). This baseline method was implemented by using probabilistic models and is described in section 2.1. Our proposed method, string kernel-based paraphrase generation (SKBPG), was implemented by using the edit distance string kernel, which is described in section 2.2.
Table 1: Examples of test sentences (TS) and paraphrased sentences obtained using each method (Baseline and SKBPG). In the examples of sentences, the same superscript numbers indicate the source phrase in a TS and the paraphrased phrase selected by each method.
Experimental Resources
In order to generate paraphrase sets, we used 998 randomly selected English sentences from the Text REtrieval Conference (TREC) question answering track (2003–2007) and their translations (Korean words: 5,286; English words: 7,474). The question answering track was selected so that we could apply our method to a question answering system.
For the test sentences, 100 quiz sentences from Korean TV quiz shows (e.g., Golden Bell Challenge!) were selected. The sentences had to contain at least one possible source phrase with multiple candidates in its corresponding paraphrase set. Table 1 shows examples of the test sentences and the paraphrased sentences obtained using each method.
For the baseline method, we used 52,732 Korean sentences (Korean words: 322,306) in KAIST language resources (Choi, 2001) for training trigram LMs, in addition to the questions from TREC. This additional resource was included to stabilize the LM probability distribution by expanding the corpus size. The LM probabilities were acquired using the IRSTLM toolkit (Federico and Cettolo, 2007), and the conditional probabilities in the LM were calculated by applying modified Kneser-Ney smoothing.
For the SKBPG method, we used the ETRI linguistic analyzer (Lee and Jang, 2011) for dependency parsing and morphological analysis. For the SVM, we used LIBSVM-string (Guo-Xun Yuan, 2010; Chang and Lin, 2011), which supports the edit distance kernel option and multiclass classification based on the one-against-one approach, as described in section 2.2. The edit distance kernel parameter (γ) was set to 0.1.
Evaluation Metrics
The Korean paraphrase pairs that we generated were evaluated by two native Korean speakers according to the following three criteria:
• Meaning Preservation (MP): Does a generated paraphrase preserve the meaning of the source phrase?
• Grammaticality: Is the generated sentence grammatically acceptable?
• Equivalence: Is the generated sentence an acceptable paraphrase of the input sentence overall?
We used two types of scales as shown in Table 2. These criteria were adopted from previous research (Callison-Burch, 2008;Fujita et al., 2012). In terms of the inter-annotator agreement using Kappa, K = .412 for the 5-point scale, which is considered as "Moderate." For the binary scale, K = .612, which is regarded as "Substantial" (Landis and Koch, 1977;Carletta, 1996).
Results and Discussion
In this section, we summarize the results of our manual evaluation, which show that our method outperformed the baseline method, as shown in Table 3 and Table 4. For the manual evaluation using the 5-point scale, an independent-samples t-test showed that SKBPG significantly outperformed the baseline for both meaning preservation (t(398) = 2.564, p = .011) and grammaticality (t(398) = 3.501, p = .001). The evaluation using the binary scale also showed that SKBPG outperformed the baseline by 12%, 16%, and 17% for the three criteria of meaning preservation, grammaticality, and equivalence, respectively.
Interestingly, even though more resources were used in the baseline method for training the LM (52,732 Korean sentences), it did not outperform SKBPG. This suggests that our method is more efficient in terms of using fewer resources with less amount of data storage space. One plausible reason for such efficiency is that given that agglutinative languages have a large number of variants of lexicons for a root, it is difficult to account for most of the variations. Since the baseline method uses probabilistic models that utilize the frequency of each variation, much more data is needed. Another potential reason for the efficiency is that the word order for agglutinative languages is not critical for maintaining grammaticality. As opposed to isolating languages in which the word order determines grammatical functions, agglutinative languages use the postpositional particles or affixes of a root in a word to determine grammatical functions. Therefore, rather than using a LM that calculates the probability of contiguous words sequences, utilizing dependency grammar between words, as in SKBPG, can be more efficient.
Conclusion
In this study, we proposed a novel paraphrasing method, which considers the inherent characteristics of agglutinative languages by using an SVM with a string kernel.
Our evaluation of the generated paraphrases showed that the proposed method outperformed the probabilistic model-based method by 12%, 16%, and 17%, with respect to meaning preservation, grammaticality, and equivalence even with fewer resources than in the baseline method.
A limitation of this study is that the data set was aligned manually for paraphrase extraction between the two languages and due to this reason our data set size was relatively small with 1515 paraphrase sets. This limitation led to several problems in our evaluation. Sometimes, there were no appropriate grammatically correct candidates in the paraphrase sets for a certain input sentence. This also led to reduced coverage of paraphrases.
In addition, our method does not consider many semantic features such as semantic roles and named entities. This suggests that our method may be fragile in preserving the meaning of the source sentence as the data size increases.
Therefore, we plan to work on automatic paraphrase extraction method tailored to agglutinative languages in order to increase the size of our data set. We also expect to expand the feature set by considering additional semantic features for our future work. | 2016-01-24T08:34:15.539Z | 2014-12-01T00:00:00.000 | {
"year": 2014,
"sha1": "7121b1c8e5b17f4aae8fc5cbc6a04723c6ecf866",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "7121b1c8e5b17f4aae8fc5cbc6a04723c6ecf866",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
62829728 | pes2o/s2orc | v3-fos-license | Formation of santonide and parasantonide in the pyrolysis of santonic acid a new insight into an old reaction
Thermal reactions of santonic acid (1) in solution and flash vacuum pyrolysis were carried out in order to study the mechanism of formation of parasantonide (3) and santonide (4). Heating an acetic acid solution of 1 in a sealed tube at 200 ºC led to 3, while heating at 175 ºC afforded 4. Flash vacuum pyrolysis of 1 was carried out between 300 and 550 ºC, at 10⁻² torr and contact times of 10⁻² s. Results showed that 1 decomposed by at least two different mechanisms: one involving isomerization and lactonization affording 4, which in turn affords 3, and the other involving radical scission and formation of aromatic and conjugated products. A mechanism involving isomerization to isosantonic acid (6) prior to lactonization is proposed for the formation of 4.
Introduction
The santonides, santonide and parasantonide, are isomeric neutral substances that were obtained by acidic reflux and thermolysis of santonic acid (1). 1 While these compounds were obtained at the end of the 19th century by the Italian school led by Cannizzaro during their classical studies on the structure of the sesquiterpene santonin (2), 2 their structures were not elucidated until 1950 when Woodward and Kovach reported a detailed description of the transformation of santonic acid into parasantonide and furthermore, proposed its structure as 3.These authors also established that the configuration at C-11 of 3 is opposite to the one in the same carbon of santonic acid (1) and further suggested that santonide (4) is the corresponding diastereoisomer (Figure 1). 3
Figure 1
Although some of the transformations of 1 4 have been studied after the pioneering series of publications of Woodward et al.,1a,3 to the best of our knowledge, no additional reports exist in the literature on the transformation of 1 into 3 and 4 and on the confirmation of their structures.We have recently confirmed the complete structure of 3 and that of its alkaline hydrolysis product, parasantonic acid (5) (Figure 2), by X-ray crystallographic analysis, 5 and, we wish to report now, a careful analysis of the transformation of 1 into 3 and 4 by static pyrolysis and flash vacuum pyrolysis.
Results and Discussion
For the preparation of parasantonide (3) from (−)-santonic acid (1) we initially followed the procedure described by Woodward and Kovach.³ Instead of the direct isolation of 3 from the crude thermolysis product, we found it more convenient, as the authors suggested, first to obtain crystalline methyl parasantonate (5a) (Figure 2), which, upon saponification to 5 followed by treatment with acetic anhydride under reflux, afforded crystalline 3. All our attempts to prepare santonide (4) by heating an acetic acid solution of 1 in a sealed tube at 200 ºC, as described by Francesconi,⁶ led to 3. In our hands, this procedure proved to be more convenient for the preparation of 3 than those previously described. However, when we followed the procedure of Cannizzaro and Valente⁷ [heating in vacuum at 175 ºC for 4 h the residue obtained from an acetic acid solution of 1 that had been heated at reflux for 18 h], 4 was obtained in 22% yield (based on the recovered santonic acid) after crystallization from diisopropyl ether. These results showed a broad trend in which the formation of 4 is favored at lower temperatures, and at higher temperatures 4 equilibrates to 3.
In order to follow the progress of the transformation of santonic acid (1) into santonide (4) and parasantonide (3), we conducted an experiment in a sealed NMR tube by heating a solution of 1 in acetic-d₃ acid-d₁ at 200 ºC. At different intervals, the reaction was monitored by ¹H NMR based on the signals of the Me-15 of 3 and 4 at δ 1.63 and 1.65, respectively. After 2 h of heating, the exclusive formation of 4 was detected, whereas after 4 h and 6 h, approximately 2:1 and 1:2 mixtures of 4 and 3 were observed. The geometries of 3 and 4 were fully optimized at the B3LYP level using the standard 6-31G* basis set and the relative energies were calculated.⁸ The calculations showed that 4 is, in fact, less stable (2.2 kcal mol⁻¹) than 3. From this it is reasonable to propose that 4 is formed under kinetic control whereas 3 is the product of thermodynamic control, or that 3 is formed from 4.
To have more information on these transformations, some flash vacuum pyrolysis (fvp) experiments were carried out.The characteristics of fvp allow one to obtain kinetic control products and/or to avoid further transformations as products leave the hot zone in a short time.Thus, fvp reactions of 1 were studied between 300 and 550 ºC with pressures of ~ 10 -2 torr and contact times of ~ 10 -2 s.At 550 ºC no more santonic acid was detected in the reaction products.In almost all of the reactions nitrogen was used as carrier gas and in some reactions toluene was used as carrier in order to check for the presence of radicals.Products arising from different competing reactions were detected or isolated.Thus, isomerization, radical scission, lactonization and decarboxylation reactions were found.Reactions affording parasantonide (3) and santonide (4) will be discussed first, as they are the focus of this study.FVP reactions of 1 at 300 -400 ºC afforded small amounts of two isomeric products in the analysis by GC/MS.One of them was identified as isosantonic acid (6) by comparison with pure sample.It is worth noting here that pure 6 decomposed to an isomeric product in the column chromatography so, it was assumed that 1 isomerizes to 6 under fvp conditions at lower temperatures.A possible isomerization mechanism, reminiscent of the Dowd-Beckwith ring expansion, 9 is depicted in Scheme 1.It should be recalled that the mass spectra of 5 and 6 show an important peak of m/z 246 (lactone) and a low intensity peak of the molecular weight (m/z 264), while the mass spectrum of 1 has an important peak of the molecular weight and a small peak of the lactone.These results support the suggestion of Woodward et al. 3 on the difficulties for lactonization of 1 due to structural impediments.
Scheme 1
Parasantonide (3) and santonide (4) appeared in reactions above 400 ºC with no traces of isosantonic acid (6), showing that under these conditions 6 readily lactonizes to 4 (Scheme 2). At this point it was necessary to find out how parasantonide is formed, i.e., from isosantonic acid or from santonide. The solution experiments described above demonstrated that 3 is formed at higher temperatures than 4.
Scheme 2
The analysis by ¹H-NMR and TLC showed that the yield of 4 was higher than that of 3, confirming that the former is the first lactonization product. To get more information on these reactions, some fvp experiments of 3 and 4 were carried out at 500 ºC. Results are shown in Table 1; relative amounts were calculated by ¹H-NMR. It is worth noting that no other products were detected in these reactions, showing that all of the products in the fvp of 1 coming from radical and diradical reactions (discussed later) arise from decomposition of 1 and not from 3 and 4. No evidence of bibenzyl was found when reactions were carried out with toluene as carrier gas, confirming that there are no radicals in this reaction.
As can be seen in Table 1, 3 and 4 interconvert under the experimental conditions. Formation of 3 is clearly favoured, showing that it may be the more stable isomer (as indicated by the calculations described above). This interconversion reaction can be explained by the mechanism depicted in Scheme 3.
Scheme 3
This C–CO bond fission in the lactone affording a diradical was already described in fvp reactions of α-coumaranone leading to CO extrusion over 700 ºC, although in that case it was an aromatic C–CO bond fission. This rather unusual behavior was explained by thermochemical calculations.¹⁰ The interconversion reaction without CO extrusion can easily be explained because the reaction temperature is lower than that reported for α-coumaranone.
As mentioned earlier, fvp reactions of 1 are not as clean as desired, and above 450 ºC some other competitive reactions also take place. Thus, products arising from decarboxylation (m/z 220) and from fragmentation (m/z 218 and 190), as well as two other compounds with m/z 246, like 3 and 4, were also formed. A description of these results and the quantification of the reactions at 470 and 500 ºC are reported in Table 2; a tentative mechanism and the structures of the different products are described in Scheme 4. Because there are many isomeric products, they were separated as groups of compounds by polarity by column chromatography with different solvents (hexane/ethyl acetate 9:1, chloroform, ethyl acetate and methanol). The reported quantifications were done after separation of each fraction by preparative chromatography (hexane:ethyl acetate 9:1).
Scheme 4
The structures 7–11 agree with the many methyl and vinyl groups in the NMR spectrum and with the MS of each one after separation by GC/MS. These are typical fragmentations of cyclic compounds like terpenes, taking place through different diradicals.¹¹ Compound 12 is the expected product from decarboxylation of 1. The presence of radicals was confirmed by the appearance of bibenzyl in reactions with toluene as carrier gas.
These parallel reactions should have been present in the experiments carried out by Woodward et al. 3 since they reported only low yields of 3.
Conclusions
Here we have reported a new study of the thermal reactions of santonic acid (1) leading to parasantonide (3) and santonide (4). Under these conditions, 1 undergoes parallel reactions arising from the different diradicals that can be formed. One of these paths leads to isosantonic acid (6), which lactonizes to 4. We have also demonstrated that 3 is formed from 4 by radical opening of the lactone ring and epimerization at C-11, followed by ring closure to the more stable isomer. The other products found come from decarboxylation and fragmentation of 1 through the formation of different diradicals.
Experimental Section
General Procedures. Gas chromatography/mass spectrometry (GC/MS) analyses were performed with an SE-30 column, using helium as carrier gas at a flow rate of 1 mL/min; the heating rate was different for each compound or mixture of compounds. Mass spectra were obtained in the electron impact (EI) mode using 70 eV as ionization energy. ¹H-NMR and ¹³C-NMR spectra were recorded in CDCl₃ with a Bruker 200 FT spectrometer (at 200 MHz) and in DMSO with a Varian UNITY INOVA spectrometer (200 MHz and 400 MHz for ¹H and 100 MHz for ¹³C). Chemical shifts are reported in ppm downfield from TMS. IR spectra were recorded with a Thermo Nicolet AVATAR 320 FT-IR spectrophotometer. Melting points were determined with a Büchi apparatus and are uncorrected. Column and thin layer chromatographies were performed on silica gel. Solvents were of analytical grade.
General flash vacuum pyrolysis experiments
Flash vacuum pyrolysis reactions were carried out in a Vycor glass reactor using a Thermolyne 21100 tube furnace with a temperature controller. Oxygen-free dry nitrogen was used as carrier gas. Samples to be pyrolyzed were of ~40 mg. Contact times were around 10⁻² s and pressures of 0.02 Torr were used. Products were trapped at liquid-air temperature, extracted with solvent, and submitted to different analysis or separation techniques.
Flash vacuum pyrolysis of 1
Reactions of 1 were carried out between 300 and 550 ºC, at 10⁻² torr and contact times of ~10⁻² s. Reaction crudes were extracted with chloroform and submitted to GC/MS, column chromatography and/or preparative chromatography.
Table 2. FVP of 1: GC/MS after column chromatography. ᵃ Others: aromatic compounds such as indene, naphthalene, and derivatives.
…and Kovach:³ santonic acid was first converted into (…) and then into parasantonide (3). ¹H and ¹³C NMR data and HMBC correlations of 3 in CDCl₃ at 500 MHz and 125 MHz, respectively. The small differences in coupling constants (7.3 Hz vs. 7.1 Hz) are probably due to the low digital resolution of the spectrometer. | 2018-12-21T01:28:51.423Z | 2006-01-01T00:00:00.000 | {
"year": 2006,
"sha1": "4c7d315d06f3cc9bcb3c4e5508f45e8ac85ff59f",
"oa_license": "CCBY",
"oa_url": "https://www.arkat-usa.org/get-file/19889/",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c548f13a9588acc72160cd0ba6fe0941fce38ce1",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
59021492 | pes2o/s2orc | v3-fos-license | Planar Hall torque
Spin-orbit torques in bilayers of ferromagnetic and nonmagnetic materials hold promise for energy efficient switching of magnetization in nonvolatile magnetic memories. Previously studied spin Hall and Rashba torques originate from spin-orbit interactions within the nonmagnetic material and at the bilayer interface, respectively. Here we report a spin-orbit torque that arises from planar Hall current in the ferromagnetic material of the bilayer and acts as either positive or negative magnetic damping. This planar Hall torque exhibits unusual biaxial symmetry in the plane defined by the applied electric field and the bilayer normal. The magnitude of the planar Hall torque is similar to that of the giant spin Hall torque and is large enough to excite auto-oscillations of the ferromagnetic layer magnetization.
Electric current in bilayers of ferromagnetic (FM) and nonmagnetic (NM) materials can apply torque to the magnetization of the FM [1]. Such torque arising from spin-orbit coupling (SOC) of conduction electrons can be large enough to drive switching [2,3] and autooscillations [4][5][6][7][8] of the FM magnetization. Manipulation of magnetization by spin-orbit torques (SOTs) may be employed in the next generation of magnetic memories and spin torque nano-oscillators (STNOs) that can serve as tunable microwave sources [9]. The most studied SOTs in NM/FM bilayers are the spin Hall torque (SHT) [10] arising from SOC in the NM layer and the Rashba torque (RT) [11] originating from SOC at the NM/FM interface [12]. Recently, an unconventional SOT induced by low crystalline symmetry of the NM layer was observed [13]. Understanding of all types of SOTs that can arise in NM/FM bilayers is needed for formulation of a comprehensive SOT theory and for engineering practical SOT devices.
Here we report an unconventional SOT in NM/FM bilayers that arises from SOC in the FM layer, as opposed to the NM layer or NM/FM interface. This torque modulates magnetic damping of the FM layer in the plane where the effect of SHT and RT on the FM damping is zero. The origin of this SOT can be traced to a spinpolarized electric current in the FM known to give rise to anisotropic magnetoresistance (AMR) and the planar Hall effect (PHE) [14], and thus this SOT can be called the planar Hall torque (PHT). The magnitude of PHT is found to be large enough to cancel magnetic damping of the FM and drive auto-oscillations of the FM magnetization.
We measure SOT in NM/FM nanowire devices schematically shown in Fig. 1A along with the coordinate system used throughout this report (x is the unit vector along the nanowire,ẑ is the unit vector normal to the bilayer plane). We employ highly resistive Ta seed and cap layers that reduce roughness of the sputter-deposited NM/FM bilayer and prevent its oxidation [15]. The resulting substrate/Ta(3 nm)/NM/FM/Ta(4 nm) multilayers are patterned into 40-50 nm wide, 40 µm long nanowires via e-beam lithography and ion mill etching. Two leads are used to apply electric current over a 80-190 nm long central part of the nanowire. We employ a [Co(0.85 nm)/Ni(1.28 nm)] 2 /Co(0.85 nm) superlattice as a FM layer to take advantage of the perpendicular magnetic anisotropy (PMA) in this system [16,17] that nearly cancels the demagnetizing field of the FM film. The room temperature (T = 295 K) AMR curves shown in Fig. 1B demonstrate that magnetization of the Ta/Au(3.9 nm)/FM/Ta nanowire easily saturates for both in-plane and out-of-plane magnetic fields.
We characterize SOTs by field-modulated spin torque ferromagnetic resonance (ST-FMR), which measures the rectified voltage Ṽ_mix generated by the nanowire at the frequency of the magnetic field modulation [15,18]. ST-FMR spectra of the Ta/Au(3.9 nm)/FM/Ta device measured at T = 295 K as a function of magnetic field H applied in the xz plane at 45° to the sample normal (Fig. 1C) reveal several spin wave resonances in Ṽ_mix(H). Fig. 1D demonstrates that the linewidth of these resonances is altered by a direct current I_dc. We fit the ST-FMR resonances by the magnetic field derivative of the sum of Lorentzian and anti-Lorentzian functions [18] and plot the linewidth ∆H of the lowest-frequency mode (labeled M1) as a function of J_dc in Fig. 1E (blue symbols). Here J_dc = I_dc/w is the sheet current density, where w is the nanowire width. The linewidth, proportional to the FM magnetic damping α, is found to be a linear function of J_dc, which reveals the presence of a SOT that can tune α (referred to as an antidamping SOT in this report). Since the antidamping action of SHT is zero for magnetization lying in the xz plane [10], the data in Fig. 1E reveal an unconventional SOT. Furthermore, we find the magnitude of SHT to be small in this system, as evidenced by the weak dependence of ∆H on J_dc for magnetization nearly parallel to the y axis, where the strongest antidamping effect of SHT is expected (red symbols in Fig. 1E). Fig. 1F shows the angular dependence of the slope of ∆H as a function of J_dc, which quantifies the strength of an antidamping SOT. For magnetization lying in the xy plane (red symbols), d∆H/dJ_dc is small and is consistent with a 360° periodicity expected for SHT. In contrast, d∆H/dJ_dc in the xz plane (blue symbols) is 180°-periodic and is large when magnetization makes a 45° angle with respect to the sample normal. The data in Fig. 1E show that ∆H extrapolates to zero at the critical sheet current density J_c ≈ −4.4×10⁴ A/m, beyond which the FM magnetization is expected to auto-oscillate.
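The extrapolation to zero linewidth amounts to a linear fit of ∆H versus J_dc. A minimal sketch is given below; the data points are illustrative placeholders (chosen to be consistent with the reported J_c ≈ −4.4 × 10⁴ A/m), not measured values.

```python
import numpy as np

# Illustrative linewidth (Oe) versus sheet current density (A/m), not measured data.
j_dc = np.array([-3e4, -1.5e4, 0.0, 1.5e4, 3e4])
delta_h = np.array([10.0, 21.0, 32.0, 43.0, 54.0])

# Linear fit dH = a*J_dc + b; the slope a quantifies the antidamping SOT and
# the zero crossing gives the critical sheet current density J_c.
a, b = np.polyfit(j_dc, delta_h, 1)
j_c = -b / a
print(f"d(dH)/dJ_dc = {a:.3e} Oe m/A, J_c = {j_c:.3e} A/m")
```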
To reduce ohmic heating, we study the auto-oscillatory dynamics at T = 77 K. Fig. 2 shows spectra of the microwave signal generated by the sample as a function of J dc for four directions of a 1.7 kOe magnetic field applied in the xz plane at 45 • to the sample normal. The sample generates microwave signal for one current polarity when |J dc | exceeds 1.7 × 10 4 A/m. The data in Fig. 2 reveal that J c < 0 for θ = 45 • and θ = 225 • , while J c > 0 for θ = 135 • and θ = 315 • in agreement with ST-FMR data in Fig. 1F. Several auto-oscillatory modes are excited above J c , with the highest-amplitude mode M 1 generating up to 60 pW of microwave power [15].
The angular dependence of d∆H/dJ_dc in Fig. 1F is well fit by cos(θ)sin(θ)cos(φ) = (m̂·x̂)(m̂·ẑ) (blue curve), where m̂ is the unit vector in the direction of the FM magnetization. We argue that this biaxial symmetry of the antidamping SOT can arise from the planar Hall current generated by SOC in FM conductors [14,19]. The planar Hall effect results in a current of spin-polarized electrons of density J_PHE = ∆σ_AMR (m̂·E)m̂ flowing parallel to the FM magnetization, where E is the electric field applied to the FM (E ≈ E x̂) and ∆σ_AMR is the anisotropic part of the FM conductivity. Assuming the antidamping SOT originates from exchange of angular momentum between the FM and NM layers via spin currents, only the component of J_PHE normal to the NM/FM interface can contribute to the SOT:

J^z_PHE = ẑ·J_PHE = ∆σ_AMR E (m̂·x̂)(m̂·ẑ).   (1)

The angular dependence of J^z_PHE is identical to the measured angular dependence of the antidamping SOT shown in Fig. 1F, which suggests that this SOT originates from the planar Hall current. When J^z_PHE < 0, the planar Hall current drives electrons with magnetic moments aligned with the FM magnetization (black arrows) from the FM layer into the NM layer [14], as shown in Fig. 3A. In a steady state, the net electron current across the NM/FM interface is zero, which implies a backflow of electrons from the NM layer to the FM layer. The backflow current from a NM layer with strong spin flip scattering is weakly spin polarized, which results in a net pure spin current density Q^z_PHE flowing in the −ẑ direction across the NM/FM interface, which acts as negative damping. Such a spin current changes between negative and positive damping under a change in the sign of J_dc, as shown in Fig. 3B, where Q^z_PHE now flows in the +ẑ direction across the NM/FM interface. Since Q^z_PHE changes sign upon a 90° rotation of m̂ in the xz plane (Eq. (1)), the effect of PHT changes from positive to negative damping upon such a rotation, in agreement with the data in Fig.
To understand the effect of the NM layer on antidamping SOTs, we measure PHT and SHT in a series of Ta/NM/FM/Ta nanowires that employ different NM layers with similar sheet resistances: Au(3.9 nm), Pt(7.0 nm), Pd(8.0 nm) [15]. The angular dependence of d∆H/dJ dc quantifying an antidamping SOT is measured in the xy and xz planes. Fig. 3E shows that the magnitude of the 360 • -periodic antidamping SHT in the xy plane strongly depends on the NM material, in agreement with previous studies [21]. The data in Fig. 3F demonstrates that the magnitude of the 180 • -periodic PHT is nearly the same for all three NM layers, consistent with PHT arising from spin current generated by the FM layer rather than the NM layer, which merely serves as a good spin sink [15]. Fig. 3E and Fig. 3F clearly demonstrate that the magnitudes of SHT and PHT are not correlated, which further supports the difference in the mechanisms giving rise to these SOTs.
It is expected from Eq. (1) that the magnitude of PHT is proportional to the magnitude of AMR in the FM layer. We find that the AMR ratio in our Ta/Au/FM/Ta multilayer increases by a factor of 1.8 upon cooling the sample from 295 K to 77 K [15]. At the same time, the critical sheet current density J c decreases by a factor of 1.8 (from −4.4 × 10 4 A/m at 295 K to −2.5 × 10 4 A/m at 77 K as measured by ST-FMR at 9 GHz [15]). As J c is inversely proportional to the PHT magnitude, these data strongly support the proportionality of PHT and AMR. The strong increase of the PHT magnitude upon cooling is in sharp contrast to that of SHT known to decrease with decreasing temperature [6].
Recently, PHT and the anomalous Hall torque (AHT) were predicted in FM/NM/FM trilayers, where the planar and anomalous Hall currents generated in one FM layer apply torques to magnetization of the other FM layer [19]. Experiments show that current-driven coupling between two FM layers in a FM/NM/FM trilayer can be achieved via SOTs generated at the NM/FM interfaces [22]. Our work demonstrates that a strong antidamping SOT originating from current in the FM layer and acting on magnetization of the same FM layer can arise in a NM/FM bilayer.
While our data demonstrate that the PHT symmetry and magnitude are determined by the symmetry and magnitude of the planar Hall current in the FM, the microscopic mechanism giving rise to PHT requires further theoretical understanding. Spin polarization of the planar Hall currentp in the FM is collinear with the FM magnetization, and thus cannot give rise to the conventional SOTs proportional tom×p orm×(p×m) [20]. It is, however, possible thatp rotates upon traversing the NM/FM interface and develops a component transverse tom, which can generate an antidamping SOT. Such rotation of the current polarization can arise from interfacial SOC scattering [23] or spin swapping [24]. Alternatively, antidamping PHT can be produced from a longitudinal spin current (p m) via a magnonic mechanism similar to that discussed and demonstrated in the context of spin Seebeck torque in NM/FM bilayers [25,26]. Further theoretical and experimental work is needed to understand the microscopic mechanism of PHT.
In summary, we observe a biaxial antidamping spin-orbit torque arising from planar Hall current in bilayers of ferromagnetic and nonmagnetic metals, and demonstrate operation of a spin torque nano-oscillator driven by this planar Hall torque. We expect the planar Hall torque to play a significant role in spin-orbit torque switching of magnetization [27] as well as in current-driven domain wall [28,29] and skyrmion [30] motion in bilayers of ferromagnetic and nonmagnetic materials. Since the Ta(3 nm) seed layer and Ta(4 nm) capping layer are common to all structures, we refer to type 1 as NM/FM and type 2 as NM/FM/NM structures. The Ta seed layer is employed to promote growth of a smooth multilayer [31] and the Ta capping layer prevents the multilayer oxidation. The composite FM layer is a superlattice of exchange coupled Co and Ni layers: Co(0.85 nm)/Ni(1.28 nm)/Co(0.85 nm)/Ni(1.28 nm)/Co(0.85 nm). We have chosen the Co/Ni superlattice due to its large PHE and AMR [32] as well as its significant perpendicular magnetic anisotropy (PMA) [17,33]. The thicknesses of the Co and Ni layers in the composite FM are chosen to nearly balance the easy plane magnetic shape anisotropy of the FM film by PMA in order to be able to saturate the FM magnetization in any direction by a small magnetic field. We employ Pt, Pd, and Au as the NM layers. In type 1 multilayers, we use NM = Pt(7.0 nm), Pd(8.0 nm), or Au(3.9 nm), with the NM thickness chosen to keep the NM layer sheet resistance nearly the same. For type 2 multilayer, we employ Au(2.5 nm) as the bottom NM material and Au(1.5 nm) as the top NM layer. The thickness of the top and bottom NM layers are chosen to be different in order to generate a non-zero microwave Oersted field applied to the FM magnetization in ST-FMR measurements, which strongly enhances the amplitude of the magnetic oscillations and the signal-to-noise ratio of the ST-FMR signal. We make sheet resistance measurements of magnetic multilayers by the collinear four point probe technique [34]. Four equidistant probes make contact along the symmetry axis to 10 × 10 mm 2 film samples, as shown in Fig. S1. The resistances R A = V 23 /I 14 , R B = V 24 /I 13 , and R C = V 43 /I 12 are measured by means of 4 terminal source-meter. The subscripts on I and V indicate the probes where current is supplied and voltage is measured; for example, for R A , a direct current I is supplied between points 1 and 4 and the voltage is measured at points 2 and 3. R A , R B , and R C must satisfy the following two relations [35]:
SHEET RESISTANCE MEASUREMENTS
where R_S is the sheet resistance of the film. Small deviations in probe spacing are corrected by averaging symmetric measurements of each resistance, and thermal voltages/offsets are accounted for by switching the voltage leads. The measured sheet resistances of the various multilayer films are summarized in Table S1. The current densities in each layer can be determined from the parallel resistor model. The top Ta(4 nm) layer is taken to have the same resistivity as the bottom Ta(3 nm) layer because approximately 1-2 nm of the top Ta layer oxidizes under ambient conditions [36]. The data in Table S1 demonstrate that the Ta layers have much higher sheet resistances than the other layers in the multilayer stack, and thus the electric current density in the Ta layers is small in all our measurements.
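As an illustration of the parallel-resistor bookkeeping described above, the short Python sketch below splits a total current between the layers of a stack from their thicknesses and resistivities. The layer resistivities and thicknesses used here are assumed placeholder values, not the measured parameters of Table S1.

```python
# Minimal sketch of the parallel-resistor model used to split the total
# current between the layers of a multilayer stack. The resistivities and
# thicknesses below are illustrative placeholders, not measured values.

def layer_current_fractions(layers):
    """layers: list of (name, resistivity_ohm_m, thickness_m).
    Returns the fraction of the total current carried by each layer."""
    # Sheet conductance of each layer: G_i = t_i / rho_i
    conductances = {name: t / rho for name, rho, t in layers}
    total = sum(conductances.values())
    return {name: g / total for name, g in conductances.items()}

stack = [
    ("Ta seed",  190e-8, 3.0e-9),   # assumed Ta resistivity (ohm*m)
    ("Au",         4e-8, 3.9e-9),   # assumed Au resistivity
    ("Co/Ni FM",  30e-8, 5.11e-9),  # assumed average FM resistivity
    ("Ta cap",   190e-8, 2.0e-9),   # ~2 nm left conductive after oxidation
]

for name, frac in layer_current_fractions(stack).items():
    print(f"{name:10s} carries {100 * frac:5.1f}% of the current")
```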
BROADBAND FERROMAGNETIC RESONANCE
We employ a conventional broadband ferromagnetic resonance (FMR) technique to measure magnetic damping in the NM/FM and NM/FM/NM multilayer films in order to characterize spin pumping across the NM/FM interfaces and the spin sink efficiencies of the NM layers. FMR measurements are carried out at room temperature using a broadband microwave generator, a coplanar waveguide, and a planar-doped detector diode. The measurements are performed at discrete frequencies in an in-plane field-swept, field-modulated configuration, as detailed in Ref. [37]. The measured FMR data are described by an admixture of the real (χ′) and imaginary (χ″) components of the complex transverse magnetic susceptibility, χ = χ′ + iχ″. The FMR data are fit as described in Refs. [37,38].
The Landau-Lifshitz-Gilbert (LLG) equation describes the magnetization dynamics in the FMR experiment:

dM/dt = −γ M × H_eff + α m̂ × dM/dt,

where M is the instantaneous magnetization vector with magnitude M_s, m̂ is the unit vector parallel to M, H_eff is the sum of internal and external magnetic fields, γ = gµ_B/ħ is the absolute value of the gyromagnetic ratio, g is the Landé g-factor, µ_B is the Bohr magneton, ħ is the reduced Planck constant, and α is the dimensionless Gilbert damping parameter. The samples studied are polycrystalline and show negligible in-plane anisotropy. The in-plane resonance condition is

(ω/γ)² = H_FMR (H_FMR + 4πM_eff),   (S5)

where ω = 2πf is the microwave angular frequency, H_FMR is the resonance field, 4πM_eff = 4πM_s − 2K_U/M_s, and K_U is the perpendicular-to-film-plane uniaxial anisotropy. Figure S2A shows example data of H_FMR as a function of frequency. The data are fit using equation S5 to extract 4πM_eff and g, which are tabulated in Table S2.
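The sketch below illustrates, on synthetic data, how H_FMR(f) can be fit with the in-plane Kittel relation (S5) to extract 4πM_eff and g; it is not the analysis code used for Table S2, and all numerical values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit H_FMR(f) data with the in-plane Kittel relation (S5):
#   f = (gamma / 2pi) * sqrt(H * (H + 4piMeff)),  gamma = g * mu_B / hbar.
mu_B = 9.274e-21      # erg/G
hbar = 1.0546e-27     # erg*s

def kittel_f(H, four_pi_Meff, g):
    gamma = g * mu_B / hbar                                  # rad/(s*G)
    return (gamma / (2 * np.pi)) * np.sqrt(H * (H + four_pi_Meff)) / 1e9  # GHz

# Synthetic "measured" resonance fields and frequencies
H_FMR = np.array([500.0, 1000.0, 2000.0, 3000.0, 4000.0])    # Oe
f_GHz = kittel_f(H_FMR, 2000.0, 2.17) + np.random.normal(0, 0.02, H_FMR.size)

popt, pcov = curve_fit(kittel_f, H_FMR, f_GHz, p0=(1500.0, 2.1))
print(f"4piMeff = {popt[0]:.0f} G, g = {popt[1]:.3f}")
```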
The measured FMR linewidth, defined as the half-width of the resonance curve, is well described by Gilbert-like damping:

∆H = ∆H(0) + αω/γ,   (S6)

where ω is the microwave angular frequency and ∆H(0) is the zero-frequency line broadening due to long-range magnetic inhomogeneity [39][40][41]. Figure S2B shows example data of ∆H as a function of frequency. The data are fit using equation S6 to extract the parameters α and ∆H(0), which are tabulated in Table S2. The values of α in Table S2 show that the Ta/FM/Ta sample has the lowest damping. This means that spin pumping is smallest at the Ta/FM and FM/Ta interfaces [42][43][44]. These data imply that the spin current density across the FM/Ta interface is low and thus PHT arising from net spin current across this interface is small. Adding a Au insertion layer at the bottom Ta/FM interface, Ta/FM/Ta → Ta/Au/FM/Ta, results in a significant increase in the spin pumping induced damping. It is noteworthy that the thickness of the Au(3.9 nm) layer is much less than the spin diffusion length in Au (30-60 nm), suggesting that the Au/Ta bilayer acts as an efficient spin sink. Furthermore, adding another Au layer at the top FM/Ta interface gives a further increase in spin pumping induced damping. This shows that the Au layer in the Au/Ta bilayer facilitates efficient spin transfer from the FM layer. The values of α in the Pt/FM and Pd/FM structures are high, indicating that the Pt and Pd layers are efficient spin current sinks.
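A minimal sketch, on assumed synthetic data, of the linear fit of linewidth versus frequency (equation S6) used to extract α and ∆H(0):

```python
import numpy as np

# Linear fit of FMR linewidth versus frequency (equation S6):
#   dH = dH(0) + (alpha / gamma) * omega, with omega = 2*pi*f.
gamma = 1.91e7                                # rad/(s*Oe), assumed (g ~ 2.17)
f = np.array([6, 8, 10, 12, 14, 16]) * 1e9    # Hz
dH = 8.0 + (0.03 / gamma) * 2 * np.pi * f     # Oe, synthetic: alpha = 0.03, dH(0) = 8 Oe

slope, intercept = np.polyfit(f, dH, 1)
alpha = slope * gamma / (2 * np.pi)
print(f"alpha = {alpha:.3f}, dH(0) = {intercept:.1f} Oe")
```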
The observation of low damping at the FM/Ta interface is somewhat surprising, as much of the literature has shown that many FM/Ta interfaces exhibit significant spin pumping induced damping [36,45], i.e. Ta acts as an efficient spin sink when adjacent to a FM. However, there are other cases where spin pumping directly into Ta did not occur. In one of the earliest spin pumping experiments on NM/NiFe/NM films, Mizukami et al. [46] showed large interface damping when NM = Pt, Pd, but no interface damping when NM = Ta. A similar effect was seen by Liu et al. [3] at the Ta/CoFeB interface. More recently, Singh et al. [47] have reported spin pumping experiments in Ta/NM1/Co/NM2/Ta, Ta/Co/Ta, and Cu/Ta/Cu samples, where NM1 and NM2 are various combinations of Pt and Au layers. Their results are consistent with our data in Table S2 and show that, under certain sample growth conditions, spin transfer from Co to Ta at the Co/Ta interfaces is inefficient.
SPIN TORQUE FERROMAGNETIC RESONANCE
We measure the ferromagnetic resonance linewidth as a function of direct current bias in the nanowire devices by spin-torque ferromagnetic resonance (ST-FMR) [48,49]. In this method, a rectified voltage V_mix generated by the sample in response to the applied microwave current is measured as a function of the drive frequency and external magnetic field [50]. Resonances in V_mix are observed at the frequency and field values corresponding to spin wave eigenmodes of the system [51]. To improve the sensitivity of the method, we modulate the applied magnetic field and use lock-in detection to measure the ac voltage Ṽ_mix generated by the sample at the frequency of the magnetic field modulation, as illustrated in Fig. S3 [18]. This signal is proportional to the magnetic field derivative of the rectified voltage: Ṽ_mix(H) ∼ dV_mix(H)/dH.
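As an illustration of the field-modulated detection just described, the sketch below assumes V_mix is the usual sum of symmetric and antisymmetric Lorentzians and fits a synthetic Ṽ_mix ∼ dV_mix/dH trace to recover the resonance field, linewidth, and the two amplitudes; all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Field-modulated ST-FMR: the measured signal is proportional to dV_mix/dH,
# where V_mix(H) = S*W^2/(d^2+W^2) + A*W*d/(d^2+W^2), with d = H - H0.
def dvmix_dH(H, S, A, H0, W):
    d = H - H0
    dL  = -2 * W**2 * d / (d**2 + W**2)**2        # derivative of symmetric part
    dAL = W * (W**2 - d**2) / (d**2 + W**2)**2    # derivative of antisymmetric part
    return S * dL + A * dAL

H = np.linspace(500, 1500, 401)                   # Oe
data = dvmix_dH(H, 2.0, 1.0, 1000.0, 60.0) + np.random.normal(0, 0.002, H.size)

popt, _ = curve_fit(dvmix_dH, H, data, p0=(1.0, 1.0, 950.0, 50.0))
print("S, A, H0 (Oe), W (Oe) =", np.round(popt, 2))
```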
GENERATED MICROWAVE POWER AND THE CRITICAL CURRENT
When the direct current I_dc applied to the nanowire spin torque nano-oscillator (STNO) exceeds a critical value I_c, the magnetization of the FM layer enters an auto-oscillatory state, which gives rise to microwave power generation by the STNO device. We find that several spin wave eigenmodes of the nanowire enter the auto-oscillatory state; however, most of the microwave power is generated by auto-oscillations of the lowest-frequency spin wave mode M1. The integrated power P_int (blue symbols) emitted by mode M1 in the Ta(3 nm)/Au(3.9 nm)/FM/Ta(4 nm) nanowire device is shown in Fig. S4A. In this report, the power is expressed as power delivered to a 50 Ω load and is corrected for frequency-dependent attenuation and amplification in the microwave measurement circuit. The data in Fig. S4A are collected at T = 77 K in a 1.6 kOe magnetic field applied at θ = 225° in the xz plane. We observe a large increase in the emitted microwave power above the background level for I_dc < −0.7 mA. A more precise evaluation of I_c can be obtained by fitting the inverse of the integrated power (red symbols in Fig. S4A) to a straight line for sub-critical values of I_dc [9]. The critical current obtained via such a procedure from the data in Fig. S4A is I_c = −0.73 mA. Figure S4B shows the dependence of I_c on the magnetic field H applied at θ = 225° in the xz plane. Apart from the low-field regime, this dependence is well fit by a straight line. This linear dependence of I_c on H is expected for our devices because the magnetic anisotropy of the nanowire is relatively small [9,52]. In the low-field regime, the magnetization of the nanowire starts to deviate from the applied field direction that maximizes the PHT efficiency (θ = 225°), which leads to an increase of I_c. The microwave power emitted by this STNO in the M1 mode is approximately 60 pW at I_dc = −1.3 mA, which significantly exceeds the output power of spin Hall STNOs measured at higher currents [6]. The reason for the improved efficiency of the PHT STNO is the angular dependence of AMR, which maximizes the amplitude of resistance oscillations for magnetization making a 45° angle with respect to the electric current direction. This magnetization direction in the xz plane is also the direction of maximum efficiency of the PHT. This is to be contrasted with spin Hall STNOs, for which the direction of maximum negative damping efficiency (ŷ) minimizes the amplitude of AMR oscillations.
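The sketch below illustrates the I_c extraction described above: for sub-critical currents the inverse of the integrated power is fit to a straight line and extrapolated to zero. The data are synthetic, constructed only to reproduce the quoted I_c = −0.73 mA.

```python
import numpy as np

# Extract the critical current I_c from the inverse integrated power:
# for sub-critical currents, 1/P_int is approximately linear in I_dc and
# extrapolates to zero at I_dc = I_c.
I_dc = np.array([-0.40, -0.45, -0.50, -0.55, -0.60, -0.65])   # mA, sub-critical
I_c_true = -0.73                                              # assumed value
P_int = 1.0 / (1.0 - I_dc / I_c_true)                         # arbitrary units

slope, intercept = np.polyfit(I_dc, 1.0 / P_int, 1)
I_c = -intercept / slope          # 1/P_int crosses zero at the critical current
print(f"I_c = {I_c:.2f} mA")
```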
RELATION BETWEEN PLANAR HALL TORQUE AND ANISOTROPIC MAGNETORESISTANCE
Both PHT and AMR arise from the planar Hall current in the FM layer, and thus PHT is expected to increase with increasing AMR. Since AMR in our NM/FM systems strongly depends on temperature, we can test the relation between PHT and AMR by measuring their temperature dependence.
The angular dependence of the normalized resistance of a Ta(3 nm)/Au(3.9 nm)/FM/Ta(4 nm) multilayer as a function of the direction of a 4 kOe saturating magnetic field applied in the plane of the sample is shown in Fig. S5A. The data reveal that the AMR ratio ∆ρ_AMR/ρ of the multilayer increases by 80% at T = 77 K compared to its room temperature value: (∆ρ_AMR/ρ)_295 K = 0.0116 and (∆ρ_AMR/ρ)_77 K = 0.0209, so that

(∆ρ_AMR/ρ)_77 K / (∆ρ_AMR/ρ)_295 K = 1.80.   (S7)

Figure S5B shows the critical sheet current density for the onset of auto-oscillations J_c measured by ST-FMR in the Ta(3 nm)/Au(3.9 nm)/FM/Ta(4 nm) nanowire device at T = 295 K and T = 77 K. This J_c, measured at 9 GHz, decreases from J_c,295 K = −4.4 × 10⁴ A/m at room temperature to J_c,77 K = −2.5 × 10⁴ A/m at 77 K. The value of J_c,77 K measured by ST-FMR (at 9 GHz, corresponding to H = 2.83 kOe) is in agreement with the value of J_c obtained from the microwave emission measurements at this magnetic field value, as shown in Fig. S4B. The observed temperature-induced variation of J_c is consistent with the expected increase of the PHT efficiency due to the low-temperature increase of the planar Hall current and AMR.
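A short sketch of how the AMR ratio can be extracted from the angular dependence of the resistance, assuming the usual R(φ) = R_⊥ + ∆R cos²φ form; the numbers are illustrative placeholders, not the Fig. S5A data.

```python
import numpy as np
from scipy.optimize import curve_fit

# AMR of the multilayer: resistance versus in-plane field angle follows
# R(phi) = R_perp + dR * cos^2(phi); the AMR ratio is dR / R_perp.
def R_amr(phi_deg, R_perp, dR):
    phi = np.deg2rad(phi_deg)
    return R_perp + dR * np.cos(phi) ** 2

phi = np.linspace(0, 360, 73)
data = R_amr(phi, 100.0, 1.16) + np.random.normal(0, 0.01, phi.size)  # synthetic

(R_perp, dR), _ = curve_fit(R_amr, phi, data, p0=(90.0, 1.0))
print(f"AMR ratio = {dR / R_perp:.4f}")
```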
The critical sheet current density can be approximated by [52]: where e is the electron charge, ħ is the reduced Planck constant, d is the FM layer thickness, ω is the oscillation angular frequency, γ is the gyromagnetic ratio, and η is a PHT efficiency parameter characterizing conversion of the sheet current density J_dc into the spin current density Q_z^PHE.
"year": 2017,
"sha1": "04f75493c4f4c8348608f2a2f575d1d09960d42d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1712.07335",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d71abb87c90f01f4b1daab7b857974c2148c41d6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Braincase and endocranium of the Placodont Parahenodus atancensis
Parahenodus atancensis de Miguel Chaves, Ortega & Pérez-García, 2018 is a recently described bizarre cyamodontoid placodont, based on a partial but well-preserved Spanish Upper Triassic skull. It was identified as the sister taxon of the highly specialized German form Henodus chelyops Huene, 1936. The use of micro-computed tomography has significantly increased knowledge of the cranial anatomy of Parahenodus atancensis through the characterization of several bones and structures previously unknown for this taxon, as well as by providing a partial reconstruction of its brain endocast and associated endocranial structures, which are poorly known in most Triassic sauropterygians. In addition, we identify several synapomorphies of the braincase of Cyamodontoidea so far unknown in Henodontidae, improving knowledge about this clade. The study of the endocranium and neurosensory structures of Parahenodus atancensis suggests a relatively lower reliance on vision, the pineal system and the pituitary than in other Triassic sauropterygians. RÉSUMÉ: The braincase and endocranium of the placodont Parahenodus atancensis de Miguel Chaves, Ortega & Pérez-García, 2018, a representative of the highly specialized group Henodontidae. Parahenodus atancensis de Miguel Chaves, Ortega & Pérez-García, 2018 is a recently described cyamodontoid placodont, based on a partial but well-preserved skull from the Upper Triassic of Spain.
INTRODUCTION
Placodonts were a clade of mostly durophagous Triassic sauropterygians that inhabited shallow waters of the Paleotethys Sea; their fossil record ranges from the Anisian (Middle Triassic) to the Rhaetian (Late Triassic) (Rieppel 2000; Neenan et al. 2013, 2015; Klein et al. 2015). Thus, remains of placodonts have been recovered in Europe (e.g. Rieppel 2000; Klein & Scheyer 2014; Neenan & Scheyer 2014), the Middle East (Haas 1975; Rieppel et al. 1999; Vickers-Rich et al. 1999; Rieppel 2002a; Kear et al. 2010) and China (Li 2000; Li & Rieppel 2002; Jiang et al. 2008; Zhao et al. 2008; Neenan et al. 2015; Wang et al. 2018, 2019). The members of Placodontia show two morphotypes: unarmored placodonts or placodontoids, and armored placodonts or cyamodontoids. Whereas the monophyly of Cyamodontoidea is well established, the phylogenetic relationships within Placodontoidea are still the object of debate, the group being currently considered monophyletic by several authors, but paraphyletic by others (Wang et al. 2018, 2019). Henodus chelyops Huene, 1936 is a bizarre cyamodontoid placodont exclusively found in the Carnian (Upper Triassic) of Tübingen (Germany) (Huene 1936). It presents some exclusive adaptations that suggest a very specialized trophic niche (i.e., herbivory and filter-feeding), very different from the durophagy of the other members of Placodontia (Reif & Stein 1999; Rieppel 2002b; Naish 2004), and therefore its phylogenetic position has traditionally been problematic. A recently described cyamodontoid placodont from the Upper Triassic fossil site of El Atance (Guadalajara, Spain), Parahenodus atancensis de Miguel Chaves, Ortega & Pérez-García, 2018, has been recognized as the sister taxon of Henodus chelyops. Parahenodus atancensis shares some highly derived characters with Henodus chelyops, but it also presents some primitive conditions for other characters, shared with other less derived members of Cyamodontoidea. Thus, the discovery of Parahenodus atancensis provided relevant information about the acquisition of these autapomorphic characters in Henodus chelyops and the evolution of the clade Henodontidae.

Although some nothosaurian endocasts were cited but not described at the end of the nineteenth century (Koken 1890, 1893a), the first work on the neuroanatomy of the Triassic members of Sauropterygia goes back to the study and description of an endocast of Nothosaurus mirabilis Münster, 1834 (Edinger 1921). Several works have focused on the study of the braincase and neurosensory structures of Triassic sauropterygians for almost a century, including nothosauroids (Gorce 1960; Rieppel 1994) and placodonts (Edinger 1925; Hopson 1979; Rieppel 1996, 2001; Nosotti & Rieppel 2002). In addition, the use of computed and micro-computed tomographic scanning has been incorporated into the study of the cranial morphology and braincase of this group in recent years (Neenan et al. 2013; Neenan & Scheyer 2014; Marquez-Aliaga et al. 2019; Wang et al. 2019). However, a detailed study and reconstruction of the brain and the neurosensory and neurovascular structures has only been carried out in the placodont Placodus gigas Agassiz, 1833 (Neenan & Scheyer 2012), and in the nothosaur Nothosaurus marchicus Koken, 1893 (Voeten et al. 2018). The use of micro-computed tomographic scanning has also been recently applied to the reconstruction of the labyrinth of several sauropterygian taxa (Neenan et al. 2017).

Thus, we present a detailed description of the skull and braincase of Parahenodus atancensis using micro-computed tomography, providing abundant new information on the cranial anatomy of this taxon and highlighting those characters that are not observable in an external view. We also provide a reconstruction of the cranial endocast of Parahenodus atancensis, corresponding to the first published three-dimensional reconstruction of a cyamodontoid placodont endocranium and neurosensory structures. MUPA ATZ0104 is an undeformed partial skull 63 mm long and 73 mm wide, which preserves the parietal table, and the partial right orbit, palate and occiput (Fig. 1). However, it lacks the rostrum, most of the orbits and the left side of the skull table. The skull is internally filled with marl.
MUPA-ATZ
MUPA ATZ0104 was scanned with a high-resolution Nikon XT H-160 scanner, in the Servicio de Técnicas No Destructivas: Microscopía Electrónica y Confocal y Espectroscopía ct-Scan, of the Museo Nacional de Ciencias Naturales (Madrid, Spain). The scanning used a voltage of 152 kV and a current of 49 μA. The inter-slice spacing was 0.08 mm. The raw data were imported into segmentation and 3D editing software for segmentation, visualization and analysis. An interactive 3D PDF of the skull bones, brain and associated structures is included as Electronic Supplementary Material (Appendices).
NEW INSIGHTS INTO THE OSSEOUS ANATOMY OF THE PARAHENODUS ATANCENSIS SKULL
A detailed description of the external morphology of the holotype of Parahenodus atancensis was previously provided. However, the application of CT technology has allowed us to complete that description by characterizing the inner morphology of the braincase, in addition to performing the three-dimensional reconstruction of each of the osseous elements preserved in this specimen (Fig. 2).

New osseous information relative to the skull of Parahenodus atancensis is described in this section. The quadratojugals contact the complete lateral and dorsolateral surfaces of the quadrates (Fig. 2A, C, E, F). In ventral view, the contact between the preserved jugal and maxilla is located near the external margin of the palate (Fig. 2B), and is confirmed as being caused by a medial expansion of the jugal, as previously suggested. This medial expansion of the right jugal also contacts the palatine ventromedially. This study allows a detailed characterization of the jugals of this taxon. In ventral view, the jugal contacts the quadratojugal posterolaterally, the postorbital posteromedially, and the squamosal between both sutures. Furthermore, the jugal defines the anterior margin of the preserved subtemporal fossa (Fig. 2B). The medial margin of the subtemporal fossa is formed by the pterygoid, the palatine and the medial projection of the jugal. Due to breakage, the contact between the palatines and the quadrates cannot be accurately defined.

The occiput and the posterior part of the braincase of MUPA ATZ0104 are badly damaged. Thus, part of the supraoccipital, the exoccipitals and most of the basioccipital, including the condyle, are lost, and most of the foramina present in the posterior surface of the skull cannot be observed either (Fig. 2D). The supraoccipital defines the dorsal margin of the foramen magnum, and it is in contact with the parietals. The supraoccipital would also have been in contact with the opisthotic, but due to the preservation of the fossil this area of contact is missing. The preserved right opisthotic forms the paroccipital process, which includes a ventral tuber. The opisthotic also presents a lateral ascending ramus that contacts a descending process of the squamosal (Fig. 2D). It also contacts the quadrate. The anterior surface of the opisthotic is not well preserved.
The squamosals possess a small projection that expands anteriorly, the otic process (sensu Nosotti & Pinna 1993), which contacts the posterolateral region of the prootics (Fig. 3). Another small medioventral projection of the squamosals contacts the posterodorsal expansion of the epipterygoids (Fig. 3).

The prootics are short elements with a convex lateral surface and a deeply concave medial one, where they possess a cavity in which the endosseous labyrinths (inner ear) are located. A posterolateral projection of the prootics contacts the otic process of the squamosals (Fig. 3). This projection of the prootics also defines the medial margin of the palatoquadrate cartilage recess, and has a groove for this cartilage (see below). This groove for the palatoquadrate cartilage is also present in the quadrates (Fig. 4). The prootics are located anterior to the opisthotics (Fig. 3). However, due to the bad preservation of the anterior surface of the preserved right opisthotic, the limits of the otic capsule cannot be well defined. The prootics also contact the quadrates ventrolaterally, and are located posterior to the epipterygoids (Fig. 4). The epipterygoids are complex elements that form most of the lateral walls of the preserved braincase (Fig. 3). They are anteriorly convex, and strongly concave posteriorly, with a lateroventral expansion that extends over the palatines (Fig. 4). The epipterygoids have a longitudinal furrow that receives a descending process of the parietals (Fig. 3). A well-developed posterodorsal projection of the epipterygoids contacts a medioventral expansion of the squamosals (Fig. 3). The epipterygoids contact the fused parietal dorsally, the prootics and the squamosals posteriorly, and the palatines, the pterygoids and the parabasisphenoid ventrally.

The parietals are fused into a single element, and they enclose the braincase dorsally (Fig. 2A). This unpaired parietal possesses two descending processes that are received by the epipterygoids. These descending processes posteriorly become a ventral thickening of the bone, where the parietal contacts the supraoccipital.
The right post-temporal fenestra has been identified in Parahenodus atancensis (Figs 2D; 3). It is defined posteroventrally by the opisthotic; posterodorsally, posteromedially, posterolaterally, dorsally and laterally by the squamosal; anterodorsally and anteromedially by the epipterygoid; and anteroventrally by the prootic (Figs 2D; 3). A foramen, the pteroccipital foramen, is present at the anterior region of the preserved right post-temporal fenestra; it opens ventrally into a short passage for the stapedial artery, anterior to the paroccipital process (Fig. 3). This pteroccipital foramen is delimited posteriorly by the opisthotic, laterally by the otic process of the squamosal, anteriorly by the prootic, and laterally by the squamosal and the epipterygoid.

A pair of palatoquadrate cartilage recesses has also been identified in the holotype of Parahenodus atancensis (Fig. 4). They are triangular clefts that would have been filled with cartilage in the living animal, connecting the quadrate and the epipterygoid. They are defined mostly by the palatines and the medial wing of the quadrates laterally; medially by the squamosals, the prootics and the epipterygoids; and ventrally by the quadrates, the palatines and the pterygoids. A groove for the cartilage is observed in the quadrates and the prootics.

The prootics and the epipterygoids also define the trigeminal foramina, which allow the exit of the trigeminal nerves (Fig. 4). The trigeminal foramina are limited anteriorly by the epipterygoids and posteriorly by the prootics.

Most of the small parabasisphenoid complex is also preserved, located anterior to the basioccipital (Fig. 5). It closes the braincase ventrally in the preserved area. It is V-shaped, with its lateral projections contacting the pterygoids and the epipterygoids. The sella turcica is small and shallow, being square-shaped in dorsal view. Two small cerebral carotid foramina are also present. A poorly developed dorsum sellae is observed at the posterior margin of the sella turcica.
MORPHOLOGY OF THE RECONSTRUCTED BRAIN, INNER EAR, AND NEUROSENSORY AND NEUROVASCULAR STRUCTURES
Due to the preservation of MUPA ATZ0104, only a small portion of the endocast can be identified (Fig. 6). The antorbital region of the skull is missing, so most of the forebrain (including the olfactory structures) could not be reconstructed. In the same way, the skull lacks most of the bones in the occipital region (i.e., the exoccipitals, part of the supraoccipital, the left opisthotic and most of the basioccipital). Thus, the reconstruction of the endocast of MUPA ATZ0104 includes the posterior area of the forebrain or prosencephalon, most of the midbrain or mesencephalon, and the anterior part of the hindbrain or rhombencephalon.

Only the posteriormost region of the forebrain of MUPA ATZ0104, where the pineal system is located, could be reconstructed (Fig. 7). The cerebrum has a flat dorsal surface and a trapezium-shaped section, the dorsal surface being longer than the ventral one (Fig. 7C, D). The pineal system in Parahenodus atancensis is represented by a very thin and laterally compressed passage, which connects the dorsal surface of the brain with a longitudinally narrow pineal foramen located anteriorly in the fused parietal (Fig. 7A, C, D).

The area where the pituitary (or hypophysis) would have been located is identified as a weakly developed rectangular structure, situated in the posteroventralmost part of the prosencephalon (Fig. 7B). It is settled over the sella turcica of the parabasisphenoid. It has two small lobes that lead to the two small cerebral carotid foramina of the sella turcica.
The reconstructed mesencephalon of Parahenodus atancensis exhibits a pair of strong lateral and symmetric compressions (Fig. 7A). These lateral compressions of the midbrain are well defined by the medial ascending wall of the epipterygoids. Each of these strong compressions could correspond to the cava epipterica. Optic lobes could not be clearly distinguished in the endocast of the midbrain of MUPA ATZ0104. Posterior to the cava epipterica, the mesencephalon expands laterally (Fig. 7A). The morphology of the mesencephalon and rhombencephalon in the area posterior to this point cannot be defined.

Only the anteriormost part of both the right and left endosseous labyrinths could be reconstructed (Fig. 7), due to the poor preservation of the right opisthotic and the lack of the left one, the otic capsules being incomplete. Both endosseous labyrinths are slightly distorted and medially rotated. Thus, only part of the anterior semicircular canal can be reconstructed, as well as part of the vestibule of the inner ear (Fig. 7E, F). They seem to be dorsoventrally short and anteroposteriorly elongated. However, the morphology of the anterior semicircular canal cannot be observed in detail due to the collapse of these canals and the poor preservation of the prootics.

Although most of the areas where the cranial nerves of MUPA ATZ0104 were located are lost, the canal for the trigeminal nerve (V) can be identified (Fig. 7). This canal is located anterior to the endosseous labyrinth, being enclosed between the epipterygoids and the prootics. The trigeminal nerve would have emerged from the braincase through the trigeminal foramen (see above). Ventrally to the brain, the paired carotid branches are recognized, corresponding to two long neurovascular canals that run longitudinally to the occipital foramina, which are not preserved in MUPA ATZ0104 (Fig. 7B, E, F). In fact, because of the loss of most of the occiput in the holotype of Parahenodus atancensis, we cannot reconstruct the morphology of the carotid arteries posterior to the pterygoids. The canals for the carotid branches are delimited ventrally and laterally by the pterygoids, medially by the basioccipital and the parabasisphenoid, and dorso-anteriorly by the parabasisphenoid. The carotids would emerge from the brain ventrally to the pineal organ (Fig. 7E, F). Posterior to this point, they would reach the sella turcica in the parabasisphenoid and, although it cannot be clearly observed due to the preservation, they would contact the pituitary through the small carotid foramina of the sella turcica.
DISCUSSION
The virtual reconstruction of the skull of the henodontid placodont Parahenodus atancensis, based on its holotype and so far only known specimen, provides new information about this taxon and improves our knowledge of the highly specialized clade Henodontidae. The ventral contact between the jugal and the maxilla of Parahenodus atancensis is recognized here as caused by a medial expansion of the jugal (Figs 2B; 8). This suture has not been described in any specimen of the other currently known representative of Henodontidae, Henodus chelyops, due to preservation issues (Rieppel 2001). A contact between the process of the squamosals and the posterodorsal process of the epipterygoids is present in Parahenodus atancensis (Fig. 3), this contact being also present in Cyamodus rostratus Münster, 1839, and Placochelys placodonta Jaekel, 1902 (Rieppel 2001), but unknown in the other cyamodontoids. The otic process of the squamosal (sensu Nosotti & Pinna 1993) is present in MUPA ATZ0104, being in contact with the prootic (Fig. 3). This neomorphic process is also observed in other cyamodontoids such as Placochelys placodonta, Cyamodus rostratus and Cyamodus orientalis Wang, Li, Scheyer & Zhao, 2019 (Rieppel 2001; Wang et al. 2019), but is unknown in Henodus chelyops. The epipterygoids of Parahenodus atancensis are identified as large and complex elements that define the braincase laterally (Fig. 3). Due to the preservation, the morphology and disposition of these bones in Henodus chelyops cannot be well defined, but the morphology in MUPA ATZ0104 is shared with that of other cyamodontoids. Thus, a broad dorsal process, the contact with the squamosal at the dorsal margin of the post-temporal fenestra, and the large suture with the palatines are shared amongst all of them (Rieppel 2001; Neenan et al. 2015). The presence of the palatoquadrate cartilage recess in the holotype of Parahenodus atancensis (i.e., a cleft filled with cartilage in the living animal that would connect the quadrate with the epipterygoid; Fig. 4) is shared with the other cyamodontoids, being considered a synapomorphy of Cyamodontoidea (Rieppel 2001; Neenan et al. 2015). It was also identified in Specimen I of Henodus chelyops (Rieppel 2001). The presence of a pteroccipital foramen (sensu Rieppel 2001) delimited by the opisthotic, the prootic, the squamosal and the epipterygoid (Fig. 3) is also a synapomorphy of Cyamodontoidea (Rieppel 2001; Neenan et al. 2015; Wang et al. 2019). The trigeminal foramen, delimited by the epipterygoid anteriorly and by the prootic posteriorly (Fig. 4), has been identified in a similar position and defined by the same bones in other cyamodontoid taxa (Nosotti & Pinna 1996; Rieppel 2001; Neenan & Scheyer 2014; Wang et al. 2019). By contrast, this foramen is totally enclosed by the prootic in the non-cyamodontoid placodont Placodus gigas (Neenan & Scheyer 2012). Information about the parabasisphenoid complex in cyamodontoid placodonts is limited, this complex having been described in detail only in Placochelys placodonta (Rieppel 2001) and Psephoderma alpinum Meyer, 1858 (Neenan & Scheyer 2014). Its morphology in Parahenodus atancensis is similar to those of both taxa, with a shallow sella turcica, small cerebral carotid foramina and a poorly developed dorsum sellae (Fig. 5). However, it is larger and more complex in the non-cyamodontoid Placodus gigas (Neenan & Scheyer 2012).
In fact, an unusual character in the latter is the notable distance between the sella turcica and the dorsum sellae (Nosotti & Rieppel 2002; Neenan & Scheyer 2012), which are usually located close to one another in cyamodontoid placodonts. Scarce information is so far available about the brain and senses of the placodonts and other Triassic sauropterygians. However, in spite of the fragmentary nature of MUPA ATZ0104, some information can be provided about the neuroanatomy of Parahenodus atancensis. The preserved region of MUPA ATZ0104 shows a long and tubular brain endocast (Fig. 7), as is the case in the eosauropterygian genus Nothosaurus (Koken 1893; Edinger 1921; Voeten et al. 2018), whereas the brain in Placodus gigas is bulkier (Nosotti & Rieppel 2002; Neenan & Scheyer 2012). The morphology of the skull can be related to those of the braincase and the brain (as suggested by Voeten et al. 2018), since the skulls of both nothosaurians and cyamodontoid placodonts are much more dorsoventrally compressed than that of Placodus gigas. As in the case of Nothosaurus marchicus (Voeten et al. 2018), Parahenodus atancensis lacks well-developed cerebral lobes, but unlike in this small nothosaur, no optic lobes are distinguishable in the endocast of MUPA ATZ0104. The absence of well-recognizable optic lobes in the midbrain of Parahenodus atancensis suggests poorer vision than in the nothosaurs and plesiosaurs, where these structures are well developed (Allemand et al. 2019). No optic lobes have been identified in Placodus gigas so far. The reconstructed pineal system in Parahenodus atancensis is very narrow (Fig. 7A, C, D), especially when compared with those of the Triassic sauropterygians Placodus gigas (Neenan & Scheyer 2012) and Nothosaurus marchicus (Voeten et al. 2018). This structure is much more developed in these two taxa, which suggests a lower dependence on the pineal organ in Parahenodus atancensis than in those forms, as well as a lower photoreceptive capability. Although the morphology of this pineal organ is unknown in Henodus chelyops, the pineal foramen is also mediolaterally compressed (e.g. Rieppel 2001), so a similar development of the pineal organ in this taxon is also expected.
The reconstructed pituitary in Parahenodus atancensis is weakly developed (Fig. 7B). A weakly developed pituitary can be inferred also in Placochelys placodonta and Psephoderma alpinum, based on the morphology of the parabasisphenoid complex (Rieppel 2001; Neenan & Scheyer 2014). A weak development of this structure has also been observed in Nothosaurus marchicus (Voeten et al. 2018). This contrasts with the identification of an unusually well-developed hypophysis in the placodont Placodus gigas (Neenan & Scheyer 2012), whose morphology could therefore be uncommon among Triassic sauropterygians, and also with the well-developed pituitary of other Sauropterygia lineages such as the plesiosaurs (Otero et al. 2018; Allemand et al. 2019).

In spite of its incompleteness, the holotype of Parahenodus atancensis also provides some information relative to its bony labyrinth (Fig. 7). This structure is recognized as slightly dorsoventrally flattened and anteroposteriorly enlarged (Fig. 7E, F). This morphology is shared with Placodus gigas (Neenan & Scheyer 2012) and Nothosaurus marchicus (Voeten et al. 2018), as well as with other Triassic sauropterygians like Simosaurus gaillardoti Meyer, 1842 (Neenan et al. 2017), all of which were inhabitants of nearshore and shallow waters. Whereas the bony labyrinth of the plesiosaurs is more derived as an adaptation to pelagic lifestyles, being similar to those of most of the cryptodiran marine turtles (see some exceptions in Evers et al. 2019), the morphology of this element in the Triassic sauropterygians resembles that of several extant aquatic reptiles, including crocodylians, freshwater turtles and marine squamates (Neenan et al. 2017). This implies that the reconstructed morphology of the bony labyrinth of Henodontidae (only known for Parahenodus atancensis) is similar to that of both extinct and extant shallow-water semi-aquatic reptiles, whose movement is strongly based on lateral undulations of the trunk and tail (Neenan et al. 2017).
CONCLUSIONS
The use of micro-computed tomography for the study of MUPA ATZ0104, a partial skull corresponding to the holotype and only known specimen of the Spanish Upper Triassic Parahenodus atancensis, provides the first three-dimensional reconstruction of a cyamodontoid placodont braincase and neural system. This virtual reconstruction allows the identification of several osseous elements of the skull so far unpublished or poorly known, such as the epipterygoids, the prootics, the parabasisphenoid complex, the otic process of the squamosal, the palatoquadrate cartilage recess, the trigeminal foramen and the pteroccipital foramen. Several synapomorphies of the braincase of the clade Cyamodontoidea, some of them so far unknown in Henodontidae, are recognized in Parahenodus atancensis, including the large and complex epipterygoids contacting the squamosal at the dorsal margin of the post-temporal fenestra and the palatines on their ventral surfaces; the presence of the palatoquadrate cartilage recess; and the presence of a pteroccipital foramen defined by the opisthotic, the prootic, the squamosal and the epipterygoid.

The reconstruction of the brain endocast and the associated structures of Parahenodus shows a lower reliance on vision and the pineal complex than in other known Triassic sauropterygians, as well as a weakly developed hypophysis; these characters were so far unknown for the highly specialized clade Henodontidae. The partial reconstruction of the bony labyrinth shows a morphology similar to that of other Triassic sauropterygians and several extant shallow-water semi-aquatic reptiles, which suggests that Parahenodus atancensis was a nearshore inhabitant, compatible with the lifestyle proposed for the members of Placodontia.
"year": 2020,
"sha1": "30ed86a88b68b7069f6ca6bf6577cc0161ddb15f",
"oa_license": null,
"oa_url": "https://sciencepress.mnhn.fr/sites/default/files/articles/pdf/comptes-rendus-palevol2020v19a10.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "764739b5b60e316fdf355dfad0d9ed0735cca2b8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
Deadlock of proctologic practice in Italy during COVID-19 pandemic: a national report from ProctoLock2020
Proctology is one of the surgical specialties that suffered the most during COVID-19 pandemic. Using data from a cross-sectional worldwide web survey, we aimed to snapshot the current status of proctologic practice in Italy with differences between three macro areas (North, Centre, South). Specialists affiliated to renowned scientific societies with an interest in coloproctology were invited to join a 27-item survey. Predictive power of respondents’ and hospitals’ demographics on the change of status of surgical activities was calculated. The study was registered at ClinicalTrials.gov (NCT 04392245). Of 299 respondents from Italy, 94 (40%) practiced in the North, 60 (25%) in the Centre and 82 (35%) in the South and Islands. The majority were men (79%), at consultant level (70%), with a mean age of 46.5 years, practicing in academic hospitals (39%), where a dedicated proctologist was readily available (68%). Southern respondents were more at risk of infection compared to those from the Centre (OR, 3.30; 95%CI 1.46; 7.47, P = 0.004), as were males (OR, 2.64; 95%CI 1.09; 6.37, P = 0.031) and those who routinely tested patients prior to surgery (OR, 3.02; 95%CI 1.39; 6.53, P = 0.005). The likelihood of ongoing surgical practice was higher in the South (OR 1.36, 95%CI 0.75; 2.46, P = 0.304) and in centers that were not fully dedicated to COVID-19 care (OR 4.00, 95%CI 1.88; 8.50, P < 0.001). The results of this survey highlight important factors contributing to the deadlock of proctologic practice in Italy and may inform the development of future management strategies. Electronic supplementary material The online version of this article (10.1007/s13304-020-00860-0) contains supplementary material, which is available to authorized users.
Introduction
COVID-19 pandemic has critically impacted the surgical world [1]. More than 28 million procedures would be cancelled or postponed during the 12-week peak according to a recent global expert-response study [2]. The vast majority (90%) of operations would be treating benign diseases, with an estimated overall 12-week cancellation rate of 72%. This scenario has strongly challenged proctologic practice, which includes a large spectrum of conditions with a significant psycho-socio-economic burden [3].
Detection of the novel coronavirus (SARS-COV-2) RNA in patients' stool samples and gastrointestinal epithelium has led to enhance infection control precautions [4]. Consequently, several guidelines have been developed to optimize treatment strategies while ensuring healthcare workers' safety by means of adequate personal protective equipment (PPE) [5][6][7]. However, the ever-changing situation observed in most countries has often hampered the attempts to put these guidelines into practice [8].
ProctoLock 2020 is a survey aimed to assess the current status of proctologic practice worldwide.
In our previous global report [9], the proportion of unaltered, reduced or fully stopped practice has been snapshotted in 69 countries.
The purpose of this study was to explore the impact of COVID-19 on proctologic practice in Italy, looking for differences between North, South and Central regions.
Materials and methods
Experts in the field who joined a previous qualitative study [10] (N = 492) were invited to complete a web survey. The survey link was sent to national scientific societies of interest to coloproctologists and disseminated to their members. All collaborators committed to further recruitment of participants by direct invitation.
A 27-item survey (namely, 'ProctoLock 2020'; Appendix 1) was designed and developed by the authors using an online platform ('Online surveys' [formerly BOS-Bristol Online Survey], developed by the University of Bristol) in accordance with the Checklist for Reporting Results of Internet E-Surveys (the CHERRIES statement) [11]. The finalized online survey was made available online from April 15th to 26th 2020. The survey aimed to capture the current status of proctologic practice worldwide, first exploring the overall changes in terms of resource allocation, and second assessing in more detail the various fields of application for both proctologic surgery (i.e. elective [oncological and non-oncological] and urgent) and outpatient practice, with a focus on sexually transmitted disease and pelvic floor clinics. The availability of anorectal physiology testing was also assessed. The study was registered at ClinicalTrials.gov (NCT 04392245).
Statistical analysis
Logistic models for binary or ordinal variables were performed to assess the association between respondents' preferences and their characteristics (adjusted odds ratio [OR]). Multivariable models were fitted using a predefined set of covariates which included respondents' and hospitals' demographics (i.e. geographical area, age, gender, type of hospital, hospital rearrangement, external facilities for proctologic surgery, use of PPE, pre-operative testing policies for COVID-19). The Brant test to check the proportional odds assumption was performed for the ordinal logistic model. No formal correction for multiple testing has been made, although we have critically assessed all P values < 0.05.
All analyses were performed using Stata 16 (StataCorp LLC, College Station, TX, USA).
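For readers who do not use Stata, the sketch below reproduces the structure of such a multivariable logistic model in Python/statsmodels; the variable names and the randomly generated data frame are illustrative assumptions, not the ProctoLock 2020 data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative multivariable logistic model: a binary outcome regressed on
# respondent/hospital covariates, with adjusted odds ratios and 95% CIs.
# The data frame below is random and only mimics the survey's structure.
rng = np.random.default_rng(0)
n = 299
df = pd.DataFrame({
    "positive": rng.integers(0, 2, n),                      # e.g. COVID-19 positivity
    "area": rng.choice(["North", "Centre", "South"], n),
    "male": rng.integers(0, 2, n),
    "preop_testing": rng.integers(0, 2, n),
    "age": rng.normal(46.5, 9, n),
})

model = smf.logit(
    "positive ~ C(area, Treatment(reference='Centre')) + male + preop_testing + age",
    data=df).fit(disp=0)

print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals for the ORs
```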
A multivariable logistic model showed that respondents from the South were more at risk of infection compared to those from the Centre (OR, 3.30; 95%CI 1.46; 7.47, P = 0.004), as were males (OR, 2.64; 95%CI 1.09; 6.37, P = 0.031) and those who routinely tested patients prior to surgery (OR, 3.02; 95%CI 1.39; 6.53, P = 0.005) compared to their counterparts.
The majority of Italian respondents worked in centres that were partially rearranged (N = 216 [72%]) to guarantee the assistance to COVID-19 patients, with a similar distribution between regions (Table 2).
More than half of respondents had modified the surgical informed consent. Forty percent (N = 116) of respondents had yet to reschedule patients waiting for surgery or an outpatient visit.
Compared to the rest of Europe, elective proctologic surgery in Italy was considerably reduced (with a test of heterogeneity at P = 0.026) (Fig. 1).
Among the 116 (39%) respondents who found flaws or delay in the management of oncological patients, the majority was from the North (N = 52 [56%]).
More than 82% (N = 247) of participants declared that elective non oncological surgery was fully stopped, with the main reasons being hospital directions and/or reduced referrals.
Following national or local hospital directions, outpatient activity was fully stopped or reduced in Italy according to 53% (N = 158) or 45% (N = 135) of respondents, respectively. The majority (127 [90%]) reported regular use of PPE during the visits.
Possible diagnostic delays resulting from a decreased outpatient activity concerned 265 [88.6%] respondents.
Discussion
In Italy (and Europe), the hunt for patient zero has proven unsuccessful and only served to fuel the confusion on the origin of the outbreak [12]. SARS-COV-2 spread across Europe following multiple paths and heterogeneously impacted on countries and between different geographical areas within the same nation.
The higher prevalence of male over female subjects was consistent with the results of our worldwide survey [9].
Although proctology has been recognized worldwide as a subspecialty, only two-thirds of Italian respondents reported the availability of a dedicated proctologist in their center.
The alarmingly high prevalence of COVID-19 positivity among Italian respondents (twice that of all healthcare workers) [13] might suggest that proctologists carry a higher risk of contagion compared to other specialists. Several case studies have reported gastrointestinal symptoms and/or evidence that some patients with COVID-19 have viable viral RNA in stool or gastrointestinal epithelium, suggesting the fecal-oral pathway as a further possible route of transmission [4,14,15]. Compared to those from the Centre, the prevalence of COVID-19 positivity among Southern respondents was three times higher. In this geographic area, PPE were less frequently deemed readily available and the likelihood of continuing the surgical activity was 36% higher. Such a worrying proportion of COVID-19 positive respondents in the South suggests that the different timing of the epidemic and the prompt lockdown measures put in place by the Italian government may have rescued this area from a potential catastrophe. As proof of resilience during troubled times, a significant number of respondents were redeployed to other activities [16].
More than 50% of respondents reported to have amended the surgical consent form, reflecting a great awareness of growing evidence from the literature about the increased operative risks in COVID-19 patients [17,18].
Compared to the rest of Europe, the reduction in elective surgical activity has been more pronounced in Italy. To further confirm the deadlock of proctologic practice in this country, almost 40% of respondents had yet to reschedule patients' outpatient visits or operations. While being unable to access healthcare services, many patients refrained from attending the emergency department due to the fear of being infected [19].
The suspension of oncological activity was reported by 39% of Italian respondents, with a peak of 56% in the North (the worst-hit area), where all activities were more likely to be put on hold and hospitals forced to shift all resources towards COVID-19 care. Most respondents were concerned about the negative effects of delaying care, with potentially irreversible consequences, especially for cancer patients [20].
As recently suggested [7], the outpatient/office surgical activity could have helped to diminish commitment to hospitals and optimize resource allocation in terms of operating spaces, staffing and beds. But this was not the case for at least two reasons, namely the full closure of all non-COVID-related activities (as per national directions) and the currently very limited experience with delivering this type of proctologic surgery. Undoubtedly, this should prompt health authorities and specialists to redesign and optimize the whole proctologic pathway across the national territory [21,22].
This study has some limitations that are commonly observed in survey-based studies (e.g. recall and selection bias). Nevertheless, the high percentage of consultants among respondents vouches for a satisfactory level of experience and supports the reliability of collected data.
The results of the ProctoLock 2020 survey highlighted key critical issues that have emerged during the COVID-19 pandemic worldwide, and in particular in Italy, thus building foundations for the future development of organizational solutions and nation-level initiatives.
"year": 2020,
"sha1": "8c558a0fbdbc4d5eee99e7d97645d5d7957a6a38",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13304-020-00860-0.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "8c558a0fbdbc4d5eee99e7d97645d5d7957a6a38",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Effect Analysis of Braden Scale on Pressure Ulcer in Community-Dwelling Older Adults
The research aimed to analyze the effect of the Braden Scale on the incidence of pressure ulcers in elderly people living at home, whether or not they were under the supervision of health workers. The research was an analytic study with a cross-sectional design. Using a purposive sampling technique, data collection was carried out in several areas of Bandung from October to November 2017. Multiple regression analysis was used to assess the effect of the Braden Scale on pressure ulcers, and the multiple linear regression model was also tested. The results show that sensory perception, humidity, activity, mobility, nutrition, and friction together account for 48,22% of the variation in pressure ulcer occurrence. Sensory perception, activity and friction have a significant influence on the incidence of pressure ulcers, whereas humidity, mobility, and nutrition do not.
I. INTRODUCTION
According to Government Regulation of the Republic of Indonesia Number 43 of 2004, an elderly person is someone who has reached the age of 60 years and above. The increase in the number of elderly people in Indonesia is a challenge with positive and negative impacts. It has a positive impact if the elderly population remains healthy, active, and productive. On the contrary, it has a negative impact if there are problems of declining health and income and an absence of social and environmental support for the elderly population. Based on population projection data, the Ministry of Health of the Republic of Indonesia estimates that the number of the elderly population in 2035 will reach 48,19 million, about 15% of the total population (Pusat Data dan Informasi Kementerian Kesehatan Republik Indonesia, 2018). According to Adioetomo, Howell, McPherson, and Priebe (2013), the percentage of elderly people in Indonesia is estimated to reach 23% by 2050. The data show that Indonesia will enter an era of aging population because the proportion of the elderly population exceeds 7% (Sahar, Riasmini, Kusumawati, & Erawati, 2017). Based on National Socio-Economic Survey data, the percentage of the elderly population has exceeded 7% since 2015, reaching 8,1%.
Biologically, the elderly population will experience a continuous aging process characterized by a decrease in physical endurance so that they are vulnerable to disease that can cause death. Therefore, the elderly population is one of the community groups that need health services the most. The aging factor strongly influences health complaints experienced by elderly people. Based on statistical data on the elderly population, as they get older, their health complaints increase (Pusat Data dan Informasi Kementerian Kesehatan Republik Indonesia, 2018). Nevertheless, the aging process still triggers physical, mental, social, and spiritual changes in elderly people (Chatterji, Byles, Cutler, Seeman, & Verdes, 2015). It causes an increase in the risk of various health problems (Singh et al., 2018).
One of the indicators used to measure health status is morbidity. According to Statistics Indonesia data in 2017, the morbidity rate for the elderly was 26,72%. From 2015 to 2017, the morbidity rate for the elderly continued to decrease, although the decline was relatively small (Badan Pusat Statistik, 2018). The increase in the elderly population brings challenges in the health sector. In community-dwelling older adults, various health problems arise. The goal of keeping elderly people healthy and active has become the main focus, as evidenced by more than 3.500 scientific articles discussing healthy and successful aging in 2016 (Michel & Sadana, 2017).
One problem found in elderly people is pressure ulcers, which decrease life quality (Sulidah & Susilowati, 2017). Pressure ulcers are injuries to the skin and underlying tissues, usually over a bony prominence, resulting from pressure or a combination of pressure and friction, according to the National Pressure Ulcer Advisory Panel-European Pressure Ulcer Advisory Panel (NPUAP-EPUAP) (National Pressure Ulcer Advisory Panel, European Pressure Ulcer Advisory Panel and Pan Pacific Pressure Injury Alliance, 2016). Recent research in Indonesia, specifically in the city of Bandung, shows that most elderly people in the community have one or more health complaints, and only half of them have access to health services. The incidence of pressure ulcers in elderly people at home is found to be 10,8%. More than half of the injuries found in respondents (61,4%) are category 1 injuries, generally in the area around the fingers, ankles, and knees.
One way to recognize the risk of pressure ulcers is the Braden Scale (Mukhtar, Sari, & Sari, 2019). Its reliability and validity have been tested in various types of hospitals and patients (Mizan, Rosa, & Yuniarti, 2016). It consists of six variables: sensory perception, humidity, activity, mobility, nutrition, and friction with the surface of the mattress (Zambonato, Assis, & Beghetto, 2013). The maximum score on the Braden Scale is 23. Scores above 20 indicate low risk, 16−20 moderate risk, 11−15 high risk, and less than 10 very high risk (Cox, 2012).
Pressure ulcers are associated with patient harm and a decrease in life quality (Lovegrove, Miles, & Fulbrook, 2018). Therefore, the management of pressure ulcers is a key measure of health service quality. Although largely preventable, pressure ulcers continue to occur in hospitalized patients. Prevention in the hospital environment starts from assessing the possible risks with a structured and validated tool that considers patients' risk factors for pressure ulcers and their risk level (Coleman et al., 2018).
Based on the description above, the research aims to analyze the effect of the Braden Scale on the incidence of pressure ulcers in elderly people who live at home, whether they are under the supervision of health workers or not. The research result is expected to give information about the factors that can significantly influence pressure ulcers in elderly people. Furthermore, the information can provide an insight into how to prevent pressure ulcers.
II. METHODS
The research is an analytic study with a cross-sectional approach. The researchers do not apply any intervention or special treatment to the research subjects and are limited to collecting the required subject data through medical records (Laksmi, Putra, & Wiranata, 2019). The research uses a purposive sampling technique, selecting respondents based on certain characteristics and considerations. Respondents are elderly residents who live at home and suffer from pressure ulcers, whether they are under the supervision of a health worker or not. The sample consists of 325 elderly people; however, only 35 of them meet the criteria of having pressure ulcers and are further analyzed.
Data collection is carried out in several areas in Bandung over one month (October−November 2017) by six nurses who have passed the national nurse competency test. The assessment of pressure ulcers is done by direct observation using the International Prevalence Measurement of Care Quality questionnaire, which has been validated in Indonesian (Amir, Kottner, Schols, Lohrmann, & Halfens, 2014). The categories of pressure ulcers are determined based on the referral standard of NPUAP-EPUAP-Pan Pacific Pressure Injury Alliance (NPUAP-EPUAP-PPPIA). Moreover, the risk factor assessment uses the Braden Scale. The researchers use regression analysis to analyze the effect of the Braden Scale on the incidence of pressure ulcers in elderly people. It is a data analysis technique in statistics to examine the relationship between several independent variables and a dependent variable (Kurtner, Nachtsheim, & Neter, 2004). For the analysis, the independent variables are denoted by X, while the dependent variable is denoted by Y. The multiple linear regression model is as follows:

Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + β5X5 + β6X6 + ε (1)

The method of regression parameter estimation is ordinary least squares. There are several assumptions imposed in the use of multiple linear regression, including normality of the errors, no multicollinearity, non-autocorrelation, and homoscedasticity. There are several descriptions of the assumption tests in regression analysis. First, the normality test in the regression model is used to test whether the residual values generated from the regression are normally distributed or not. A good regression model has normally distributed residuals (Schmidt & Finan, 2018). The assumption of normality is checked using the Kolmogorov-Smirnov test.
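A minimal sketch of the regression fit of equation (1) and the Kolmogorov-Smirnov normality check described above, using simulated stand-ins for the six Braden subscales and the outcome; it only illustrates the procedure, not the software or data used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Simulated stand-ins for the six Braden subscale scores (X) and an outcome (Y).
rng = np.random.default_rng(1)
n = 35
X = pd.DataFrame(rng.integers(1, 5, size=(n, 6)),
                 columns=["sensory", "moisture", "activity",
                          "mobility", "nutrition", "friction"])
y = 2 + 0.8 * X["sensory"] + 0.6 * X["activity"] + 0.7 * X["friction"] \
      + rng.normal(0, 1, n)

# OLS fit of equation (1): summary reports coefficients, t-tests, F-test and R^2.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())

# Normality of residuals: Kolmogorov-Smirnov test on standardized residuals.
resid = (model.resid - model.resid.mean()) / model.resid.std()
print(stats.kstest(resid, "norm"))
```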
Second, there should be no multicollinearity. Testing for the presence or absence of multicollinearity considers the correlation matrix produced during data processing and the Variance Inflation Factor (VIF), the inflation factor of the standard deviation. The classic multicollinearity assumption test measures the degree of association between the independent variables through the magnitude of the correlation coefficients (Yu, Jiang, & Land, 2015). VIF values greater than 10 indicate that multicollinearity occurs (Stevens, 2009). A brief sketch of the normality and multicollinearity checks is given below.
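The following is a minimal sketch of how the normality (Kolmogorov-Smirnov on the residuals) and multicollinearity (VIF) checks could be reproduced with pandas, statsmodels, and scipy. The DataFrame df and the column names X1-X6 and Y are illustrative assumptions, not names taken from the study's data files, and this is not the authors' STATCAL workflow.

import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor

def check_normality_and_vif(df: pd.DataFrame):
    # Fit the six-predictor OLS model of Equation (1).
    X = sm.add_constant(df[["X1", "X2", "X3", "X4", "X5", "X6"]])
    model = sm.OLS(df["Y"], X).fit()
    # Normality of residuals: Kolmogorov-Smirnov against a fitted normal.
    resid = model.resid
    ks_stat, ks_p = stats.kstest(resid, "norm", args=(resid.mean(), resid.std(ddof=1)))
    # Multicollinearity: VIF per predictor (VIF > 10 flags a problem).
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return model, ks_stat, ks_p, vifs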
Third, the Durbin-Watson statistic is used for the non-autocorrelation assumption. If autocorrelation occurs, the estimated equation is unreliable. Durbin-Watson statistics greater than 2 or smaller than -2 indicate symptoms of autocorrelation (Field, 2013).
Fourth, it is necessary to test whether the variance of the residuals is the same or different from one observation to another in the multiple regression equation. If the residuals have the same variance, this is called homoscedasticity; if not, it is called heteroscedasticity. A good regression equation is one in which heteroscedasticity does not occur (Gujarati, 2004). The heteroscedasticity assumption is tested using the Levene approach by regressing the independent variables on the absolute values of the residuals. If the significance values between the independent variables and the absolute residuals are greater than 0,05, there is no heteroscedasticity problem. A sketch of the autocorrelation and heteroscedasticity checks follows.
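A minimal sketch of the remaining assumption checks, continuing from the fitted model and design matrix X of the previous snippet (names are illustrative). The heteroscedasticity check follows the description in the text, regressing the absolute residuals on the predictors; note that this regression-based variant resembles a Glejser test rather than the classic grouped Levene test.

import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

def check_autocorrelation_and_heteroskedasticity(model, X):
    # Durbin-Watson statistic for the non-autocorrelation assumption.
    dw = durbin_watson(model.resid)
    # Regress |residuals| on the predictors; coefficient p-values > 0.05
    # suggest no heteroscedasticity problem.
    aux = sm.OLS(abs(model.resid), X).fit()
    return dw, aux.pvalues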
In this research, multiple linear regression analysis determines the effect of the Braden Scale on the incidence of pressure ulcers. It statistically analyzes the effect of the parameters measured by the Braden Scale on pressure ulcers in an elderly community, showing whether the independent variables influence the dependent variable simultaneously and partially. To test the fit of the regression model simultaneously, the researchers use the F-test at a significance level of α = 0,05. For the significance of the individual (partial) regression coefficients, the researchers use the t-test at a significance level of α = 0,05.
Moreover, the coefficient of determination (R²) is calculated to assess how closely changes in the dependent variable follow changes in the independent variables. This is done by examining the R-squared (R²) value, which lies between 0 and 1. The whole analysis is carried out with the help of STATCAL software (Gio & Caraka, 2018). A sketch of how these quantities can be read from a fitted model follows.
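For completeness, the simultaneous (F) test, the partial (t) tests, and R² can all be read from the same fitted statsmodels OLS model used in the sketches above. The function name is ours; the study itself used STATCAL.

def summarize_inference(model):
    return {
        "F_statistic": model.fvalue,   # simultaneous significance of all predictors
        "F_pvalue": model.f_pvalue,
        "coefficients": model.params,  # estimated regression coefficients
        "t_values": model.tvalues,     # partial significance per coefficient
        "t_pvalues": model.pvalues,
        "R_squared": model.rsquared,   # coefficient of determination
    }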
III. RESULTS AND DISCUSSIONS
The characteristics of respondents consist of age, gender, and status of residence. Those are categorized based on the Braden Scale assessment, as shown in Table 1. Meanwhile, the Braden Scale assessment based on the severity of the injuries is presented in Table 2.
Based on Table 1, pressure ulcers are generally found in elderly people aged above 70 years. Most cases occur in women (57,1%). The residential status of the community-dwelling older adults varies; most of them live with their extended families (around 63%), followed by those living with a spouse and/or child (17%).
Furthermore, from the characteristics of the collected data, the Braden Scale assessment is related to the severity of the injuries. A health professional examines the severity of the wound and categorizes it according to standardized guidelines. The Braden Scale assessment is presented in Table 2. Before drawing inferences from the regression model, the regression analysis assumptions are tested. First is the normality test. Testing for normality at the 5% significance level (α = 0,05) yields a p-value of 0,40335, which is greater than α = 0,05, so the normality assumption is met. Likewise, the Kolmogorov-Smirnov statistic (0,15084) is smaller than the Kolmogorov-Smirnov table value (0,23), leading to the same decision that the assumption of normality is satisfied.
Second, the test for the assumption that multicollinearity does not occur is presented in Table 3 through the VIF values. From Table 3, the VIF value for each variable is less than 10, so the variables are free from multicollinearity and the assumption of no multicollinearity is fulfilled; multicollinearity does not occur between the independent variables. Third, the researchers test the assumption of non-autocorrelation by calculating the Durbin-Watson statistic. The result is 0,24575, which lies in the interval -2 ≤ 0,24575 ≤ 2, so there is no autocorrelation.
Fourth is the assumption of homoscedasticity. From Table 4, the p-values are greater than 0,05. If the significance is greater than 0,05, there is no heteroscedasticity in the regression model; the residuals spread below and above the Y-axis without a regular pattern. It can be concluded that there is no heteroscedasticity among the independent variables, so the assumption of homoscedasticity is fulfilled and the variance of the error is constant. After testing the classical assumptions of multiple linear regression, the researchers conclude that the classical assumption tests are fulfilled: normality, no multicollinearity, non-autocorrelation, and homoscedasticity are all met. Thus, the obtained multiple linear regression model is adequate and feasible.
Next, the simultaneous regression model test determines whether sensory perception, humidity, activity, mobility, nutrition, and friction jointly have a significant influence on pressure ulcers. Based on the results of the F-test, the obtained F-statistic is 0,8122, which is greater than the F-table value. Hence, sensory perception, humidity, activity, mobility, nutrition, and friction simultaneously have a significant influence on the incidence of pressure ulcers. The significance of the individual (partial) regression coefficients is also tested using the t-test (see Table 5). The data yield the following regression model:

Y = 1,99 + 1,613X1 - 0,473X2 - 1,732X3 - 0,106X4 - 0,874X5 + 1,059X6 (2)

where Y is the pressure ulcer outcome, X1 is sensory perception, X2 is humidity, X3 is activity, X4 is mobility, X5 is nutrition, and X6 is friction. Based on the data presented in Table 5, the results of the regression analysis can be explained as follows. The regression coefficient of the sensory perception variable is positive (1,613), meaning that the sensory perception component of the Braden Scale positively influences pressure ulcers. In addition, its p-value is smaller than 0,05, so pressure ulcers are significantly influenced by sensory perception.
Moreover, the regression coefficient of activity is -1,732 with a p-value smaller than 0,05, so activity affects pressure ulcers. The regression coefficient of friction is also positive (1,509) with a p-value smaller than 0,05; hence, friction affects pressure ulcers. On the other hand, the other variables, humidity, mobility, and nutrition, have p-values greater than 0,05, meaning they do not significantly affect pressure ulcers. Based on Table 5, besides the p-value, the t-value can also determine which variables have a significant effect on pressure ulcers. Comparing the t-values with the t-table value at degrees of freedom n - k - 1 = 35 - 6 - 1 = 28 and α = 0,05 gives a t-table value of 2,0481. In Table 5, sensory perception, activity, and friction have t-values greater than the t-table value, so they have a significant effect on pressure ulcers. On the other hand, humidity, mobility, and nutrition have t-values smaller than the t-table value, so they have no significant effect.
Moreover, the standard error of a variable indicates the deviation of its regression coefficient. A smaller deviation in the regression coefficient means a larger contribution from that variable. Based on Table 5, the smallest standard error belongs to sensory perception, which therefore has the smallest deviation in its regression coefficient. This implies that the contribution of sensory perception to pressure ulcers is highly significant.
Next, the coefficient of determination shows to what extent changes in the independent variables are followed by the dependent variable in the same proportion. The obtained coefficient of determination is 48,22%, meaning that sensory perception, humidity, activity, mobility, nutrition, and friction jointly explain 48,22% of the variation in pressure ulcers. The remaining 51,78% is due to other factors not examined in this research.
The data processing and analysis results using regression analysis show that sensory perception, activity, and friction influence pressure ulcers in community-dwelling older adults. This result is similar to that of Mukhtar et al. (2019), who used Pearson correlation and reported that activity, mobility, and friction affected the incidence and severity of pressure ulcers in elderly people. Thus, both analytical methods conclude that activity and friction influence the occurrence of pressure ulcers. Therefore, elderly people who live at home need to consider activities that create a risk of friction on their skin, such as praying, sitting cross-legged on a floor that is not covered with comfortable material, and lying in the same position for long periods.
The analysis results are also in line with previous research reporting that the estimated number of category 1 pressure ulcers in elderly people in the community is associated with praying habits such as prostration and sitting cross-legged. The research indicates that the sensory perception felt by elderly people is closely related to the friction on the surface of their skin. Therefore, elderly people living in the community must pay attention to their skin, reducing friction with appropriate aids and caring for skin that already has pressure ulcers caused by repeated friction.
On the other hand, the research shows a weak relationship between mobility, humidity, and nutrition and the severity of ulcers in elderly people in the community. However, these results do not necessarily prove that these factors have no effect on pressure ulcers, and they are in line with previous studies. Mukhtar et al. (2019) agreed that the dominant factors influencing the severity of pressure ulcers in elderly people in the community were activity, mobility, and friction. Conversely, friction and nutrition had the weakest relationship with the severity of pressure ulcers. The interaction of friction with activity and mobility greatly influenced the emergence of pressure ulcers.
IV. CONCLUSIONS
The researchers collected data from several regions in Bandung on 325 elderly people living in communities, 35 of whom had pressure ulcers. The results based on the Braden Scale show that sensory perception, activity, and friction significantly affect pressure ulcers, whereas humidity, mobility, and nutrition have no significant effect. These results are based on the applied multiple regression analysis. Furthermore, the obtained coefficient of determination is 48,22%; hence, sensory perception, humidity, activity, mobility, nutrition, and friction jointly explain 48,22% of the variation in pressure ulcers, while the remaining 51,78% is due to factors not examined in this research. Future research could predict the severity of pressure ulcers based on the Braden Scale using prediction methods from data mining, such as the Support Vector Machine.
"year": 2020,
"sha1": "a84d9cbc2b1319c2d8278b815d55f2cd2f89a1b0",
"oa_license": "CCBYSA",
"oa_url": "https://journal.binus.ac.id/index.php/comtech/article/download/6237/3965",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "882d879ff565de4454ecb3e4aef5a70bb9867d4f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Custodes: Auditable Hypothesis Testing
We present Custodes: a new approach to the complex issue of preventing "p-hacking" in scientific studies. The protocol provides a concrete and publicly auditable method for controlling false discoveries and eliminates any potential for data dredging on the part of researchers during the data-analysis phase. Custodes provides provable guarantees on the validity of each hypothesis test performed on a dataset by using cryptographic techniques to certify the outcomes of statistical tests. Custodes achieves this using a decentralized authority and a tamper-proof ledger, which enable auditing of the hypothesis testing process. We present a construction of Custodes which we implement and evaluate using both real and synthetic datasets on common statistical tests, demonstrating the effectiveness and practicality of Custodes in the real world.
INTRODUCTION
"Data is the new oil" and as such, it is mined (i.e., gathered), shared, analyzed, re-analyzed, and re-re-analyzed until it yields to more and more interesting insights. With every exploration to find yet another insight, the chance of encountering a random correlation increases. This phenomenon is formally known as the multiple comparisons problem (MCP) and, if done in a systematic fashion, is often referred to as "HARKing" [32], "p-hacking" [23] or "data dredging".
While a variety of statistical techniques exist to control the false discovery rate (FDR) [3,15], there is surprisingly almost no support to ensure that analysts actually use them. Rather, individual research groups rely on often varying data analysis guidelines and trust their group members to follow them. Things get even worse when the same data is analyzed by several institutions or teams. It is currently close to impossible to reliably employ statistical procedures, such as the Bonferroni method [15], that guard against p-hacking across collaborators. It only requires one member to "misuse" the data (whether intentionally or not), and detecting, let alone recovering from, such incidents is next to impossible. This problem is perhaps amplified by three factors: 1) the pressure on PhD students and PIs to publish [37], 2) "publication bias" [13], as papers with significant results are more likely to be published, and 3) the increasing trend to share and make datasets publicly available for any researchers to use. It is therefore unsurprising that the MCP is among the leading reasons why the scientific community is plagued by false discoveries [2, 25-27].
To illustrate this problem further, consider a publicly available dataset such as MIMIC III [29]. This dataset contains de-identified health data associated with ≈ 40,000 critical care patients. MIMIC III has already been used in various studies [20,24,35] and is probably one of the most (over)analyzed clinical datasets. As such, any discovery made on MIMIC runs the risk of being a false discovery. Even if a particular group of researchers follows a proper FDR protocol, there is no control over what happens across different groups, and tracking hypotheses at a global scale poses many challenges. It is therefore hard to judge the validity of any insight derived from such a dataset.
A solution commonly used in clinical trials to guarantee the validity of insights, preregistration of hypotheses [10], falls short in these scenarios: the data is collected upfront without knowing what kind of analysis will be done later on. Perhaps more promising is the use of a hold-out dataset. The MIMIC authors could release only 30K patient records as an exploration dataset (EDS) and hold back 10K as a validation dataset (VDS). The EDS can then be used in arbitrary ways to find interesting hypotheses. However, before any publication is made by a research group using the dataset, all hypotheses must be tested for statistical significance over the VDS. Unfortunately, in order to use the VDS more than once, the same requirement as before holds true: every hypothesis over the VDS has to be tracked and controlled for. Furthermore, the data owner, the MIMIC authors in this case, would need to provide this hypothesis validation service. This is both a burden for the data owners and a potential risk: researchers need to trust the data owners to apply FDR control procedures correctly and to evaluate their hypotheses objectively.
The above example illustrates the motivation behind Custodes. Our goal is to create a system that guarantees the validity of statistical test outcomes and allows readers (and/or reviewers) of publications to audit them for correctness. Using proven cryptographic techniques to certify the outcomes of statistical tests via a decentralized authority, we eliminate the risk of data dredging (intentional and otherwise) on the part of researchers or data owners. Custodes can be used in various settings, including cases where the data is public and only the hold-out data is fed into Custodes (as in the example above), smaller settings where a few research groups collaborate on combined data, or even single teams where lab managers can opt to encrypt all of their data and use Custodes as a way to prevent unintentional false discoveries, assign accountability, and foster reproducibility.
CONTRIBUTIONS
The primary contribution of this work is the Custodes framework. Figure 1 shows a high-level overview of the proposed system. (1) A data owner encrypts their dataset (either the full dataset or just a hold-out) and submits it to Custodes: a decentralized platform consisting of multiple nodes that are run by different entities (e.g., universities or research groups). (2a and 3a) Researchers can post requests for statistical tests to Custodes. (2b and 3b) Custodes securely computes these tests using several cryptographic techniques and stores the results (and a transcript of the computation) sequentially on a tamper-proof ledger (e.g., a Blockchain [36]). (4) One of the researchers decides to publish their finding and includes a "certified p-value" in their paper. (5) A reader or auditor of this publication can query the ledger, retrieve all p-values of all the tests that have been run on this particular dataset, and apply an incremental FDR control procedure a posteriori [18,45] to validate the publication's finding. Custodes prevents p-hacking by using encrypted data and by guaranteeing that every statistical test is accounted for. It is, to the best of our knowledge, the first cryptographically based solution to the problem of p-hacking with provable security guarantees. Custodes is a framework which we instantiate based on additively homomorphic encryption and multi-party computation protocols. Using those building blocks, we implement three widely used hypothesis tests (Student's T-test, Pearson Correlation, and Chi-Squared) in Custodes and evaluate its performance with various configurations and dataset sizes.
SYSTEM OVERVIEW
We model the scenario considered in the introduction as follows: a data owner wants to release a dataset D, to a set of researchers, denoted {P 1 , . . . , P n }, for the purpose of executing statistical tests. Note: D can be either a full dataset or a hold-out. We aim to build a system that guarantees that all statistical tests over D are accounted for and are executed and reported in a truthful manner. In other words, an MCP control procedure is correctly applied for all statistical tests computed on D. If all p-values resulting from tests computed on D are accounted for, then it is possible to apply an MCP control procedure to determine which null-hypothesis should be accepted (resp. rejected) to ensure the probability of a false-rejection remains below a threshold [3,18,45] (detailed in Section 3). Moreover, the entire process must be auditable: a publication can be vetted for having followed the correct MCP-control procedure. At a high level, Custodes achieves these requirements by restricting the exposure of the dataset D such that researchers are only able to test hypotheses on D via the Custodes interface which stores all hypotheses and results in a tamper-proof, auditable trace.
Custodes Design
Our design of Custodes is motivated by the following observations. The data owner cannot release a dataset D to the researchers directly since doing so creates a possibility for p-hacking and other forms of bias (e.g., parties can run tests privately and report only favorable results without controlling for the MCP). Hence, D has to be released such that only the output of the statistical tests performed on D is made available to researchers. A naïve solution is to provide access to D through a stringent interface which only returns the results of statistical tests. However, even such a solution is not sufficient for enforcing correctness, as it can be abused by parties querying the interface until a favorable (i.e., significant) result is obtained while avoiding reporting the intermediate queries or adequately controlling for the MCP. Moreover, such a solution has no way to audit the test computation, making it impossible to "certify" the discoveries.
Custodes addresses this problem by using several cryptographic methods, specifically secure computation and a tamper-proof ledger, both of which we formally introduce in the following section. At a high level, Custodes requires that a dataset D be encrypted by the data owner prior to being made available to researchers (or publicly released). Researchers use the encrypted dataset, denoted [D], to compute statistical tests using secure computations over the encrypted data, with a mechanism for revealing (decrypting) only the test statistic while simultaneously ensuring the MCP is controlled for. Custodes ensures that each revealed statistic is recorded on a publicly accessible and secure ledger, making each result of a statistical test public and verifiable. This latter requirement ensures that 1) all statistical tests executed on [D] are recorded in sequence and 2) it is possible to certify that MCP control procedures are applied correctly. We elaborate on these two important requirements in subsequent sections.
Security Goals
At a high level, Custodes aims to achieve the following security goals, which we formalize in Section 6: correctness, confidentiality, access control, verifiability, and auditability.
• Verifiability. Every hypothesis test executed through Custodes has a corresponding certificate ψ that is publicly accessible and verifiable, even by a third party.
• Auditability. Given a certificate ψ, anyone can verify the correctness of the hypothesis test computation associated with the certificate (i.e., determine whether the null hypothesis is true or false).
Threat Model
Custodes provides provable guarantees for the five properties outlined in Section 3.2 under the following threat model. Our security assumptions are with respect to the data owner, the participants engaged in the protocol, and the network over which the system is instantiated.
Data Owner. We assume that the data owner does not collude with the researchers who run the statistical tests. This assumption is necessary given that a malicious owner has unfettered access to the (unencrypted) dataset D and can thereby trivially bypass any MCP control procedure.
Participants. We assume that all parties are honest-but-curious; that is, they individually adhere to the correct protocol but are interested in learning more about the underlying dataset so that they may circumvent MCP control procedures, and they may collude among themselves in an attempt to achieve this (i.e., to engage in "p-hacking"). To this end, we assume that at most t - 1 out of n parties may collude with each other in order to obtain more information about the dataset or manipulate a test certificate. In addition, we assume that the set of colluding parties is static (i.e., does not change during protocol execution). Finally, we require a secure public key infrastructure which allows individual parties (researchers) to be identified by their public keys, and we assume that all messages exchanged during protocol execution are digitally signed against a party's identity.
Network. Custodes does not rely on a secure communication channel between participants in the network and therefore tolerates the presence of a passive adversary controlling the network. However, we do not assume the presence of an active network adversary (one that can actively block or otherwise alter communication), as that would disrupt the availability of participants and the tamper-proof ledger. We note that this requirement allows messages exchanged between parties to be made publicly available (e.g., by recording them on a public ledger) without compromising the security of the overall system.
Controlling False Discoveries
An important step in realizing the construction of Custodes is understanding how MCP control procedures are used in the data analysis phase. In order to control for false discoveries it is necessary to know all p-values computed over the dataset up to the current test. While standard procedures such as Bonferroni [15] must be applied a posteriori (once all hypotheses have been tested), in Custodes we desire a method for controlling the FDR in a streaming fashion, ideally without knowledge of or restrictions on future tests. We use a common control procedure known as α-investing [18]. The α-investing procedure is a standard choice for controlling false discoveries in practice when the total number of hypotheses is unknown beforehand [45,46]. We briefly describe the procedure below and refer the reader to [18,45] for additional details beyond those we provide, as they are outside the scope of this work.
Formally, α-investing controls the marginal False Discovery Rate (mFDR), which is defined as

mFDR_η(i) = E[V(i)] / (E[R(i)] + η),

where i denotes the total number of tests that have been executed, while V(i) (resp., R(i)) denotes the number of false (resp., total) discoveries after i hypotheses have been tested using the α-investing procedure. A testing procedure controls mFDR_η at level α if mFDR_η(i) ≤ α, where the parameter η (e.g., η = 1 - α [45]) weighs the impact in case of few discoveries. Under the complete null hypothesis V(i) = R(i), and hence mFDR_η(i) ≤ α implies that E[V(i)] ≤ αη/(1 - α). If we choose η = 1 - α then E[V(i)] ≤ α, and it is possible to conclude that the false discovery rate is controlled at level α [18]. Intuitively, the α-investing procedure works by assigning to each hypothesis test a budget α′ from an initial "α-wealth." If the p-value of the null hypothesis being considered is above α′, the null hypothesis is accepted and some budget is lost; otherwise it is rejected and some testing budget is gained. In Custodes, the α-wealth, α, and η can be determined by the data owner or set as static constants. The importance of this procedure to this work is that, given all p-values computed on D up to the ith test, it is easy to determine whether the hypothesis of the (i + 1)st test should be accepted (resp. rejected) based on the resulting p-value.
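The streaming accept/reject rule can be sketched as follows. This is only an illustration of the budget mechanics summarized above, not the Custodes implementation: the spending rule alpha_i = W/(2i), the payout omega = alpha, and the initial wealth alpha(1 - alpha) are assumed example choices, since these parameters are left to the data owner.

def alpha_investing(p_values, alpha=0.05, initial_wealth=None):
    wealth = initial_wealth if initial_wealth is not None else alpha * (1 - alpha)
    omega = alpha                     # payout credited on each discovery (assumption)
    decisions = []
    for i, p in enumerate(p_values, start=1):
        alpha_i = wealth / (2 * i)    # simple spending rule (assumption)
        reject = p <= alpha_i
        if reject:
            wealth += omega                      # discovery: budget gained
        else:
            wealth -= alpha_i / (1 - alpha_i)    # no discovery: budget lost
        decisions.append(reject)
    return decisions

# Example: decisions for a stream of certified p-values retrieved from the ledger.
print(alpha_investing([0.001, 0.2, 0.03, 0.8, 0.0004]))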
Certification of Hypotheses
The overarching goal of Custodes is to certify insights gained from data as valid. Therefore, Custodes must provide a mechanism for certifying the outcomes of hypothesis testing procedures. Taking a bird's-eye view, for every test that Custodes executes, it produces a publicly available certificate ψ which is verifiable and unforgeable. Formally, we define a certificate of a statistical test as a tuple (τ , P pk , T , (t, p)) where τ is the test index (i.e., its order among all the tests that have been executed so far on D), P pk is the identifier of the researcher that requested the test (e.g., a public key), T is the code of the statistical test function, and (t, p) is the result of executing T (D), where t is the test statistic and p is the p-value corresponding to t. Given that each certificate contains both the test index τ and the resulting p-value, it is trivial to verify whether an MCP control procedure was applied when accepting (resp. rejecting) a null hypothesis using the procedure in Section 3.4. Note that the certificate by itself does not ensure that (t, p) = T (D); however, Custodes records sufficient information on a tamper-proof ledger that anyone can verify whether this is indeed the case. We formalize and elaborate on these claims in Section 5.
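For concreteness, the certificate tuple defined above could be represented as a simple record; the class and field names here are illustrative and not part of the Custodes codebase.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Certificate:
    tau: int                      # test index among all tests run on D
    researcher_pk: str            # identifier (public key) of the requesting researcher
    test_code: str                # serialized code/circuit of the statistical test T
    result: Tuple[float, float]   # (t, p): test statistic and corresponding p-value

cert = Certificate(tau=7, researcher_pk="pk:ab12...",
                   test_code="students_t(pi1, pi2)", result=(2.31, 0.021))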
CUSTODES PROTOCOL
Now that the design and security goals of Custodes have been made explicit, we are ready to describe the ideas of the framework at a high level (Section 4.1); afterwards, we introduce the cryptographic building blocks (Section 4.2) and then describe the concrete instantiation of Custodes in Section 4.3. We note that, though the cryptographic techniques used as building blocks are well known, they have to be combined in a new and careful manner to achieve an auditable MCP-control protocol. (See Table 1 for a summary of the properties these building blocks help us to achieve.)
Framework
The functionality of the Custodes framework can be split into three phases: Setup, Compute, and Audit.
Setup. A data owner releases an (encrypted) dataset [D] and shares the secret key used to encrypt D with parties {P 1 , . . . , P n } such that any threshold number of them can collectively decrypt [D]. The data owner then initializes a Blockchain B and posts to B a signed message (0, pk, ⊥, ⊥) that corresponds to the counter of the tests executed on D, set to zero. In addition to releasing the encrypted dataset, the data owner releases metadata pertaining to D deemed sufficient for researchers to form hypotheses on the dataset. We formalize the metadata requirements in Section 6.
Compute. A researcher, say P, specifies the test T corresponding to a statistical test that she wants to execute on D. We note that P is one of the parties in {P 1 , . . . , P n } approved by the data owner during the setup of Custodes. P posts a message (τ , P pk , T , ⊥) to B and obtains the test index τ . For simplicity, we assume that only one test is executed at a time, though we note that this is not a necessary requirement since the dataset is static and B ensures that every test is assigned a sequential test index τ . Once the test is computed, the result of the test, (t, p), is made available only in an encrypted form as a result of secure computation. Recall that the test result cannot be recovered by any of the parties individually and requires a threshold number of the parties to reach consensus in order to reveal it. To this end, P requests help from {P 1 , . . . , P n } by posting a signed message to B along with the (encrypted) result. Each party P i ∈ {P 1 , . . . , P n } verifies that the message indeed came from one of the researchers allowed to run the tests and posts a message to B with a partially decrypted share of (t, p). When combined, the shares reveal the decrypted result. This specification ensures that each test result is made publicly available (to all parties). The final entry recorded on B is the test certificate consisting of the tuple (τ , P pk , T , (t, p)). This final tuple is signed by all parties engaged in the reveal computation (i.e., the parties revealing the result) since, after the shares are revealed, each party may individually reconstruct the result and ensure that the shares indeed correspond to the correct decryption.

Table 1: Cryptographic building blocks and the properties they provide in Custodes.
• Access Control (digital signatures & threshold encryption): only approved researchers can request test computation, and a threshold majority of approved parties is required to decrypt the data.
• Verifiability (threshold encryption & tamper-proof ledger): a computation result can only be revealed through a consensus of parties in the system, which record every tested hypothesis on the ledger at decryption time.
• Auditability (tamper-proof ledger): a transcript of every statistical test computation is recorded on the ledger, making it verifiable and auditable by researchers and third parties.
• Certification (tamper-proof ledger): the sequence of statistical tests recorded on the ledger makes it possible to apply the α-investing procedure a posteriori, which serves as a certificate.
Audit. A test with certificate (τ , P pk , T , (t, p)) can be audited for correctness by any entity with access to B, [D], and the public key pk used to encrypt D. An auditor V retrieves all messages related to τ from B and verifies the signature of each message, including the certificate. V then uses this information to ensure the test code was computed correctly and indeed produces the result (t, p). We require that B contains sufficient information to verify whether (t, p) = T (D), even if all parties are offline, while preventing any information about D from being revealed (except for the result of the statistical test). In particular, any entity with access to B (e.g., a researcher, an auditor, a data owner, or a third party) can verify the statistical tests that have been run through Custodes: for every tuple (τ , P pk , T , (t, p)), anyone can verify that (t, p) is the correct output of T (D). Hence, if T (D) ≠ (t, p), P's malicious behaviour is exposed and the result (t, p) is deemed invalid.
Cryptographic Building-Blocks
Our Custodes instantiation uses three common buildingblocks from the cryptographic literature: additively-homomorphic encryption, a tamper-proof ledger, and secure multiparty computation. (We note that Custodes is a framework and could be instantiated using, for example, fully homomorphic encryption to perform secure computation instead. However, for the statistical tests considered in this paper, additively homomorphic encryption was sufficient and remains more efficient in practice.) Additively-Homomorphic Encryption. To encrypt a dataset in a way which enables performing computations over the data we use additively-homomorphic encryption. This is a special form of encryption which enables the evaluation of a subset of arithmetic operations over encrypted values without knowledge of the secret key used to encrypt. Such schemes make it possible to evaluate a subset of functions without revealing information on the inputs nor output. While there are several different schemes to choose from, we use the additively homomorphic scheme known as Paillier [38] as it easily supports threshold decryption and its functionality can be augmented using multi-party computation [11].
Augmenting Paillier functionality. Paillier enables the evaluation of linear arithmetic gates (i.e., addition, subtraction) out of the box but does not support non-linear arithmetic evaluations such as multiplication and division. However, it is possible to augment the functionality of Paillier to support non-linear arithmetic using secure multi-party computation, which we describe next. We use the methods described in [6,7,12] to compute non-linear arithmetic gates (e.g., multiplication and division gates) using interactive secure multi-party computation (MPC) protocols. We emphasize that while the protocols used in this work were originally described for secret-sharing schemes (i.e., [39]), all protocols apply to a threshold-Paillier setting, even when active security is required [11]. We elaborate on the details of these protocols in Section 5.
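To make this division of labour concrete, the following sketch uses the python-paillier (phe) package as a stand-in for the Paillier operations a researcher can perform locally. This is only an illustration of the homomorphic operations involved: Custodes relies on a threshold variant of Paillier, which phe does not provide.

from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

a, b = 12.5, 7.25
enc_a, enc_b = public_key.encrypt(a), public_key.encrypt(b)

enc_sum = enc_a + enc_b      # addition of two ciphertexts (local, no interaction)
enc_diff = enc_a - enc_b     # subtraction of two ciphertexts
enc_scaled = enc_a * 3       # multiplication by a public scalar

assert abs(private_key.decrypt(enc_sum) - (a + b)) < 1e-9
assert abs(private_key.decrypt(enc_scaled) - 3 * a) < 1e-9
# Ciphertext-by-ciphertext multiplication (enc_a * enc_b) is NOT supported by the
# scheme itself; that is precisely what the interactive Mult protocol provides.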
Tamper-proof Ledger. We rely on a tamper-proof ledger abstraction which we refer to as a Blockchain, B.

Figure 2: Overview of the threshold Paillier based Custodes instantiation (Section 4.3). The researcher receives an encrypted dataset from the data owner and proceeds to compute statistical tests using both local computations and interactive protocols, using parties in the Custodes network for computing non-linear functions and to certify the obtained p-value(s) on the blockchain.

The ledger B can be maintained by a trusted party (e.g., the data owner), by the parties {P 1 , . . . , P n } themselves using a consensus protocol (e.g., Paxos [33]), or by independent parties (e.g., using a public Blockchain based on a decentralized consensus protocol such as Nakamoto consensus [36]). Similar to the ledger abstraction used in [8], our main requirement is that the above abstraction is correct and always available (in general, this is a standard requirement for tamper-proof ledgers).
Protocol Implementation
We are now ready to describe the detailed construction of Custodes. By using a threshold somewhat-homomorphic encryption scheme for encrypting the dataset, and distributing the key among a set of (possibly untrusted) parties, we can construct Custodes with minimal overhead on the researchers engaged in the system. Indeed, we achieve a construction which only requires the active participation of parties in the network in two cases: 1) when computing a non-linear arithmetic gate in the statistical test circuit evaluated over the encrypted dataset [D] (as explained above) and 2) for decryption of the result of the computation (i.e., to reveal the test statistic).
Recall that we divide the overall protocol into three distinct phases (Setup, Compute and Audit).
Setup. During the setup phase, the data owner executes the algorithm Paillier.KeyGen(1 k ), which outputs a public key pk and secret key shares sk 1 , . . . , sk n . The shares sk 1 , . . . , sk n are distributed to parties P 1 , . . . , P n , respectively. The data owner then runs Encrypt(pk, D) on dataset D, which returns the encoded dataset [D], where every value of D is individually encrypted. The data owner sends the tuple (pk, sk i ) to P i ∈ {P 1 , . . . , P n }, ∀i = 1 . . . n, and makes [D] publicly available to all parties. The data owner then initializes a Blockchain B and records on B the digital identity of every party P i ∈ {P 1 , . . . , P n } that can run the protocol (e.g., the verification key of a digital signature that will be used to verify P i 's signatures). The owner then posts the signed message (0, pk, ⊥, ⊥) corresponding to the counter of the tests executed on D, set to zero. In addition to releasing the dataset, the data owner makes available metadata pertaining to the dataset (e.g., size of the dataset, attributes, etc.). With the exception of the dataset size, the choice of metadata to be released is left up to the data owner. Compute. Let P be the researcher that wishes to run a statistical test T represented as an arithmetic circuit with input [D]. P posts (τ , P pk , T , ⊥) to B and obtains counter τ . P then proceeds to deterministically evaluate the arithmetic test circuit T ([D]) locally by using the homomorphic properties of the Paillier scheme. Assuming the arithmetic circuit contains a set of non-linear gates, P is only able to evaluate the first w − 1 ≥ 0 gates locally before reaching some gate w which requires an interactive protocol. Let op w (C w ) denote this computation, where op w is the arithmetic gate and C w is the tuple of ciphertexts that P obtained from local computations up to level w − 1. Suppose op w is a multiplication gate and C w contains ciphertexts [a] and [b]. P posts to B the message tuple (τ , P pk , op w , C w ). All parties receive this message and proceed to engage in the interactive protocol Mult, whose transcript and (encrypted) output are posted to B so that P can continue the local evaluation of the circuit; the same pattern is followed for every non-linear gate, and the final encrypted result is revealed via the threshold-decryption step described in Section 4.1. Once the result is obtained, the certificate ψ = (τ , P pk , T , (t, p)) can be generated from the information recorded on B which, in conjunction with the information posted to B, certifies the result of the hypothesis test (t, p). Figure 3 illustrates the use of the tamper-proof ledger in the Compute process.
Audit. Suppose an auditor V wishes to verify that the certificate ψ = (P pk , τ , T , (t, p)) indeed certifies the result (t, p). V retrieves [D] and proceeds to evaluate the test circuit T ([D]) until the first non-linear operation (which required an interactive protocol during the Compute phase). Let op ′ w denote the first such operation and C ′ w denote its ciphertext list. V retrieves (τ , P pk , op w , C w ) from B and verifies that op ′ w = op w and C ′ w = C w . V then gathers the transcript resulting from the computation of op w from B and reconstructs [c]. Note that V does not engage in an interactive protocol to obtain [c]. V then proceeds with the evaluation of T using [c] as input and continues these verification steps for every non-linear gate; if any of these checks fails, V rejects the certificate. We omit the verification of ciphertexts and other checks since we assume all parties adhere to the protocol. However, we note that there exist numerous ways to augment the audit procedure by requiring various cryptographic proofs from the computing parties, as described in [11].
Finally, once the certificates have been verified, V can verify whether the MCP control procedure described in Section 3.4 were applied correctly when accepting (resp. rejecting) the τ th hypothesis by ensuring mF DR η (τ ) ≤ α for the specified η and α parameters.
STATISTICAL TESTS
The previous section showed how a dataset can be encrypted to ensure that every tested hypothesis is locked (i.e., by being encrypted) such that the MCP control procedure can be enforced in an auditable way. However, one important piece remains: how do researchers execute statistical tests over the encrypted dataset in an efficient way? In this section, we describe techniques from several lines of work surrounding secure computation which we combine in a secure way to realize protocols for computing statistical tests. We present pseudo-code for computing three common statistical tests: Student's T-test, Pearson Correlation, and Chi-Squared. While not exhaustive, this trio of statistical tests forms a basis for quantitative analysis and covers many use cases, from reasoning about population means to analyzing differences between sets of categorical data. We stress, however, that Custodes can easily be extended to other statistical tests using similar techniques.
The pseudo-code in this section computes the test statistics of Student's T-test, Pearson Correlation, and Chi-Squared, namely t, r, and χ 2 , respectively. We note that, given these statistics, computing p-values is trivial using the degrees of freedom [14,31] associated with the data; we therefore omit the full details of this process. Interestingly, computing p-values does not require knowing the sign of the test statistic [14]. We exploit this fact to avoid computationally expensive secure comparisons, but note that it is possible to extract the sign of each statistic should it be necessary for other purposes [6,47].
Dataset Characteristics and Notation
Before diving into the implementation details of individual tests, we first describe how the data is structured in Custodes and introduce some notation.
Let D k×w (R) be the set of all real-valued k-by-w matrices. Formally, a dataset D ∈ D k×w (R) is a matrix with k rows and w columns (attributes). Recall that the encrypted form of D is denoted by [D], where each entry is encrypted; thus [D] ∈ D k×w (Z N 2 ). Therefore, D can be seen as a k×w matrix containing real values and [D] as a k×w matrix containing encrypted fixed-point approximations of the real values of each entry. In describing the tests, we assume, wlog, that the tests are computed over the first w (where w ≥ 2) attributes π 1 , . . . , π w of D. As such, we denote the value of the ith row of attribute j as D i, π j , equivalently denoted as [D i, π j ] in the encrypted dataset.
To allow researchers to form hypotheses on D, we require the data owner to release attribute metadata (e.g., number of attributes, their type and domain size, independence from other attributes, etc.) and assume that the size of the dataset is known. We define L s : D k×w → {0, 1} * to be a function from datasets to binary strings encoding attribute metadata. Given L s (D), it should be possible to 1) define a hypothesis on D and 2) perform a statistical test in Custodes. At a minimum, we require that L s (D) provides k, w, a set of independent attributes, and a set of bounds on attribute domains.
Interactive Protocols
After forming a hypothesis, researchers must be able to evaluate the statistical test over the encrypted dataset. Recall that the dataset is encrypted using the Paillier encryption scheme, which allows them to evaluate a portion of the statistical test locally. However, to compute the full test circuit, it is necessary to engage in interactive protocols (see Section 4.2). Concretely, there are four interactive protocols needed to evaluate arbitrary statistical tests using the Paillier scheme. We list and describe each protocol below but omit the full details of their implementation, as each one is extensively described in [6,7,12].
A Note on Notation. The plaintext space in Paillier is the ring of integers Z N where N is a composite determined based on the security requirements of the system [38]. For visual simplicity in the presentation, additions, subtractions, and scalar multiplications (i.e., local computations) are left implicit unless otherwise stated, e.g., [a] + [b], denotes the addition of two Paillier encryptions and [a]c denotes the multiplication of an encrypted value by a public scalar.
Finally, several MPC protocols used require the bit-length of the encrypted value domain which we denote by ℓ. For example, ℓ = 64 for representing 64-bit integers but can be higher for fixed-point or floating point representations.
Overview of Protocols:
Reveal([a]) → a. Given an encryption [a], obtain a in 1 round. This is realized by simply invoking the threshold-decryption protocol described in [11] and summarized in Appendix C of this work.
Mult([a], [b]) → [ab]. Given encryptions [a] and [b], obtain the encryption [ab] in 1 interactive round and 1 sub-protocol invocation. Details for instantiating this protocol are found in [12].
FPDiv([a], [b]) → [a/b]. Given encryptions [a] and [b], obtain the encryption [a/b] with f bits of fixed-point precision. This protocol is based on the Goldschmidt method for achieving integer division presented in [21]. The idea behind Goldschmidt's method is to obtain initial approximations to both the divisor and dividend and iteratively compute quadratically converging approximations up to a desired precision. The MPC version of the protocol is described in [7], is statistically secure, and has a total complexity of 3 log 2 (ℓ) + 2 interactive rounds and 1.5ℓ log 2 (ℓ) + 4ℓ sub-protocol invocations. The protocols below also use TruncPR, a truncation protocol for adjusting the precision of fixed-point values, described in [7].
Student's T-test
Student's T-test is used to compare means of two independent samples where the null hypothesis stipulates that there is no statistically significant difference between the two distributions [40].
Let π 1 and π 2 be two independent attributes (columns) in D. Let x and y represent the column values of π 1 and π 2 in D, respectively. That is, x is (D 1, π 1 , . . . , D k, π 1 ) and y is (D 1, π 2 , . . . , D k, π 2 ). Denote the means of x and y as x̄ and ȳ, respectively. Let the standard deviations of x and y be denoted as s x and s y .
The t statistic is computed according to:

t = \frac{\bar{x} - \bar{y}}{s_p \sqrt{2/k}} \quad (1)

where s_p = \sqrt{(s_x^2 + s_y^2)/2} is an estimator of the pooled standard deviation of the two samples.
Computing Student's T-test in Custodes. Pseudo-code for the test is presented in Protocol 1. The protocol closely follows Equation 1 while adjusting the precision of fixed-point computations using TruncPR, as explained in [7]. It avoids computing the square root of secret values and instead leaves it as a final operation to be computed on plaintext. We now briefly describe the steps of the protocol: lines 1 to 3 compute the empirical mean of the two samples; lines 4 to 11 compute the variance of the two samples; lines 13 to 15 compute the square of the numerator in Equation 1; line 16 computes the square of the denominator in Equation 1. Finally, line 19 computes the p-value (result) in the clear by using the square root of the revealed t 2 value (test statistic) and the degrees of freedom, which, in the case of the Student's T-test, is one less than the number of rows. Requirements. Let η ∈ N be an upper bound on the largest absolute value in D (i.e., given as part of the attributes' domain size). To evaluate a Student's T-test over any two attributes of D, the following constraints must hold to ensure no "overflow" occurs during computation: ℓ > log 2 (k) + 2 log 2 (η) + f and N > 2^{ℓ+κ+n+1} . The calculation is trivial to verify by examining the arithmetic circuit.
Complexity. All sub-protocols invoked are constant-rounds with the exception of FPDiv which requires a number of rounds proportional to log 2 (ℓ). Therefore, the total round complexity hinges on the FPDiv invocation making the final complexity O(log 2 (ℓ)) rounds and O(2k) invocations of constant-round sub-protocols.
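As a plaintext reference (useful, e.g., for checking a Custodes run on synthetic data), Equation 1 can be evaluated directly and compared against scipy. The data below is synthetic and the helper name is ours; note that scipy's pooled-variance t-test uses 2k - 2 degrees of freedom for its p-value.

import numpy as np
from scipy import stats

def students_t(x: np.ndarray, y: np.ndarray) -> float:
    k = len(x)
    sp = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)   # pooled std, Equation 1
    return (x.mean() - y.mean()) / (sp * np.sqrt(2 / k))

rng = np.random.default_rng(0)
x, y = rng.normal(0, 1, 50), rng.normal(0.4, 1, 50)
t = students_t(x, y)
t_ref, p_ref = stats.ttest_ind(x, y)   # pooled-variance (equal_var=True) t-test
print(t, t_ref, p_ref)                 # t should match t_ref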
Pearson Correlation Test
The Pearson Correlation test measures the linear correlation between two continuous independent variables. The result r lies in the range [−1, 1], corresponding to the level of negative or positive correlation between the variables [40]. Let π 1 and π 2 be two continuous independent attributes. Let variables x and y represent the values of π 1 and π 2 , where (x 1 , . . . , x k ) and (y 1 , . . . , y k ) denote the observed values for x and y, respectively. Denote the means of x and y as x̄ and ȳ. The Pearson correlation coefficient r is then computed according to the following equation:

r = \frac{\sum_{i=1}^{k}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{k}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{k}(y_i - \bar{y})^2}} \quad (2)

Computing Pearson Correlation in Custodes. Pseudo-code for computing the Pearson correlation coefficient is presented in Protocol 2. The protocol closely follows Equation 2 to compute the statistic and uses TruncPR to adjust the precision of fixed-point values. Lines 1 to 3 compute the empirical mean of the two samples. Lines 4 to 12 compute the sum of the variance of the two samples and the sum of the products of the standard deviations. Lines 13 to 16 compute the square of the numerator. Lines 17 to 18 compute the square of the denominator. Line 21 computes the p-value in the clear by using the square root of r 2 and the degrees of freedom, which, in the case of the Pearson Correlation, is k − 2.
Requirements. Let η ∈ N be an upper bound on the largest absolute value in D. To evaluate a Pearson Correlation test over any two attributes in D, the following requirements must hold for the user-set parameters to ensure no "overflow" occurs during computation: ℓ > 2 log 2 (η) + log 2 (k) + f and N > 2^{ℓ+κ+n+1} . Again, the calculation follows from the description of the circuit in the protocol. Complexity. All sub-protocols invoked are constant-round with the exception of FPDiv, which requires a number of rounds proportional to log 2 (ℓ). We therefore conclude that the protocol requires O(log 2 (ℓ)) rounds and 17 invocations of constant-round sub-protocols.
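A plaintext reference for Equation 2 follows; scipy.stats.pearsonr returns both r and the two-sided p-value (computed with k - 2 degrees of freedom, as noted above). The data is synthetic and the helper name is ours.

import numpy as np
from scipy import stats

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / (np.sqrt(np.sum(xc ** 2)) * np.sqrt(np.sum(yc ** 2)))

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(scale=0.8, size=100)
r = pearson_r(x, y)
r_ref, p_ref = stats.pearsonr(x, y)
print(r, r_ref, p_ref)   # r should match r_ref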
Chi-Squared Test
The Chi-Squared test determines whether the sampling distribution of the test statistic follows a χ 2 distribution when the null hypothesis is true [9,40]. The Chi-Squared test is useful in determining whether there is a significant difference between the expected frequencies and the observed frequencies in a set of observations from mutually exclusive categories. In other words, Chi-Squared evaluates the "goodness of fit" between a set of expected values and observed values; the result is deemed significant, and the null hypothesis rejected, when the observed frequencies deviate sufficiently from the expected frequencies.
For a collection of k observations classified into w mutually exclusive categories, where the observed count in each category is denoted by x i for i = 1, 2, . . . , w, denote the probability that a value falls into the ith category by q i such that \sum_{i=1}^{w} q_i = 1. Note that the expected value for each category is e i = kq i . The Chi-Squared statistic is computed according to the following equation:

\chi^2 = \sum_{i=1}^{w} \frac{(x_i - e_i)^2}{e_i} \quad (3)

Let D contain w mutually exclusive attributes (categories) π 1 , . . . , π w such that D i, π j ∈ {0, 1} for all i = 1 . . . k and j = 1 . . . w. In other words, each category is a Boolean flag representing whether row i in D is in category j. Given such a "raw" dataset D, we need a way to convert D into histogram form H = (h 1 , h 2 , . . . , h w ), filtered based on the selected categories, so as to compute the Chi-Squared statistic over H . To achieve this in a private and secure manner, we must first "pre-process" [D] into a histogram H containing the summations of the w categories selected by the user.
Note: we use H for the purpose of providing a general solution and to remain consistent with the descriptions of the previous two tests (i.e., D has the same format across all tests). If D is already in histogram form, the Chi-Squared test may be applied directly to D.
Chi-Squared in Custodes. Pseudo-code for computing the Chi-Squared test is presented in Protocol 3. It closely follows Equation 3. Requirements. Let η ∈ N be an upper bound on the largest absolute value in D. Let H be a histogram with w̃ attributes (2 ≤ w̃ ≤ w). To correctly perform the Chi-Squared test over the w̃ selected attributes in D, the following requirements must hold for the user-set parameters to ensure no "overflow" occurs during computation: ℓ > log 2 (η) + 2 log 2 (w̃) + f and N > 2^{ℓ+κ+n+1} . The requirement follows from the test circuit description.
Complexity. All sub-protocols invoked are constant-round with the exception of FPDiv, which requires a number of rounds proportional to log 2 (ℓ). We therefore conclude that the protocol requires O(w̃ log 2 (ℓ)) rounds and 3w̃ + 1 invocations of constant-round sub-protocols.
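A plaintext reference for the Chi-Squared statistic follows: it builds the histogram H from a 0/1 category matrix (the "raw" layout of D described above) and compares the result of Equation 3 against scipy.stats.chisquare. The data, category probabilities, and helper name are all illustrative.

import numpy as np
from scipy import stats

def chi_squared(D_cat: np.ndarray, q: np.ndarray):
    k = D_cat.shape[0]
    h = D_cat.sum(axis=0)          # histogram H: observed count per category
    e = k * q                      # expected counts e_i = k * q_i
    chi2 = np.sum((h - e) ** 2 / e)
    return chi2, h, e

rng = np.random.default_rng(2)
q = np.array([0.5, 0.3, 0.2])                  # category probabilities (sum to 1)
cats = rng.choice(3, size=200, p=q)
D_cat = np.eye(3, dtype=int)[cats]             # one-hot rows: k x w Boolean matrix
chi2, h, e = chi_squared(D_cat, q)
chi2_ref, p_ref = stats.chisquare(h, f_exp=e)  # df = w - 1
print(chi2, chi2_ref, p_ref)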
SECURITY
We now argue that Custodes satisfies properties outlined in Section 3.
Correctness. Custodes follows the plaintext execution of statistical tests by using secure versions of the sub-protocols, as presented in Section 5. The main difference from plaintext execution of these tests is in the accuracy of the results, since Custodes guarantees O(f) bits of precision.
Confidentiality. The aim of Custodes is to enforce p-value calculation in a truthful manner. It achieves this goal by hiding the content of D and revealing only metadata about the dataset D sufficient to carry out statistical tests, together with the results of the tests. The data owner reveals the following metadata:
• The size of the dataset: both the number of rows and the number of columns (attributes), k and w, respectively. These values are necessary both for computing statistical tests and for determining the associated p-values.
• Attribute metadata: information on the contents of D such as the characteristics of each attribute, the domain size, and independence from other attributes. For example, metadata for an "age" attribute may be the set {AttrType: Age, NumericRange: 0-110}. We stress, however, that the metadata can be made general and independent of the values in D when deemed of no influence on computation correctness.
The above information can be expressed as a function L s : D k×w → {0, 1} * that takes as input the dataset and returns k, w, and other metadata. We capture the confidentiality property of Custodes using a common experiment used in cryptography where an adversary is required to distinguish whether it is set in the real or an ideal world. In the real world, the adversary (i.e., an entity modeling the collusion of malicious parties) interacts with Custodes as it would in the real setting. In the ideal world, the adversary interacts with a simulator who has access only to the output of L s (D) and to a test result oracle O(D, ·). The oracle O(D, ·) takes the database D and a statistical test circuit, which it evaluates on D, and returns its result. We note that O is a concept that is used only as part of the confidentiality definition and the proof thereof. In particular, it is used to capture the fact that the simulator is not given D but only the results of the tests. If one can show that an adversary cannot distinguish the real from the ideal world, then Custodes does not reveal more about D than what the simulator knows about D. Otherwise, the adversary could use this leaked information to distinguish between the two worlds. We formally capture this experiment below. Definition 6.1. Let Custodes = (Setup, Compute, Audit), let L s be a stateful metadata function, O(D, ·) a test result oracle, n the number of parties, and t the threshold of parties required for decryption. Consider the following probabilistic experiments where A is an honest-but-curious, stateful adversary and S is a stateful simulator: Real A (1 k , n, t): An adversary chooses D and a challenger C runs Setup(1 k , D, n, t) and obtains the encrypted dataset [D] and keys (pk, {sk 1 , sk 2 , . . . , sk n }). Wlog the parties are split into two disjoint sets {P 1 , . . . , P t −1 } and {P t , . . . , P n } where the first set is controlled by A and the second by C, who acts on behalf of honest parties. C sends [D], pk, and the corresponding secret keys to all the parties {P 1 , . . . , P n }.
A then adaptively chooses q test queries {T_1, ..., T_q}, where each T_i corresponds to a valid statistical test circuit. For each T_i, A and C engage in the Compute protocol, which returns result t_i. In the end, A outputs a bit b.
Ideal_{A,S}(1^k, n, t): The adversary chooses a dataset D and S is given the output of L_s(D). S runs Setup_S(1^k, L_s(D), n, t), which returns an encryption of a random dataset [D_R] and (pk, {sk_1, sk_2, ..., sk_n}). S also updates its state. Wlog the parties are split into two disjoint sets {P_1, ..., P_{t−1}} and {P_t, ..., P_n}, where the first set is controlled by A and the second by S.
A then adaptively chooses q test queries {T_1, ..., T_q}, where each T_i corresponds to a valid statistical test circuit. For each T_i, S calls O(D, T_i) and gets the result t_i. S then calls Compute_S(t_i), where Compute_S is an interactive protocol that simulates the interaction of the honest parties in the Compute phase with the parties controlled by the adversary. In the end, A outputs a bit b.
We say that Custodes is (L_s, t_1, ..., t_q)-secure if there exists a PPT simulator S such that for all PPT adversaries A, |Pr[Real_A(1^k, n, t) = 1] − Pr[Ideal_{A,S}(1^k, n, t) = 1]| is negligible in k.
Theorem. Custodes is (L_s, t_1, ..., t_q)-secure, i.e., no information beyond the metadata and the result of the q statistical tests on D is revealed.
See Appendix A for the proof of the above theorem.
Access Control. Though anyone can homomorphically compute on [D], the obtained encrypted result cannot be decrypted unless a threshold number of {P_1, ..., P_n} collude. Moreover, each party performs a partial decryption of a result only if the decryption request message was posted on B and signed by one of the approved parties. Hence, decryption of a ciphertext can only occur if a party from {P_1, ..., P_n} requested a decryption.
Verifiability. This property is guaranteed again by the fact that a threshold number of the parties are required to decrypt a result. That is, even if someone computes on [D], the obtained result is encrypted and will not be decrypted unless more than t of the parties collude. However, every test whose result (t, p) is decrypted has partial decryption shares of (t, p) recorded on B. Hence, even if the party P that initiated the test goes offline after requesting a decryption of (t, p), the certificate on (t, p) can be reconstructed from shares stored on B.
Auditability. Since all local computations are deterministic and the transcripts of interactive protocols are recorded on B, given a certificate ψ = (τ, P_pk, T, (t, p)), an auditor V can evaluate the arithmetic circuit T([D]) without interaction and ensure that each input to a non-linear gate corresponds to a local computation of T([D]). Note that it suffices to ensure the input is correct, given that the output is signed by all parties during the computation. We stress that an audit does not require any SMPC evaluations, given that the transcript and the (encrypted) result are available publicly on B. Thus, an audit of a certificate ψ is essentially a linear scan of records obtained from B plus local homomorphic ciphertext evaluations.
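To make the audit procedure concrete, the following Go sketch replays a recorded trace over a circuit and checks the inputs of non-linear gates against a local replay. The Gate/Ciphertext types, the gate classification, and the evalLinear helper are assumptions introduced for illustration only; they do not correspond to the actual Custodes code.

```go
package audit

import "bytes"

// Ciphertext is an opaque encrypted value; for this sketch it is just bytes.
type Ciphertext []byte

// Gate kinds in a (hypothetical) arithmetic-circuit representation of a test T.
const (
	GateInput     = iota // wire reads an entry of the encrypted dataset [D]
	GateLinear           // addition or multiplication by a public constant
	GateNonLinear        // interactive gate: Mult, FPDiv, comparison, ...
)

// Gate is one node of the statistical-test circuit T.
type Gate struct {
	Kind int
	In   []int // wire ids feeding this gate
	Out  int   // wire id produced by this gate
}

// Audit replays T([D]) locally and checks that every input of a non-linear gate
// matches the ciphertext recorded on the bulletin board B for that wire.
// evalLinear stands in for the auditor's local homomorphic evaluation.
func Audit(circuit []Gate, encDB []Ciphertext, trace map[int]Ciphertext,
	evalLinear func(Gate, map[int]Ciphertext) Ciphertext) bool {

	values := map[int]Ciphertext{}
	for _, g := range circuit {
		switch g.Kind {
		case GateInput:
			values[g.Out] = encDB[g.In[0]] // input wire reads an entry of [D]
		case GateLinear:
			values[g.Out] = evalLinear(g, values) // deterministic, local
		case GateNonLinear:
			for _, in := range g.In {
				if !bytes.Equal(values[in], trace[in]) {
					return false // recorded transcript disagrees with local replay
				}
			}
			values[g.Out] = trace[g.Out] // output was signed by the parties
		}
	}
	return true
}
```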
EXPERIMENTAL EVALUATION
We now turn to demonstrating the applicability of Custodes to the real world. The purpose of this section is to assess the feasibility of Custodes for certifying hypotheses. The goals of our evaluation are as follows:
• Thoroughly evaluate Custodes on a variety of datasets in terms of overall runtime and dataset characteristics.
• Ensure the correctness of statistical tests matches the theoretical guarantees and compare the error (if any) to the equivalent computation performed outside of Custodes.
• Measure the impact of different selection parameters on the overall computation time for each statistical test.
• Evaluate the auditing process for statistical tests computed through Custodes.
Implementation and Environment
We implement Custodes in Go 1.10.1.
The code is open source and available at https://github.com/sachaservan/custodes. The implementation of statistical tests closely follows the pseudo-code described in Section 5. We implement the threshold variant of the Paillier encryption scheme for performing the computation required for evaluating statistical tests, but deviate slightly when computing division gates, as we found that using linear secret-sharing methods for computing division was more efficient in practice. As such, we implement the FPDiv protocol using Shamir secret sharing [39] and convert from Paillier ciphertexts to shares using techniques from [11]. We stress, however, that this is only done for two ciphertexts (the numerator and denominator of the division gate) and not the entire computation. All experiments were conducted on a single machine with Intel Xeon E5 v3 (Haswell) @ 2.30GHz processors (40 cores in total) and 36GB of RAM, running Ubuntu 16.04 LTS. Unless otherwise stated, each result is the average over five separate runs (for each combination of parameters). While in most cases the variance in runtime across runs was minimal, we nonetheless report 95%-confidence markers (under a normal approximation assumption).
Datasets
We evaluate each statistical test on synthetic and real-world datasets. Notice, however, that Custodes's performance is not impacted by the distribution of the underlying data since the data itself is encrypted. We nonetheless evaluate Custodes on two real-world datasets to illustrate this fact.
The real-world datasets are obtained from the UC Irvine Machine Learning Repository [1]. For experiments involving Student's T-test and Pearson Correlation test we use the Abalone dataset containing weights and heights of 4177 Tasmanian abalones. For Chi-Squared we use the Pittsburgh Bridges dataset which contains categorical data on 108 bridges in the city of Pittsburgh.
In addition to these datasets, we generate three continuous synthetic datasets containing 1,000, 5,000 and 10,000 rows, respectively, with random real-valued entries ranging between 1 and 100. We use these datasets to evaluate the Student's T-test and the Pearson Correlation test. Again, we stress that generating datasets at random does not advantage Custodes in any way because computations are distribution agnostic. For evaluating Chi-Squared, we generate three categorical datasets containing 20 mutually exclusive categories. We evaluate the Chi-Squared test on a subset of 5, 10 and 20 categories to demonstrate the impact of varying the number of selected attributes. The characteristics of the datasets used in our evaluation are summarized in Table 2.
Experiment Setup
We benchmark all statistical tests with 3 computing parties (servers), where a threshold majority (t = 2) is required to participate in order to decrypt (i.e., at least one server is honest and does not attempt to collude). Each party is run as a separate process with access to two cores on the computing machine. In other words, each party is simulated as a dual-core machine, separate from the other parties. This setup allows us to simulate a multi-party computing environment while simultaneously controlling for all external variables (e.g., network latency) in the system. We set the simulated network latency (delay between communication rounds) to 1 ms, which simulates average latencies observed on a local area network (LAN); a standard setup used for benchmarking multi-party computations [4,44,47].
Parameters
To ensure all tests are evaluated in a way that enables comparisons between results, we fix the following parameters across all experiments. We let the statistical security parameter κ = 40, which provides 40 bits of statistical security; a common default value [4]. The largest value found in all the datasets is 100.0, so we set η_max = 100.0, and the number of entries in the largest dataset is k_max = 10,000. We fix ℓ = 100, which ensures ℓ > 4 log₂(η_max) + 4 log(k_max) + f, thus satisfying the parameter requirements for all three statistical tests. Finally, we set N (the Paillier modulus) to be a 1024-bit composite, which both ensures security of ciphertexts and guarantees that Z_N (the encryption space) is large enough to support statistical test computations with the given parameters.
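As a small sanity check, the bit-length constraint above can be verified programmatically. The snippet below is only an illustrative sketch; in particular, treating the second logarithm as base 2 is an assumption of this sketch, not something stated in the text.

```go
package params

import "math"

// SatisfiesBitLength checks the constraint l > 4*log2(etaMax) + 4*log2(kMax) + f
// used above to size the bit length l for the three statistical tests.
// (Interpreting both logarithms as base 2 is an assumption of this sketch.)
func SatisfiesBitLength(l, f int, etaMax float64, kMax int) bool {
	need := 4*math.Log2(etaMax) + 4*math.Log2(float64(kMax)) + float64(f)
	return float64(l) > need
}

// Example: with etaMax = 100.0, kMax = 10000 and l = 100, the constraint holds
// for precision values f up to roughly 20 bits.
```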
Results
We break down the results into four sections each of which roughly corresponds to a goal that we outline at the beginning of this section.
System Setup. Across all experiments, the amount of time required to set up Custodes (i.e., generating keys, encrypting the dataset, etc.) remained below 1 second, with a mean of 0.30 seconds (s.d. 0.28). This affirms that the overhead placed on the data owner is minimal.
Correctness. We compare the resulting precision of each statistical test computed in Custodes with results obtained from computing the same test over the same data using the Python SciPy Library [30]. Table 3 reports the mean errors per test. The results did not vary significantly from the mean. The absolute error for all tests was between 0.0 and 0.0009. We observe that the empirical error obtained in Custodes matches the precision expected based on the parameter selection and the probabilistic correctness of the TruncPR protocol.
Runtime Evaluation. We run each statistical test on each configuration of parameters five times and report the average runtime. In all experiments, division required the most computation time, which is not unexpected considering that it is the most computationally expensive protocol invoked by each test. Perhaps surprisingly, the number of parties involved has a larger impact on performance than the size of the dataset. This is due to the high overhead of computing division over encrypted data, which requires many interactions between parties. Both Protocols 1 and 2 require only a single division operation, making the division cost independent of the dataset size. Therefore, for significantly larger datasets this relationship (the party count mattering more than the dataset size) is not likely to hold true. However, when computing the Chi-Squared test, division is overwhelmingly the dominant contributor to the overall runtime. This is due to the nature of the Chi-Squared statistic, which requires one division per category (in the general case), equating to a total of 20 calls to FPDiv for the largest selection of categories used in the Chi-Squared experiments. In practice, however, certain assumptions (e.g., when the expectation is uniform across categories) make it possible to evaluate the Chi-Squared test with only a single division operation (for the purpose of obtaining the reciprocal) and would thus considerably reduce the computing overhead.
Auditing. For each test computed in our evaluation, we store the locally computed ciphertext trace in addition to the results of interactive protocols, which we then use to evaluate the audit process for each computed statistical test. As expected, the audit verified successfully across all computed tests. The total audit time was consistently below 10 seconds, with a mean of 2.48 seconds (s.d. 2.34). This demonstrates that while computing statistical tests in Custodes incurs a computational overhead, the auditing process remains relatively efficient.
Regarding Scalability. The results of the runtime evaluation beg the question: can Custodes scale to a setting with more than a handful of computing parties? We believe the answer is yes. We note that in the experimental evaluation we set the threshold of parties necessary for decryption and computation to be a majority of the total number of parties in the system. In practice, however, this is too stringent a requirement. From a technical standpoint, a system with n parties need only have t of them present in the computation phase, and t need not scale with n. Therefore, even in a setting with thousands of researchers, performance similar to that demonstrated in our evaluation can be achieved.
RELATED WORK
Much of the related work surrounding Custodes focuses either on private computations using homomorphic encryption or on non-cryptographic methods for preventing p-hacking, such as methods for pre-registration of hypotheses. To our knowledge, there has been no prior work using cryptographic techniques for certifying the validity of statistical tests in an auditable way. Lauter et al. [34] and Zhang et al. [43] propose the use of homomorphic encryption to compute statistical tests on encrypted genomic data. However, the constructions are not intended for validating statistical testing procedures but rather for guarding the privacy of patient data. Furthermore, Lauter et al. make several simplifications such as not performing encrypted division (rather performing arithmetic division in the clear), imposing assumptions on the data, etc., making their solution less general compared to the methods for computing statistical tests in Custodes.
[Figure: Chi-Squared Test Runtime Evaluation]
Homomorphic encryption and multi-party computation techniques have been used for outsourcing machine learning tasks of private data [5,22,41,42]. Our work, however, crucially relies on the decryption functionality being decentralized in order to control and keep track of data exposure.
Several non-cryptographic techniques exist to guard against the MCP in statistical analysis. For example, there are several procedures that adjust p-values in scenarios where multiple hypotheses are examined at once. These range from conservative protocols that bound the family-wise error rate [15] to more relaxed procedures that bound the ratio of false rejections among the rejected tests (false discovery rate, FDR) [3] or the marginal FDR [18,45]. When applied properly, these techniques prevent p-hacking in an idealized scenario. However, there is no guarantee that they are applied correctly. A researcher can misuse these procedures, intentionally or unintentionally, and apply them to only a subset of all statistical tests being computed on a given dataset. Furthermore, there is no way for external auditors to verify how these methods were applied. We, on the other hand, leverage these techniques by operating on encrypted data and tracking statistical tests in a tamper-proof ledger, which certifies the correctness of results in an auditable way.
Dwork et al. [16,17] propose a method that constrains an analyst's access to a hold-out dataset as follows. A trusted party keeps a hold-out dataset and answers up to m hypotheses using a differentially private algorithm. Here, m depends on the size of the hold-out data and the generalization error that one is willing to tolerate. Hence, compared to our decentralized approach, this method assumes trust in a single party that (1) does not release the dataset to the researchers and (2) does not answer more than m hypothesis queries.
Finally, we note that preserving privacy of dataset values when releasing results of statistical tests [19,28] is orthogonal to our work.
CONCLUSIONS
We present Custodes, a system that certifies hypothesis testing using proven cryptographic techniques and a decentralized certifying authority. Custodes computes statistical tests over encrypted data and uses blockchain technology to provide auditability of those results. We believe that Custodes is a viable solution to prevent p-hacking and control for false discoveries in a way that promotes accountability and reproducibility in scientific studies. We discuss various theoretical constructions of Custodes and provide details on our particular implementation, in which we support three common statistical tests. We evaluate this implementation on various configurations and dataset sizes.
A DEFINITIONS AND PROOFS
Statistical Security. We make extensive use of the statistically secure MPC protocols introduced in [6,7]. The advantage of using statistical security (as opposed to perfect security) is that many existing protocols can be rendered far more efficient with only slightly more relaxed notions of security.
Definition A.1 (Statistical Security [7]). For any two random variables X and Y with finite sample spaces U and V, respectively, the statistical distance between X and Y is defined as Δ(X, Y) = (1/2) Σ_{u ∈ U ∪ V} |Pr[X = u] − Pr[Y = u]|. X and Y are said to be statistically indistinguishable in the security parameter κ if Δ(X, Y) ≤ 2^{−κ}. In the special case that Δ(X, Y) = 0, we say the distributions are perfectly indistinguishable or information-theoretically secure.
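The statistical distance of Definition A.1 is straightforward to compute for explicitly given finite distributions. The following Go helper is a minimal sketch for illustration only; the function names and the map-based representation are assumptions, not part of the paper.

```go
package statdist

import "math"

// Distance computes Δ(X, Y) = 1/2 * Σ_u |Pr[X = u] − Pr[Y = u]| for two
// distributions given as maps from outcomes to probabilities. Outcomes missing
// from one map are treated as having probability zero there.
func Distance(x, y map[string]float64) float64 {
	support := map[string]struct{}{}
	for u := range x {
		support[u] = struct{}{}
	}
	for u := range y {
		support[u] = struct{}{}
	}
	sum := 0.0
	for u := range support {
		sum += math.Abs(x[u] - y[u])
	}
	return sum / 2
}

// Indistinguishable reports whether Δ(X, Y) ≤ 2^{-kappa}, i.e., whether the two
// distributions are statistically indistinguishable at security parameter kappa.
func Indistinguishable(x, y map[string]float64, kappa int) bool {
	return Distance(x, y) <= math.Pow(2, -float64(kappa))
}
```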
We refer the reader to [6] for a full description of the statistically secure building blocks as well as the proofs of security. We note that when using threshold Paillier as the backbone for MPC (as we do in Custodes), the security assumptions are reduced to a computationally bounded adversary, and thus provide weaker security guarantees than ideal instantiations of statistically secure protocols.
Proof. In order to prove the above statement, we need to instantiate Setup S and Compute S and show that they can produce interactions of the honest parties that are indistinguishable from those in Custodes' protocol. Setup S is straightforward: S creates a random database of the same size as D since it is given this information as part of L s (D). It then calls key generation of the encryption protocol.
In order to simulate the interactions during the test computation, S.Compute is instantiated by combining simulators of the Custodes subprotocols outlined in Section 5.2 that are secure and universally composable. Finally, in order to reveal the correct test result, S uses the decryption simulator and the test result it obtained from the test result oracle O(D, ·). □
B ADDITIONAL EXPERIMENTS
Interactive Protocol Complexity
We measure the total number of multiplications required per statistical test and combination of parameters. Since Mult is the fundamental building block of all interactive protocols, this measurement provides an estimate for the total number of interactive rounds required between parties for computing a given test. We report the results in Tables 5 and 6. For both the Student's T-test and the Pearson Correlation test, the number of multiplications ranged between 35,969 and 57,516 and was dependent on the total number of rows in the dataset. We stress that this also includes the computation of the division gate, and therefore the number of multiplications does not scale linearly with the number of rows for this pair of tests (given there is only a single division operation).
[Table: Student's T-test and Pearson Correlation Complexity]
For Chi-Squared test, the number of multiplications is independent of the number of rows. This is expected given that the protocol for computing Chi-Squared is dependent only on the number of categories selected and not on the number of observations in the dataset.
C ADDITIONAL TECHNICAL DETAILS
Threshold-Paillier Scheme
Let {P 1 , . . . , P n } be a set of n parties where a threshold majority are required to participate in order to decrypt (reveal) a ciphertext.
KeyGen(k) → (pk, {sk_1, ..., sk_n}). Pick two k-bit primes p and q such that p = 2p′ + 1 and q = 2q′ + 1, where p′, q′ are Sophie Germain primes. Set N = pq, M = p′q′, and λ = LCM(p − 1, q − 1). Pick an integer d such that d ≡ 1 (mod N²) and d ≡ 0 (mod M) (see footnote 5). Set t such that t ≤ n and construct the polynomial f(X) = Σ_{i=0}^{t−1} a_i X^i (mod N²M), where a_0 = d and each a_i for i ≥ 1 is picked at random from the set {0, 1, ..., N²M − 1}. For i = 1, ..., n, sk_i = f(i) is the ith share of the secret key d, such that any subset of t shares can be used to decrypt a ciphertext. Set the public key pk = N. Output (pk, sk_1, ..., sk_n).
Enc(pk, m ∈ Z_N) → c ∈ Z_{N²}. To encrypt a message m ∈ Z_N, pick a random r ∈ Z_{N²} and compute the ciphertext c = (N + 1)^m · r^{N²} (mod N²). Output c.
ThresDec(c ∈ Z_{N²}, {P_1, ..., P_n}) → m ∈ Z_N. To decrypt a ciphertext c, each player P_i ∈ {P_1, ..., P_n} uses the private secret key share sk_i to compute c_i = c^{2Δ·sk_i}, where Δ = n!. With the required t unique shares (c_1, ..., c_t), the decryption can be computed by taking the set C = {c_1, ..., c_t} of t shares and combining them using Lagrange interpolation as c′ = Π_{i∈C} c_i^{2μ_i} (mod N²), where μ_i = Δ · Π_{j∈C, j≠i} j/(j − i) ∈ Z. Note that c′ = c^{4Δ²f(0)} = c^{4Δ²d}. Since 4Δ²d ≡ 0 (mod λ) and 4Δ²d ≡ 4Δ² (mod N²), we get that c′ = (N + 1)^{4Δ²m}, where m is the plaintext we wish to extract. Therefore, 4Δ²m can be (efficiently) obtained as described in the original Paillier scheme, which allows us to recover m as m = L(c′) · (4Δ²)^{−1} (mod N), where L(u) = (u − 1)/N as specified in [12,38]. Output m.
Addition of two ciphertexts c_1, c_2 ∈ Z_{N²} encrypting m_1 and m_2 is computed as c′ = c_1 · c_2 · r^{N²} (mod N²) for a random r ∈ Z_{N²}. Observe that c_1 c_2 = (N + 1)^{m_1 + m_2} (r′)^{N²} for some r′ ∈ Z_{N²}; hence the result is equivalent to encrypting m_1 + m_2 using r′ for randomness, making the result correct. We note that the randomization step may be omitted when deterministic computations are acceptable. Output c′.
Multiplication of a ciphertext c ∈ Z_{N²} by a public constant b ∈ Z_N is computed as follows: pick a random r ∈ Z_{N²} and compute c′ = c^b · r^{N²} (mod N²). Observe that c^b = (N + 1)^{mb} (r′)^{N²} for some r′ ∈ Z_{N²}; hence the result is correct. Again, we emphasize that the randomization step may be omitted when needed to achieve deterministic computations. Output c′.
Footnote 5: [12] warns the reader that using d = λ (as per the original Paillier scheme) is not secure in the threshold setting. Therefore, d must be chosen independently of λ.
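To illustrate the homomorphic operations above, the following Go sketch implements encryption, ciphertext addition, and multiplication by a public constant with math/big, following the equations as written (randomizer r^{N²} mod N²). It is a single-key toy, not the threshold scheme; plain (non-threshold) decryption with λ is included only to check the homomorphic properties, and no effort is made to reject degenerate randomness.

```go
package paillier

import (
	"crypto/rand"
	"math/big"
)

var one = big.NewInt(1)

// Enc encrypts m under public key N as c = (N+1)^m * r^(N^2) mod N^2.
func Enc(N, m *big.Int) *big.Int {
	N2 := new(big.Int).Mul(N, N)
	r, _ := rand.Int(rand.Reader, N2) // sketch: r = 0 is not rejected
	g := new(big.Int).Add(N, one)
	c := new(big.Int).Exp(g, m, N2)
	c.Mul(c, new(big.Int).Exp(r, N2, N2))
	return c.Mod(c, N2)
}

// Add returns a re-randomized ciphertext of m1 + m2 given ciphertexts c1, c2.
func Add(N, c1, c2 *big.Int) *big.Int {
	N2 := new(big.Int).Mul(N, N)
	r, _ := rand.Int(rand.Reader, N2)
	c := new(big.Int).Mul(c1, c2)
	c.Mod(c, N2)
	c.Mul(c, new(big.Int).Exp(r, N2, N2))
	return c.Mod(c, N2)
}

// ScalarMul returns a re-randomized ciphertext of b*m given a ciphertext c of m.
func ScalarMul(N, c, b *big.Int) *big.Int {
	N2 := new(big.Int).Mul(N, N)
	r, _ := rand.Int(rand.Reader, N2)
	out := new(big.Int).Exp(c, b, N2)
	out.Mul(out, new(big.Int).Exp(r, N2, N2))
	return out.Mod(out, N2)
}

// Dec is plain (non-threshold) Paillier decryption, included only to verify the
// homomorphic properties: m = L(c^lambda mod N^2) * lambda^{-1} mod N,
// with L(u) = (u - 1) / N.
func Dec(N, lambda, c *big.Int) *big.Int {
	N2 := new(big.Int).Mul(N, N)
	u := new(big.Int).Exp(c, lambda, N2)
	l := new(big.Int).Div(new(big.Int).Sub(u, one), N)
	mu := new(big.Int).ModInverse(lambda, N)
	l.Mul(l, mu)
	return l.Mod(l, N)
}
```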
Fixed-Point Arithmetic in Z N
Representation of real numbers in both the Paillier and secret-sharing schemes is a common problem, given that the message space consists of elements from Z_N. Performing arithmetic over real numbers is therefore non-trivial and requires either floating-point or fixed-point encoding. While floating-point representation provides better precision, it is also far less efficient compared to fixed-point representation, at least in an SMC context [47].
Encoding. Given a public fixed-point precision parameter f, denoting the bits of precision required per computation, we can approximate any real number a ∈ R as a ≈ ⌊a · 2^f⌋ · 2^{−f}.
We define fp_f(a) = ⌊a · 2^f⌋ to be the function encoding real numbers into the corresponding fixed-point representation.
Linear Arithmetic. Addition (and subtraction) of two fixed-point numbers is trivial and can be computed without interaction. Let a, b be two fixed-point encodings with f bits of precision; then their sum decodes to (a + b) · 2^{−f}, i.e., it again carries f bits of precision, as required.
Multiplication. Multiplication of fixed-point numbers is more involved, as we need to scale down the result to have the correct precision. For two fixed-point numbers a, b with f bits of precision, observe that their product ab decodes to (ab) · 2^{−2f}, i.e., it carries 2f bits of precision. Hence, we need to scale down the product ab by 2^f in order to obtain the desired precision. This can be achieved interactively using the TruncPR protocol to obtain (ab)/2^f, so that the resulting value has f bits of precision, as required.
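A plaintext Go illustration of this encoding and of the rescaling after multiplication follows; the secure protocol performs the same arithmetic on encrypted values and replaces the final division by 2^f with the interactive TruncPR protocol. The package layout and the choice f = 16 are assumptions for the sketch.

```go
package fixedpoint

import "math"

const f = 16 // bits of fractional precision (an illustrative choice)

// Encode maps a real number to its fixed-point encoding fp_f(a) = floor(a * 2^f).
func Encode(a float64) int64 { return int64(math.Floor(math.Ldexp(a, f))) }

// Decode maps an encoding back to (an approximation of) the real number.
func Decode(x int64) float64 { return math.Ldexp(float64(x), -f) }

// Add needs no interaction or rescaling: the sum of two f-bit encodings is an
// f-bit encoding of a + b.
func Add(x, y int64) int64 { return x + y }

// Mul multiplies two encodings, giving a product with 2f bits of precision, and
// scales it back down by 2^f; in the secure setting this last step is TruncPR.
// (Overflow is ignored here; the protocol works in Z_N, which is large enough.)
func Mul(x, y int64) int64 { return (x * y) >> f }
```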
Signed Number Encoding in Z N
We designate a range of length ω (e.g., ω = N) within Z_N for the representation of negative integers. More concretely, we let the range [0, ⌈ω/2⌉) represent the non-negative integers in the range [0, ⌈ω/2⌉), and elements in the range [⌈ω/2⌉, ω) encode negative integers in the range [−⌈ω/2⌉, 0). Note that the correctness of both addition and multiplication is preserved with such an encoding, allowing for efficient representation and arithmetic of signed integer values in Z_N, assuming the domain of encrypted values is the range [−⌈ω/2⌉, ⌈ω/2⌉). | 2019-01-19T20:04:48.000Z | 2019-01-19T00:00:00.000 | {
"year": 2019,
"sha1": "d40d9ed8979aa7d6be9c5cfd31db3936c5830839",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d40d9ed8979aa7d6be9c5cfd31db3936c5830839",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
239468594 | pes2o/s2orc | v3-fos-license | Development of Hydrolysis and Defatting Processes for Production of Lowered Fishy Odor Hydrolyzed Collagen from Fatty Skin of Sockeye Salmon (Oncorhynchus nerka)
Lipid oxidation has a negative impact on application and stability of hydrolyzed collagen (HC) powder from fatty fish skin. This study aimed to produce fat-free HC powder from salmon skin via optimization of one-step hydrolysis using mixed proteases (papain and Alcalase) at different levels. Fat removal processes using disk stack centrifugal separator (DSCS) for various cycles and subsequent defatting of HC powder using isopropanol for different cycles were also investigated. One-step hydrolysis by mixed proteases (3% papain and 4% Alcalase) at pH 8 and 60 °C for 240 min provided HC with highest degree of hydrolysis. HC powder having fat removal with DSCS for 9 cycles showed the decreased fat content. HC powder subsequently defatted with isopropanol for 2 cycles (HC-C9/ISP2) had no fat content with lowest fishy odor intensity, peroxide value, and thiobarbituric acid reactive substances than those without defatting and with 1-cycle defatting. HC-C9/ISP2 had high L*-value (84.52) and high protein (94.72%). It contained peptides having molecular weight less than 3 kDa. Glycine and imino acids were dominant amino acid. HC-C9/ISP2 had Na, Ca, P, and lowered odorous constituents. Combined processes including hydrolysis and defatting could therefore render HC powder free of fat and negligible fishy odor.
Introduction
Hydrolyzed collagen (HC), especially from fish skin or scale, has drawn attention as a promising source of biologically active peptides [1]. HC has been documented to induce skin nourishment [2] and bone strengthening [3]. HC from fish skins is acceptable as Kosher and Halal product [4]. Skins of fish including unicorn leatherjacket (Aluterus monoceros) [5], Asian seabass (Lates calcarifer) [6], etc. have been used for production of hydrolyzed collagen. In general, an enzymatic process is commonly used due to the mild condition with less side effects [7]. The optimization of pretreatment or enzymatic hydrolysis has been carried out to obtain the high yield, along with the desired characteristics or quality [8]. Under the controlled enzymatic hydrolysis, the type of enzyme as well as hydrolysis process had the influence on the degree of hydrolysis of the resulting hydrolysate [9,10]. Benjakul, Karnjanapratum, and Visessanguan [1] reported that HC prepared from seabass (Lates calcarifer) skin using 3% (w/w, based on solid content) papain in the first step of hydrolysis, followed by using 2% (w/w, based on solid content) Alcalase for the second step, consisted of antioxidative peptides. However, little information on the use of mixed proteases within one step for HC production exists.
Salmon is globally one of the most popular fish species for consumption due to its high nutritive value and high bioactive pigment, namely astaxanthin. Salmon meat contains high quality protein with a high amount of essential amino acids [11]. Additionally, salmon meat has an attractive taste, smell, and color. The salmon processing industry mostly disposes of a large amount of by-products, such as skins (about 7%) generated during deskinning [11]. Enzymatic hydrolysis has been widely used to produce protein hydrolysate with high protein recovery from salmon skin including farmed salmon, Atlantic salmon, and salmon (Salmo salar) [12][13][14]. Via the optimal hydrolytic process, the resulting hydrolysate contained the peptides with high bioactivity such as antioxidant activity, proliferation of fibroblast, and osteoblast cells [1][2][3]. However, salmon skin has a high fat content (23.3-61.5% dry basis) and is rich in polyunsaturated fatty acids (PUFA) [11,15]. The oxidation of PUFA associated with skin matrix is a drawback associated with undesirable characteristic, particularly fishy odor/flavor and instability of HC. Several techniques have been developed to reduce fat content of fish skin. Sae-leaw, et al. [16] documented that citric acid pretreatment, followed by defatting with 30% isopropanol, could remove fats and fishy odor in seabass skin gelatin. Chotphruethipong, et al. [17] reported that pulsed electric field (PEF) in conjunction with pancreas lipase (25 U/g dry matter) with the aid of vacuum impregnation could remove 86.93% fats from seabass skins. As a consequence, HC prepared from defatted skin showed much lower fishy odor and flavor [18]. Although some fat can be removed from skin, the fat retained is released and present in HC, especially in form of emulsion, leading to an undesirable fishy odor.
To reduce the use of solvent and time consumed, the use of disk stack centrifugal separator (DSCS) could be employed. Jukkola, et al. [19] documented that DSCS could separate fat globules from milk up to 93% when the centrifugal number was increased to 3 cycles. Nonetheless, no information regarding the use of DSCS for fat removal of HC from fish skin, especially that produced using mixed proteases for one-step hydrolysis process, has been reported. Furthermore, to increase oxidative stability of HC, appropriate solvent could be further used to extract the fat residue in HC powder. Therefore, this study aimed to develop the hydrolysis of salmon skin using mixed proteases for one-step process, and to investigate fat removal processes using DSCS and subsequent solvent extraction to yield HC powder lacking in fat and fishy odor.
Preparation of Salmon Skin
Skins of sockeye salmon (Oncorhynchus nerka) collected from the Nissui (Thailand) Co., Ltd., Songkhla, Thailand were stored in ice (1:2, w/w), in which a polystyrene box was used as a container. Within 1 h of transportation, the skins reached the International Center of Excellence in Seafood Science and Innovation. The remaining meat was removed from the skin manually and the skin was washed with cold water (≤10 • C). The prepared skins were stored at −20 • C under the vacuum in polyethylene bags (less than 1 month).
Before use, the size of frozen skins was reduced to small pieces (3.0 × 3.0 cm 2 ) using an electric sawing machine. The prepared samples (3.0 × 3.0 cm 2 ) were then subjected to non-collagenous protein removal process as tailored by Benjakul, Karnjanapratum and Visessanguan [1]. Skins were mixed with 0.05 M NaOH (1:10, w/v) and stirred gently at 150 rpm. After 1 h, the soaking solution was drained, and the alkali pretreatment was performed in the same manner for a total of three times. Alkali-treated skins were washed with water until neutralized. Thereafter, the prepared skin was soaked in 0.05 M citric acid (1:10, w/v) with gentle agitation for 15 min and left for 45 min. The swollen skins were rinsed with water until a neutral pH was reached. All the operations were run at room temperature (RT; 28-30 • C). The swollen skins were mixed with distilled water (1:5, w/v) and adjusted to pH 7 or 8 using 1.0 M HCl or 1.0 M NaOH. Mixed papain (P) and Alcalase (A) at varying concentrations (w/w, based on solid content) included 3%P + 3%A, 3%P + 4%A, 4%P + 3%A, and 4%P + 4%A. The aforementioned mixed proteases were added in prepared mixture and incubated in a temperature-controlled water bath at 50 • C with continuous stirring using a propeller. During hydrolysis up to 270 min, the samples (1 mL) were taken and added with hot 5% SDS (85 • C) at a 1:1 (v/v) ratio and heated at 85 • C for 30 min to terminate reaction and solubilize total proteins and peptides. Degree of hydrolysis (DH) was measured as tailored by Benjakul and Morrissey [20]. Hydrolysis condition (mixed proteases, hydrolysis pH and time) yielding the highest DH was chosen for further study.
Effect of Temperature
Swollen skins mixed with distilled water were prepared as previously detailed and pH was adjusted to 8. The selected mixed proteases as described above (3%P + 4%A) were added into the mixture. Hydrolysis was run at 50, 55, and 60 • C up to 270 min. The samples were taken and added with hot SDS solution, as previously described. Hydrolysis temperature rendering the highest DH was chosen.
The selected HC was then lyophilized using a freeze-dryer (Model Duratop™ lP/Dura Dry™ lP, FTS ® System, Inc., Stone Ridge, NY, USA) at −50 • C for 72 h. To determine fatty acid profile of resulting HC powder, lipid was extracted from the powder using Bligh and Dyer [21] method. Transmethylation was done with the aid of 2 M methanolic NaOH and 2 M methanolic HCl to attain fatty acid methyl esters (FAMEs) as described by Muhammed, et al. [22]. The Agilent 7890B (Santa Clara, CA, USA) gas chromatography system connected with flame ionization detector (FID) was used and the condition for separation was used as detailed by Nilsuwan, et al. [23].
Effect of Centrifugal Cycles on Fat Removal of HC
HC was prepared from swollen skins using 3%P + 4%A at pH 8 for 240 min, followed by heating at 90 • C for 15 min. HC solution was cooled to RT and subjected to fat removal using the disk stack centrifugal separator (SPX FLOW Technology Italia S.p.A., Milan, Italy) with the feed rate of 2.0 L/min for 0, 3, 6 and 9 cycles. The pressure of heavy phase (hydrolysate phase) and light phase (fat phase) was 2.0 and 0.5 bar, respectively. HC solutions subjected to fat removal for different centrifugal cycles were collected and lyophilized. Lyophilized HC samples were analyzed.
Fat Content
HC powder without and with fat removal for different centrifugal cycles were determined for fat content as per the method of AOAC [24] using Soxhlet apparatus (Soxterm-Gerhardt, Bonn, Germany). Petroleum ether was used as a solvent.
Effect of Isopropanol on Fat Removal of HC Powder
HC defatted using DSCS for 9 cycles showing the lowest fat content and highest L* value was selected and lyophilized, so called 'HC powder'. HC powder was mixed with isopropanol at a solid/solvent ratio of 1:10 (w/v). The mixture was further mixed with the aid of ultrasonic processor using a 13-mm probe at an amplitude of 70% with pulse mode (on/off at 10 s) for 10 min. The mixture was then stirred at 150 rpm for 20 min with a magnetic stirrer at 4-8 • C and subsequently centrifuged at 10,000× g for 20 min at 4 • C. HC was collected and the defatting process was conducted in the same manner for 2 and 3 cycles. All the collected HC samples defatted with isopropanol for 1, 2, and 3 cycles were spread on the tray for evaporation at 25 • C for 30 min with the aid of blowing air. All HC powders were subjected to analyses.
Fat Content and Color
Fat content [24] and color [1] were determined as mentioned in Sections 2.4.1 and 2.4.2, respectively.
Fishy Odor Intensity (FOI)
FOI was determined following the procedure of Sae-leaw, Benjakul, and O'Brien [16] using 10 trained panelists. Panelists were trained with standards as detailed by Sae-leaw, Benjakul, and O'Brien [16]. Ten-day iced-stored salmon skin was hydrolyzed under the selected condition and the resulting HC was used as a standard of fishy odor. The standard HC was mixed with water to obtain concentration of 0, 0.5, and 1% (w/v) representing the score of 0 (none), 2 (medium), and 4 (strong), respectively. HC powder sample was dissolved in water to have a final concentration of 0.75%. Solution was placed in sealable cup. The panelists were asked to sniff the headspace above the samples and assessed for FOI.
Characterization of Selected Defatted HC
HC with the lowest fat content and FOI (HC powder defatted with isopropanol for 2 cycles) was characterized.
Size Distribution
Molecular weight of HC powder was determined by MALDI-TOF following the method of Benjakul, Karnjanapratum and Visessanguan [2]. The Autoflex Speed MALDI-TOF (Bruker, GmbH, Bremen, Germany) mass spectrometer equipped with a 337 nm nitrogen laser was used.
Amino Acid Composition
HC powder was treated with 4.0 M methanesulphonic acid at 110 • C for 22 h under the reduced pressure to prevent the oxidation of tryptophan. The sample was neutralized with 3.5 M NaOH and diluted with 0.2 M citrate buffer (pH 2.2). An aliquot (100 µL) was injected into an amino acid analyzer (JLC-500/V AminoTac™, JEOL USA Inc., Peabody, MA, USA).
Proximate Composition and Mineral Content
Protein and ash contents of HC powder defatted with isopropanol for 2 cycles was analyzed using AOAC analytical methods no. 920.153 and 928.08, respectively [24]. Inductively coupled plasma optical emission spectrometer (ICP-OES) (Model AVIO 500, Perkin Elmer, Shelton, CT, USA) was used for measurement of Na, Ca, and P contents in HC as detailed by Benjakul, et al. [27].
Volatile Compounds
HC powder defatted with isopropanol for 2 cycles was analyzed for volatile compounds using solid-phase microextraction gas chromatography mass spectrometry (SPME GC-MS) [28], in comparison with the HC defatted using the disk stack centrifugal separator for 9 cycles (HC-C9) and the HC without fat removal (HC-W/O).
Statistical Analysis
Completely randomized design (CRD) was used for the entire study. Experiment and analysis were conducted in triplicate (n = 3). Analysis of variance (ANOVA) was done and the differences among the samples were determined using Duncan's multiple range test at the p < 0.05 level. The analysis was performed with a SPSS package (SPSS for windows, SPSS Inc., Chicago, IL, USA). Figure 1. Generally, the DH of all HC samples increased with augmenting hydrolysis time. High hydrolysis rate was noted within the first 30 min, reflecting that numerous peptide bonds were cleaved. Lower rate of hydrolysis was subsequently obtained and reached a plateau. Those changes were depending on proteases and pH used. At pH 7, the hydrolysis times to reach the plateau were 120, 150, 90 and 90 min when 3%P + 3%A, 3%P + 4%A, 4%P + 3%A and 4%P + 4%A were applied, respectively. When pH 8 was used, hydrolysis reached the plateau at 180, 240, 150, and 150 min as 3%P + 3%A, 3%P + 4%A, 4%P + 3%A, and 4%P + 4%A were applied, respectively. This result suggested that the decreased hydrolysis rate was more likely due to a depletion of substrate, enzyme autodigestion, and/or product inhibition [7]. Overall, hydrolysis pattern was governed by types of proteases and pH used. When comparing DH of HC prepared at pH 7 and 8, higher DH was observed for that prepared at pH 8 (p < 0.05), regardless of the concentration of mixed proteases and hydrolysis time used. This result suggested that the mixed proteases were more active at alkaline pH. Commonly, papain was capable of hydrolyzing peptide bonds at optimum pH of 7-7.5, while Alcalase showed a broad activity in alkaline pH range (8-10) [10]. Adler-Nissen [29] documented that pH had the influence on the substrate and enzyme by changing the charge distribution and conformation of the molecules. As a result, pH was a vital factor for determining DH.
Results and Discussion
Moreover, the highest DH was observed for HC prepared at pH 8 with the aid of 3%P + 4%A for 240 min. Under the optimum hydrolysis condition, mixed proteases (both papain and Alcalase) might hydrolyze swollen skin with loosened structure. As a consequence, a number of peptides were released and present in the hydrolysate. Chotphruethipong, et al. [30] documented that papain was commonly able to hydrolyze peptide bonds that involve glycine, basic amino acids or leucine, while Alcalase prefers to cleave peptide bonds containing hydrophobic residues [31]. In the presence of both proteases, which had broad specificity, peptide bonds were cleaved effectively as witnessed by the marked increase in DH. Figure 2 shows the DH of HC from salmon skin hydrolyzed at pH 8 with the selected mixed proteases (3%P + 4%A) at different temperatures (50, 55, and 60 • C). Among all temperatures used, the HC hydrolyzed at 60 • C showed the highest DH (p < 0.05) when hydrolysis time of 240 min was used. This result indicated that the optimal hydrolysis temperature could favor hydrolysis, thus producing the different peptides in HC. Klomklao and Benjakul [10] found that temperature had a profound impact on the activity of enzymes. In addition, temperature higher than denaturation temperature could induce the unfolding of proteins, allowing the exposure of cleavage sites previously located inside the protein molecules. Thus, hydrolysis of peptides could be enhanced at appropriate temperature [10]. Chotphruethipong, Binlateh, Hutamekalin, Sukketsiri, Aluko, and Benjakul [6] reported that HC generally has a wide range of amino acid composition, sequence, and chain length, which have an impact on functionality and bioactivity. Therefore, mixed proteases (3%P + 4%A) showed an efficient hydrolysis toward salmon skin at pH 8 and 60 • C and the optimal hydrolysis time was 240 min.
Fat Content
Fat content of HC from salmon skin hydrolysate after fat removal with DSCS at different cycles (0, 3, 6, and 9 cycles) is presented in Table 2. Generally, fat content in HC without fat removal was 20.87% (dry weight basis). After DSCS was applied, the fat contents of 8.53, 8.44, and 7.80% were obtained for the HC after fat removal for 3, 6, and 9 cycles, respectively. The fat contents of HC subjected to fat removal with DSCS for 3, 6, and 9 cycles were reduced from that of HC without fat removal by 59.14, 59.56, and 62.63%, respectively. Among all HC samples, the HC defatted using DSCS for 9 cycles showed the lowest fat content (p < 0.05). Basically, the DSCS has very high G-force and is efficient for separating two liquids. The physical separation takes place within the disc stack, in which the light liquid phase (fat phase) positioning near the bowl axis and the heavy phase (hydrolysate phase) locating at the bowl wall are attained [32]. Therefore, the fat removal with DSCS for 9 cycles lowered fat content in HC from salmon skin without the use of solvents. Values are presented as mean ± SD (n = 3). Different lowercase letters in the same row indicated significant differences (p < 0.05).
Color
The color of powder of HC after fat removal with DSCS for various cycles is presented in Table 2. HC without fat removal obviously had a dark brown color, with the lowest L*-value and highest a*and b*-values (p < 0.05). As the cycles of fat removal by DSCS was increased, the increases in L*-value along with the decreases in a*and b*-values were attained for HC powders. The highest L*and ∆E*-values, along with lowest b*-value, were obtained for HC with fat removal for 9 cycles (p < 0.05). Color found in HC was related to the amount of pigment cells (chromatophores) in the epidermal layer of salmon skin [33]. Astaxanthin, which is a common fat soluble compound found in salmon, was a main pigment that contributed to the red and orange color in salmon skin [34]. Apart from astaxanthin, melanin was also present in the skin, causing the dark color of HC [33]. In this regard, fat removal process using DSCS could facilitate the removal of pigments in HC solution, especially astaxanthin dissolved in fat, resulting in the increased lightness of resulting HC powder. For melanin, which is water soluble, it might be co-separated with astaxanthin. Melanin could be associated with astaxanthin via hydroxyl group in its structure. This was evidenced by dark color of fat rich fraction after DSCS separation. Additionally, the high fat content could also enhance the discoloration of HC caused by Maillard reaction [16].
Effect of Isopropanol on Fat Removal of HC Powder
HC powder defatted with DSCS for 9 cycles still had high fat content. Therefore, fat extraction with isopropanol of HC powder was further conducted.
Fat Content
Defatting using isopropanol for different cycles (0, 1, 2, and 3 cycles) was applied to the selected HC powder. The efficacy of isopropanol in defatting HC powder was increased with augmenting defatting cycles ( Table 3). The higher fat content was observed in the HC powder without defatting with isopropanol (7.78%, dry weight basis), compared to those of HC powders defatted with isopropanol, regardless of cycle used. After defatting with isopropanol for 1 cycle, 1.23% fat (dry weight basis) was found, in which fat was removed by 84.19%, compared to that of HC powder without defatting using isopropanol. Additionally, HC powders defatted with isopropanol for 2 and 3 cycles had no fat detected. Thiansilakul, Benjakul, and Shahidi [31] found that isopropanol was able to remove fat in round scad muscle. Since lipids dispersed in HC were more likely polar in nature, the solvent with slight polarity should be appropriate for the removal of aforementioned lipids. Values are presented as mean ± SD (n = 3). Different lowercase letters in the same row indicated significant differences (p < 0.05). ND: not detected.
Color
The color of HC powders defatted with isopropanol for different cycles is shown in Table 3. Generally, the increases in L*and ∆E*-values, along with decreases in a* and b*-values, were noticeable for HC powders as the number of defatting cycles was augmented (p < 0.05). HC powder defatted with isopropanol for 3 cycles had the highest L*-value together with lowest a*-value and b*-value (p < 0.05). Via isopropanol defatting process, fat in the HC powder was more likely removed to a high extent. Moreover, the defatting process using isopropanol might facilitate the removal of pigments, leading to the augmented lightness. Sae-leaw, Benjakul, and O'Brien [16] documented that HC from fish skin had chromatophores rendering color. Additionally, the oxidation of lipid during enzymatic hydrolysis might occur. Lipid oxidation products, especially aldehyde compounds [35], were related with yellow discoloration via the Maillard reaction [16]. Therefore, HC powders defatted with isopropanol for 2 and 3 cycles, which had negligible fat content, were less susceptible to discoloration, as witnessed by the whiter color of HC powder.
Fishy Odor Intensity (FOI)
FOI of HC powder defatted with isopropanol for various cycles is shown in Table 3. All HC powders defatted with isopropanol showed the profound decrease in FOI, compared to HC powder without defatting using isopropanol, regardless of number of cycles used. The lowest FOI was found for HC samples defatted with isopropanol for 2 and 3 cycles (p < 0.05). This was in accordance with the marked decrease in fat content and secondary lipid oxidation products (SOPs). It was in agreement with Sae-leaw and Benjakul [36], who reported that the SOPs are one of the major contributors to undesirable fishy odor. Therefore, HC powder defatted with isopropanol for 2 cycles had significantly decreased FOI.
Peroxide Value (PV)
PV of HC powders without and with defatting by isopropanol for different cycles is shown in Table 3. The highest PV was found for HC powder without defatting (p < 0.05). The reduction in PV was generally observed as the number of defatting cycles was augmented (p < 0.05). HC powder defatted with isopropanol for 2 and 3 cycles showed the lowest PV (p < 0.05). The defatting with isopropanol might have removed fat as well as some primary lipid oxidation products such as hydroperoxides from HC powder more effectively. PV has been used to measure primary lipid oxidation product after H-atom is abstracted and the radicals are combined with oxygen molecules [35]. However, hydroperoxide is not stable and readily decomposed to SOPs. This coincided with low PV found in all samples.
Thiobarbituric Acid Reactive Substances (TBARS) Value
TBARS values of HC powders without and with defatting using isopropanol for various cycles are presented in Table 3. In general, TBARS value of HC powder without defatting using isopropanol was 7.30 mg MDA equivalent/100 g dry HC. Decomposition of hydroperoxides into SOPs might occur in HC during the DSCS fat removal process for 9 cycles, in which air or oxygen could be incorporated into HC containing lipids. However, the defatting with isopropanol for 1-3 cycles could reduce TBARS to 5.22-5.85 mg MDA equivalent/100 g dry HC. Among all the samples, the HC powder defatted with isopropanol for 2 and 3 cycles possessed the lowest TBARS value (p < 0.05). This result indicated that the defatting HC powder with isopropanol for at least 2 cycles could remove the SOPs in HC powder more effectively. This was in line with Sae-leaw, Benjakul, and O'Brien [16], who documented that the decreasing TBARS value was obtained in seabass skin after defatting with 30% isopropanol. Nevertheless, some TBARS were still retained in all HC powders after isopropanol defatting process. This might be associated with the aggregation of proteins or peptides during defatting process, which could entrap SOPs. This result was in line with Thiansilakul, Benjakul, and Shahidi [31], who documented that the aggregation of protein was obtained for round scad muscle after defatting with ethanol and isopropanol. As a result, SOPs could not be eliminated from defatted HC powder easily. Lowered TBARS, especially aldehydes, alcohols, and ketones [35], could contribute to a slightly fishy odor in resulting defatted HC powder. However, such a decrease could also enhance storage stability of defatted HC powder. Isopropanol was used for defatting of HC powder. Alcohol is generally capable of dissolving aldehydes. As a result, those aldehydes, the secondary oxidation products, could be leached out by isopropanol washing.
Size Distribution
Molecular weight (MW) of peptides in HC powder defatted with isopropanol for 2 cycles as determined by MALDI-TOF mass spectrometry is displayed in Figure 3. Several peaks were observed in the spectrum, indicating the presence of various peptides having various MW in HC. All peptides had MW lower than 3000 Da. The peptide with MW of 1460 Da was dominant, followed by those with MW of 1975 and 1534 Da, respectively. This result indicated that the hydrolysis process using mixed proteases (both papain and Alcalase) under optimum conditions could hydrolyze swollen skin and provide a number of low MW peptides in the HC. Additionally, the MW is a key factor governing the biological and functional properties of hydrolysates [37]. Chotphruethipong, Binlateh, Hutamekalin, Aluko, Tepaamorndech, Zhang, and Benjakul [3] documented that the HC from seabass skin with low MW (<3000 Da) could promote osteoblast cell proliferation and enhanced alkaline phosphatase activity (AP-A) during the first 7 days, while inducing mineralization at day 21. It is worth noting that the HC with low MW peptides could be used as functional ingredients or nutraceuticals [37]. Figure 3. Size distribution of HC powder defatted using DSCS for 9 cycles, followed by washing using isopropanol for 2 cycles determined by MALDI-TOF mass spectrometry.
Amino Acid Composition
Amino acid compositions of HC powder defatted with isopropanol for 2 cycles are presented in Table 4. HC powder had glycine as the dominant amino acid (359 residues/1000 residues), followed by proline (93 residues/1000 residues) and alanine (80 residues/1000 residues) and glutamic acid/glutamine (65 residues/1000 residues). Tyrosine (8 residues/1000 residues) and hydroxylysine (5 residues/1000 residues) were also found. HC powder contained no cysteine. Benjakul, Karnjanapratum, and Visessanguan [1] reported glycine (326 residues/1000 residues), alanine (128 residues/1000 residues), and proline (112 residues/1000 residues) in the HC from seabass skin. Glycine generally occurs at every third position in collagen polypeptide and imino acids (proline and hydroxyproline) are also constituted [38]. HC powder consisted of 20.90% essential amino acids. Moreover, the lower content of imino acids (137 residues/1000 residues) in defatted HC from salmon skin was observed than that of HC from seabass skin (192 residues/1000 residues). For hydrophobic amino acids, the defatted HC showed 64.10%, while HC from seabass skin constituted 67.48% hydrophobic amino acid as reported by Chotphruethipong, Aluko, and Benjakul [30]. This was plausibly governed by different species and processes used for HC production. It was postulated that the defatting process with isopropanol could remove free imino acids and hydrophobic amino acids such as proline, hydroxyproline, alanine, and leucine in HC powder to some degree. Thus, defatting using solvent likely affected amino acid composition of HC to some extent. Table 5 shows proximate compositions of HC powder defatted with isopropanol for 2 cycles. Typically, HC powder was hygroscopic, related with the charged or polar residues exposed after hydrolysis [39]. The resulting HC powder was rich in protein (94.72%, dry weight basis), indicating that proteins in HC became more concentrated after the fat removal processes [40]. In addition, ash content of HC was 5.08% (dry weight basis). This was possibly associated with salt formed during neutralization between acid and alkaline solutions during hydrolysis [8]. This was confirmed by high sodium content (2.50%, dry weight basis) (Table 5). Moreover, the salmon scale contained high amount of mineral, which could also be released to HC during hydrolysis as indicated by the presence of calcium and phosphorous in HC at 0.32 and 0.31% (dry weight basis), respectively. Ca-hydroxyapatite has been known to be the major mineral complex in fish scales [41].
Volatile Compounds
Volatile compounds in HC powder defatted with isopropanol for 2 cycles (HC-C9/ISP2) in comparison with the HC defatted using centrifugal separator for 9 cycles (HC-C9) and HC without fat removal are shown in Table 6. Aldehydes are the most prevalent volatile SOPs in HC and have been used as the index of lipid oxidation in foods due to their low odor threshold value. They majorly contribute to off-odor and off-flavor [42]. Ross and Smith [42] documented that several aldehydes generated during oxidation were octanal, nonanal, pentanal, hexanal, etc. Among all aldehydic compounds in the present study, nonanal was the most dominant in HC, followed by undecanal, 2,6-nonadienal, octanal, and heptanal, respectively (Table 6). Principally, n-alkanals, 2-alkenals and 2,4-alkadienals occurred from oxidation of n-6 or n-9 unsaturated fatty acids, especially oleic acid [43]. Salmon skin contained oleic acid at a high level, 16.76 g/100 g lipid (Table 1). This led to a large number of aldehydes formed in HC. HC-C9 generally had a higher abundance of aldehydes than HC. This might be associated with higher lipid oxidation occurring during fat removal with DSCS for 9 cycles. The long exposure to centrifugal shearing force of DSCS might decrease oil droplet size with coincidentally increasing surface area of retained fat in HC solution. As a consequence, lipid oxidation could be enhanced. Nevertheless, the abundance of all aldehydes was lowered in HC-C9/ISP2 in comparison with the HC and HC-C9. No 2-pentenal, 4-heptenal, 2-methyl-2-heptenal, 2-octenal, 2-propyl-2-heptenal, and 2,4-heptadienal were detected in HC-C9/ISP2. This result suggested that the defatting of HC powder with isopropanol for 2 cycles could lower the amount of aldehydes. This was related to the lower PV and TBARS, as documented in Table 3. Regarding the FOI, Varlet et al. [44] documented that carbonyl compounds, especially aldehydes involving 4-heptenal, octanal, decanal, and 2,4-decadienal, mainly contributed to fishy odor in salmon flesh. The lower abundance of octanal, along with disappearance of 4-heptenal, was observed for HC-C9/ISP2, compared to those of the HC and HC-C9 samples. It is worth noting that the lower FOI was obtained for HC powder defatted with isopropanol for 2 cycles (Table 3). This was in line with Chang, et al. [45], who documented that the use of 35-75% isopropanol was very efficient in reducing volatile compounds, particularly aldehydes, in lentil protein isolate.
Overall, the lower SOPs involving aldehydes, alcohols, and ketones coincided with the decreased PV and TBARS values ( Table 3). The results suggested that SOPs were lower in HC-C9/ISP2, compared to HC and HC-C9. Thus, the fat removal via DSCS process for 9 cycles, followed by treatment using isopropanol for 2 cycles, was effective in the removal of fat, which was a precursor for oxidation as well as in eliminating SOPs associated with offensive fishy odor in resulting HC.
Conclusions
The production of fat-free hydrolyzed collagen (HC) powder from salmon skin with negligible fishy odor was achieved. One-step hydrolysis using mixed proteases (3% papain and 4% Alcalase) at pH 8 and 60 °C for 240 min was used for HC production. HC powder from hydrolysate with fat removal using DSCS for 9 cycles had a lower fat content and higher lightness, compared to that without fat removal. HC powder was subsequently defatted with isopropanol for 2 cycles (HC-C9/ISP2). The obtained HC powder was free of fat, had lowered fishy odor and lipid oxidation products, and showed the highest L*-value (84.52) (p < 0.05). HC-C9/ISP2 contained peptides with MW less than 3000 Da. HC-C9/ISP2 had high protein content with high amounts of glycine and imino acids. HC-C9/ISP2 had a lower abundance of odorous compounds, compared to that without the fat removal process. Therefore, the developed HC powder lacking in fat and fishy odor could be potentially applied in food or pharmaceutical products. | 2021-10-19T15:53:31.780Z | 2021-09-23T00:00:00.000 | {
"year": 2021,
"sha1": "dfd1a662eda04ba4a6b63a5831c44aca2dd83859",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/10/10/2257/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "12ac289470c3fede9213ebf4f6f19ccecae3522d",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
196537822 | pes2o/s2orc | v3-fos-license | Right Asepsis with ANTT® for Infection Prevention
Aseptic technique, which involves infection prevention actions designed to protect patients from infection when undergoing invasive clinical procedures, is universally prescribed by guideline makers as a critical competency in the prevention of infections. However, no meaningful explanation of what aseptic technique is or how it is to be applied to ensure patient safety is provided within any of the guidelines. The Aseptic Non Touch Technique (ANTT®), originated by Rowley in the late 1990s, was designed to help address variable aseptic technique standards of practice and provide a rationalized, contemporary, evidence-based framework to standardize this critical competency and help improve standards of practice. The ANTT® Clinical Practice Framework provides a comprehensive framework for aseptic technique for all invasive procedures based on an approach termed Key-Part and Key-Site Protection. During the insertion or manipulation of an intravascular device, the ‘ANTT-Approach’ provides a systematic method that supports the practitioner to include all the important elements of aseptic technique, with particular focus on the identification and protection of ‘Key-Parts’ and ‘Key-Sites’ throughout the preparation and the procedure. This chapter provides clinical examples of how the ANTT® is implemented in the healthcare setting, as well as, importantly, how to promote compliance of the technique.
Which Aseptic Technique Is the 'Right' Aseptic Technique?
At its heart, aseptic technique involves a collection of infection prevention actions designed to protect patients from infection when undergoing invasive clinical procedures, including maintenance of indwelling medical devices. A combination of decontamination processes, sterilized equipment and handling technique is used to minimize potential transmission of pathogenic microorganisms (Clare and Rowley 2017). In terms of protecting patients from infection, aseptic technique is one of the most important and commonly used critical clinical competencies in healthcare. Although the prescription for aseptic technique is universal, agreement on the aim, definition, description and application of aseptic technique is not (Preston 2005;NICE 2012;Gorski et al. 2016).
Historical and contemporary literature presents a confused hierarchy of terms and practices including sterile technique, aseptic technique, clean technique and non-touch technique, further compounded by interchangeable use and meaning (Aziz 2009;Rowley et al. 2010;Unsworth and Collins 2011). Internationally, it seems to have become 'fashionable' for guideline makers to address the historical confusion surrounding aseptic technique by simply prescribing the generic term 'aseptic technique', with virtually no meaningful explanation of what aseptic technique is or how it is to be applied to ensure patient safety (NICE 2012;Gorski et al. 2016;RCN 2016).
Such 'prescription without explanation' by major stakeholders appears to have undermined this critical clinical competency and potentially fuels the complacency associated with aseptic technique. In education, students and qualified staff are taught various educational concepts and practical methods of performing essentially the same activity using different methods and descriptors (Hartley 2005;Flores 2008;Aziz 2009;Rowley et al. 2010). But most concerning of all is the effect of a confused literature on clinical practice with poor standards of aseptic technique commonly and consistently reported (Hartley 2005;Flores 2008;Aziz 2009;Rowley et al. 2010;Unsworth and Collins 2011). As a result, concern for standards of aseptic technique has been reflected in a wide range of international initiatives concerned with improving infection prevention such as the Keystone project (Pronovost et al. 2006;Pronovost 2008), 5 Moments for Hand Hygiene (Sax et al. 2007), Saving Lives [UK] (DH 2007) and the 100,000 Lives Campaign [USA] (Berwick et al. 2006).
Given the significantly invasive nature of inserting indwelling medical devices and their ongoing care and maintenance, aseptic technique has always been of much concern in the field of intravenous therapy. However, despite this, historically and contemporaneously, the field of IV therapy fares no better in aseptic technique than any other field of clinical practice. Moreover, given the incidence of particularly high-profile organisms such as MRSA, IV therapy has consistently been identified as a particularly problematic area for aseptic technique.
Perhaps of most concern is that despite more than a century of infection prevention that has paralleled many advances in IV therapy, there is a dearth of evidence supporting which specific aseptic technique provides the most effective prevention against the risk of infection. For example, in a partial update to Clinical Guideline 2 (replaced by CG139), the National Institute for Health and Care Excellence (NICE) was unable to identify any clinical or cost-effectiveness evidence to recommend the best technique when handling vascular access devices to reduce infection-related bacteremia, phlebitis, compliance, MRSA or C. diff reduction and mortality (NICE 2012). Subsequently, NICE and other stakeholders have recognized ANTT ® as providing the prerequisite platform required for guideline development and research enquiry in aseptic technique (NICE 2012 [updated 2017]).
Right Aseptic Technique: ANTT ®
To avoid any ambiguity regarding aseptic technique as discussed above, all references to aseptic technique throughout this book are articulated using the practice terms and principles explicitly defined in the ANTT ® Clinical Practice Framework.
ANTT ® is the most commonly used aseptic technique framework in healthcare today and is rapidly evolving as a global standard. As a result, it is increasingly becoming the recognized standardized practice (NICE 2012), providing meaningful, accurate practice language. The development of ANTT ® has been closely associated with IV practice as the insertion and use of IVs is the most commonly performed invasive procedure in healthcare. There are two types of ANTT ® : Surgical-ANTT and Standard-ANTT (for context, Surgical-ANTT reflects the so-called sterile technique, and Standard-ANTT is a rationalized approach to better describing a multitude of ambiguous practice terms).
The ANTT ® Clinical Practice Framework Explained
To address the historically confused and interchangeable use of terminology for aseptic technique, the ANTT ® Clinical Practice Framework utilizes only achievable practice terms and accurately defines them as follows:
Aseptic-Free from pathogenic organisms (in sufficient numbers to cause infection)
Sterile-Free from (all) microorganisms
Clean-Free from visible marks and stains
Non-touch-The action of not touching critical or important parts of equipment and/or vulnerable or compromised parts of a patient Often used interchangeably with the term 'aseptic' is the term 'sterile', a well-defined word that specifically means 'an absence of all microorganisms'. Unfortunately, sterile is simply not achievable in typical healthcare settings due to the multitude of microorganisms in the air environment. The term 'aseptic' refers to an absence of pathogenic microorganisms in sufficient quantity to cause infection and is achievable in the typical healthcare setting. These terms should not be used interchangeably as they are not synonymous. The term 'clean' is defined as the absence of visible marks and stains and, for the obvious reason that microorganisms are not visible to the naked eye, offers no measurable or safe practice definition for invasive procedures. Non-touch technique (NTT) is not a free-standing aseptic technique as such but rather represents a critical component of any safe aseptic practice.
The practice terms in the ANTT ® Practice Framework provide an interrelated set of definitions that are technically accurate, achievable and applicable to any invasive clinical procedure. Using these definitions has aided in the development of standardized competency-based education and training for both novices and experienced clinicians. These accurate descriptors of practice allow risk assessment to be simplified and focus on rational practice-based factors rather than arbitrary and subjective variables, so that clinical practice is more consistently applied.
ANTT ® : Key-Part and Key-Site Protection
The ANTT ® Practice Framework established that the aim of infection prevention during invasive procedures or the maintenance of indwelling medical devices by definition is and can be no more and no less than the procedure being 'aseptic'. In ANTT ® , maintaining an aseptic procedure is achieved by the fundamental concept and prac-tical application of Key-Part and Key-Site Protection. The 'Key-Parts' of procedural equipment and the patients' vulnerable points of access are portals of entry for bacteria. 'Key-Parts' and 'Key-Sites' are maintained in an aseptic state at all times during invasive procedures. Key-Part and Key-Site Protection is achieved by a practical process that involves prerequisite basic infective precautions such as hand cleaning and personal protective equipment (PPE), the use of sterilized equipment and medical supplies and an appropriate combination of aseptic fields and a non-touch handling technique.
Key-Sites-Areas of skin penetration that provide a direct route for the transmission of pathogens into the patient and present a significant infection risk. Key-Sites include surgical wounds, skin breakdowns or exit sites from CVAD/PVC placement. Key-Parts-Critical parts of medical devices/ items of procedure equipment that, if contaminated, provide a route for the transfer of harmful microorganisms directly onto or into a patient. In IV therapy, these include any part of the equipment that comes into direct or indirect contact with the liquid infusion.
Aseptic Fields
The type of aseptic fields utilized in an aseptic procedure is dependent upon the type of procedure and aseptic technique utilized. To this end, ANTT ® defines the utilization of the three different types of aseptic fields in common practice: Critical Aseptic Field-Usually a sterilized drape, used when a single main aseptic field is required to house and protect all procedure equipment that is typically removed from individual packaging onto the field. A main Critical Aseptic Field is used to help ensure equipment is maintained in an aseptic state during procedures by providing essential and primary protection from the procedure environment. Critical Aseptic Fields require what can be termed 'critical management' or the prohibition of anything non-sterilized entering the field at any time during the procedure. Subsequently, sterilized gloves are required to maintain asepsis while manipulating equipment into and out of the field. Essentially, all equipment is 'handled' as Key-Parts ( Fig. 11.1). General Aseptic Field-Typically a decontaminated and disinfected surface (examples include plastic procedure trays or dressing trollies) or a single-use disposable product (examples include pulp paper or plastic trays). The main aseptic field that promotes asepsis during procedures by providing basic protection from the procedure environment. General Aseptic Fields are used when the procedure Key-Parts are easily and primarily protected using a type of Critical Field termed Micro Critical Aseptic Field (see below). General Aseptic Fields require 'general' rather than 'critical' management and are subsequently managed with nonsterilized gloves and non-touch technique ( Fig. 11.2).
Micro Critical Aseptic Field (MCAF)-
Examples include sterilized caps, covers and the inside of recently opened sterilized equipment packaging. Micro Critical Aseptic Fields are used to protect Key-Parts individually, e.g. a syringe cap or needle cover. They allow safe protection during less complex procedures while also affording a high level of cost-effective ANTT ® (Fig. 11.2).
Right Aseptic Technique: ANTT ® Applied to IV Therapy
There are too many procedures and procedure variables to provide an exhaustive A-Z reference of ANTT ® applied to IV therapy. Instead, the principles and practice terminology of the ANTT ® Clinical Practice Framework have been outlined above, and broad examples of how the framework is applied to practice are outlined for four of the most common IV procedures below.
Overview
The insertion of CVADs is a complex procedure, with an expert level of knowledge and competency-based skills required for safe and effective practice.
ANTT ® Risk Assessment for CVAD Insertion
The process of ensuring asepsis (the absence of pathogenic organisms) during CVAD insertion is technically complex and involves having to protect numerous Key-Parts and one particularly large Key-Part, the CVAD itself. The procedure is highly invasive and may last an hour or more. These risk factors alone make it technically challenging to ensure asepsis throughout the procedure, and subsequently a Surgical-ANTT approach is required.
Basic Precautions for CVAD Insertion
Surgical-ANTT requires the practitioner to adopt a high level of precaution, such as a surgical hand scrub rather than a standard hand clean, and personal protective equipment (PPE) that includes the use of sterilized gloves, gowns, head wear and masks (Loveday et al. 2014). Risk of airborne contamination at the bedside is reduced by ensuring the air environment has time to settle after any dust producing activities such as bed making or room cleaning prior to CVAD insertion.
Decontamination and Disinfection for CVAD Insertion
Most notably, Surgical-ANTT for CVAD insertion is typically performed using presterilized equipment and supplies and effective skin disinfection. Skin disinfection starts with visibly clean skin prior to applying the antiseptic solution. Unless contraindicated, 1 the current evidence base supports the use of ≥0.5% chlorhexidine in 70% IPA (Loveday et al. 2014;Gorski et al. 2016) applied with a suitable single-use applicator and a systematic technique that follows manufacturer's recommendations. The disinfecting solution should be allowed to dry before insertion of a CVAD.
Aseptic Fields in CVAD Insertion
Using Surgical-ANTT for CVAD insertions requires a large enough working area covered by a sterilized drape(s) to safely contain and protect all procedure equipment as aseptic. Large equipment used inter-procedure, such as ultrasound, is also covered in sterilized covers that enable aseptic handling by the operator.
Non-touch Technique for CVAD Insertion
Critical Aseptic Fields used in Surgical-ANTT CVAD insertion are managed critically, with sterilized gloves used to maintain aseptic continuity when handling all equipment and supplies. Due to the complexity of the procedure, a completely non-touch technique is not and cannot be mandated. However, the effective practitioner will be mindful that sterilized gloves and drapes are not infallible and can be inadvertently contaminated.
With this in mind, the principle remains that the safest way to not inadvertently contaminate a Key-Part is to not touch it directly where practically possible. To this end, many CVAD inserters pay particular reverence to the major Key-Part, the CVAD that will be inserted deep into the patient and lay indwelling, and only handle this part indirectly via forceps or sterilized packaging. (Gorski et al. 2011), for early detection of any complications.
ANTT ® Risk Assessment for PVC Insertion
It should be noted that type of ANTT ® for PVC insertion is particularly dependent upon the vessel health and vein accessibility of individual patients. As a result, the competency and experience of the practitioner are particularly relevant in the ANTT ® Risk Assessment and may often direct the practitioner to the Surgical-ANTT technique.
However, due to the technical ease of achieving asepsis for a relatively few number of Key-Parts, ANTT ® Risk assessment typically determines that PVC insertion can be performed safely by a competent practitioner using Standard-ANTT. To this end, non-sterile gloves are indicated if the access site is not touched following skin antisepsis (O'Grady et al. 2011).
Basic Precautions for PVC Insertion
PVC insertion demands effective hand hygiene and the use of appropriate PPE to help protect the healthcare worker from exposure to blood-borne pathogens. Recommended PPE varies internationally. The most common PPE guidelines involve non-sterile or sterile glove usage depending upon the type of ANTT ® utilized. In addition, disposable aprons are recommended in Epic3, but not universally. Loveday et al., through systematic review, identify a developing evidence base identifying a steady build-up of microorganisms on staff uniforms with some association to healthcare-associated infections (HAI) (Loveday et al. 2014). All invasive procedures require appraisal of the aseptic technique integrity throughout the procedure. This principle is particularly relevant in PVC insertion as it can often be necessary to re-palpate the injection site. If performing Standard-ANTT, vein re-palpation requires recleaning of the puncture site; if vein detection continues to be problematic and it is not practical to reclean the skin after re-palpation, sterilized gloves should be introduced. If performing Surgical-ANTT and there is a need to insert the PVC immediately after re-palpation without recleaning the skin, the practitioner must consider the integrity of their sterilized gloves prior to proceeding and replace them if compromised.
Decontamination and Protection for PVC Insertion
If using a reusable procedure tray, the tray should be decontaminated and disinfected according to local policy pre-and post-procedure. Prior to insertion, the skin, or Key-Site, should be cleaned and disinfected using a non-touch technique with a single-use applicator licensed for purpose. Solution should be applied with a frictional cross-hatch method using pressure to reach deep into the skin layers. The skin must be allowed to dry prior to insertion. If re-palpation is necessary, the site should be recleaned and managed as described above.
Aseptic Fields in PVC Insertion
PVC insertion using Standard-ANTT involves Key-Parts being primarily protected individually with Micro Critical Aseptic Fields such as sterilized caps and covers and the inside of sterilized equipment packaging. Secondary aseptic field protection is provided by containing all procedure equipment within a procedure tray serving as a General Aseptic Field. Disposable trays used as General Aseptic Fields for PVC insertion should be large enough to provide a reasonable working area with high sides to contain spillage of liquids or sharps.
A sterilized drape may be considered for placement underneath the patient's arm to promote asepsis around the immediate procedure environment as well as to help contain any leakage of blood.
If Surgical-ANTT is selected for PIV insertion, full barrier precautions are not recommended by evidence-based guidance; however, there is consensus-based guidance suggesting an increased attention to skin disinfection and the use of sterilized gloves might be beneficial in the context of longer indwelling times for PVCs (Gorski et al. 2016). A modest-sized sterilized drape, large enough to cover a small treatment trolley or procedure tray, can adequately serve as a Critical Aseptic Field with sterilized gloves worn for all equipment handling.
Non-touch Technique for PVC Insertion
When using Standard-ANTT, non-touch technique for PVC equipment preparation and insertion is mandatory and technically straightforward. As already noted, asepsis of the Key-Site is compromised when the Key-Site requires re-palpation after skin disinfection. See above for the appropriate counter measures.
Overview
The term intravenous maintenance is used below to describe the preparation and administration of intravenous drugs via infusion or injection. Any type of intravenous device, whether central or peripheral access, provides a portal of entry for microorganisms with a significant risk of infection. This risk is compounded by the frequency in which IV devices are handled and manipulated by many different and busy healthcare workers over long periods of time. Effective aseptic technique on every manipulation is of paramount importance. Although the risk of infection is significant, establishing and maintaining asepsis when connecting intravenous infusions or administering intravenous injections is not technically challenging and can warrant a relatively simple and efficient approach to aseptic technique.
ANTT ® Risk Assessment for IV Maintenance
Standard-ANTT is most likely selected for IV maintenance on the basis that procedures are typically of short duration and will involve minimal numbers of small Key-Parts that are technically easy to protect with non-touch technique and individual Micro Critical Aseptic Fields. The risks of the invasive nature of an IV device can be reduced by the utilization of closed intravenous line systems (Graves et al. 2011).
Basic Precautions for IV Maintenance
In preparation for Standard-ANTT, the practitioner begins by performing important precautions such as hand cleaning and applying personal protective equipment (PPE) according to local policy. Given the volume and frequency of IV maintenance, the challenge for healthcare organizations is ensuring compliance with basic precautions, especially effective hand cleaning every time staff connect or access intravenous systems.
Decontamination and Protection for IV Maintenance
The most common aspects of decontamination for Standard-ANTT for IV maintenance are decontamination and disinfection of procedure trays / trolleys / surfaces and, particularly, disinfection of IV hubs.
Procedure Tray Disinfection
Due to a highly variable evidence base for this area of practice (Leas et al. 2015), there is a wide choice of disinfection wipes available. It is important that local policies be explicit regarding the expectancy for decontamination, disinfection, storage and usage of procedure trays. Given that procedure trays are utilized to promote asepsis and not ensure it, pulp or paper trays can be acceptable; however, such trays should be large enough to serve as a General Aseptic Field and large enough to promote Key-Part protection, enable safe transport and handling of equipment as well as have sides high enough to contain any spillage of liquids or sharps. A clear, single system for safe storage of disposable trays is important as they cannot be decontaminated or disinfected prior to use.
Post-procedure decontamination and disinfection are important in preventing the potential for cross infection. Reusable medical equipment must not be stored without cleaning or while still wet to inhibit microorganism clustering and biofilm development while providing maximal effectiveness of the disinfection solution (NPSA 2009;HIRL 2006).
IV Hub Disinfection
When not in use, IV hubs are likely to come into contact with microorganisms from the patient or immediate environment. There is a strong and developing evidence base describing effective technique for disinfection of the IV hubs (Hibbard 2005;Kaler and Chinn 2007;Moureau and Flynn 2015;Gorski et al. 2016). It is widely accepted that best technique requires alcohol and chlorhexidine disinfection and adequate time spent 'scrubbing' the hub, creating friction (Kaler and Chinn 2007;Smith et al. 2012). Guidance still varies slightly concerning the type of disinfectant and the duration of scrubbing (Table 11.1). To this end, the user should be guided by local policy.
Passive IV Hub Disinfection
There is a growing body of evidence indicating that effective and routine IV hub disinfection is not universally achieved and that failure in hub disinfection is common (Moureau and Flynn 2015). To help address such 'human factors', so-called passive disinfection has been recommended as an alternative for effective and reliable IV hub disinfection (Gorski et al. 2016).
Aseptic Fields in IV Maintenance
Aseptic fields have a pivotal role in Key-Part protection, and typically, using Standard-ANTT for IV maintenance, Key-Parts are afforded primary protection by Micro Critical Aseptic Fields such as sterilized caps. For non-toxic medications, the sterilized inside of a syringe packet is also widely used for protecting syringe tip Key-Parts as they provide excellent Key-Part Protection and have the added advantage of positioning operator hands well away from the Key-Part. Secondary protection is typically provided by a plastic or disposable tray promoting asepsis, thus serving as a General Aseptic Field.
Non-touch Technique for IV Maintenance
IV maintenance typically involves technically simple non-touch technique of a few Key-Parts such as the connecting of a syringe to a needle-free connector. However technically simple, meticulous non-touch technique is paramount and mandatory. If compromised, equipment must be discarded or re-disinfected. Key-Parts should be assembled one at a time and immediately protected when not in use with individual Micro Critical Aseptic Fields such as sterilized caps and covers.
Right Aseptic Technique: ANTT ® Applied to Central Line Dressing Change
Overview
Dressing changes are not necessarily invasive but do involve significant risk of infecting a patient through the transmission of harmful microorganisms (Ullman et al. 2015). Protecting a CVAD entry site requires ongoing Key-Site Protection involving CVAD dressings and site cleaning.
CVAD Dressings
Based on evidence, there is consensus across international guidance for the use of semipermeable clear dressings with integral or separate chlorhexidine and fixation (Loveday et al. 2014;Gorski et al. 2016). It should be noted that ANTT ® considers the skin area beneath an intact aseptic dressing to be the Key-Site, not just the much smaller exit site. Skin antisepsis is carried out during a dressing change using best practice guidance: Use a single-use application of a solution of >0.5% chlorhexidine gluconate in 70% isopropyl alcohol (or povidone-iodine in alcohol for patients with sensitivity to chlorhexidine) to clean the CVAD insertion site during dressing changes, and allow to air dry (Loveday et al. 2014;Gorski et al. 2016).
ANTT ® Risk Assessment for CVAD Dressing Change
The technical difficulty of maintaining asepsis during CVAD dressing change and the subsequent type of ANTT ® utilized is largely dependent on the type and combination of medical supplies used. Complete and subsequently more complex CVAD dressing changes typically require Surgical-ANTT as there are a number of Key-Parts and a large Key-Site to manage aseptically: an old and new dressing, fixation devices that vary in handling complexity and a chlorhexidine disk if not integral to the dressing.
Basic Precautions for CVAD Dressing Change
Effective hand hygiene is followed by application of appropriate PPE such as aprons/protective gowns and gloves depending on the type of ANTT ® used. Where Surgical-ANTT is utilized, non-sterile gloves can be used for dressing removal, and sterilized gloves are utilized thereafter.
Aseptic Fields in CVAD Dressing Change
Because CVAD dressing changes in many countries include sterile changes with CHG protective sponge and/or fixation devices, they require Surgical-ANTT, and thus application of the main Critical Aseptic Field with all the Key-Parts protected upon it would apply (Gorski et al. 2016).
Non-touch Technique for CVAD Dressing Change
When using Standard-ANTT and Micro Critical Aseptic Fields, the practitioner must be conscious throughout that maintaining asepsis depends on effective non-touch handling of Key-Parts and the Key-Site at all times. Non-touch handling is promoted when skin cleaning is performed by using a purpose built and licensed applicator designed to keep the healthcare worker's fingers well away from the Key-Site.
Perhaps often forgotten, dressings should be removed with as much handling care as when applied. Carefully rolling the dressing up from the edges of the dressing is one such method of effective, controlled removal. As well as helping to prevent Key-Site contamination through non-touch technique, such care helps minimize the risks of adhesive-related skin injury. Application of a skin barrier solution may help reduce the risk of skin damage (Gorski et al. 2016).
Decontamination for CVAD Dressing Change
Skin antisepsis following existing dressing removal should be performed using evidence-based recommendations. Preferred disinfection, if not contraindicated, is an alcoholic solution with >0.5% CHG (Gorski et al. 2016) in a single-use applicator (Loveday et al. 2014). The skin should be cleaned as per manufacturer's recommendations, such as a firm cross-hatch technique (Moureau 2003), with a minimum cleaning time of 30 s (Gorski et al. 2016). For the cleaning solution to provide its most effective bacterial kill effect, it must be allowed to dry fully before a dressing is applied. Efficacy has not been sufficiently demonstrated for the use of topical antimicrobial ointments (Loveday et al. 2014); however, the use of chlorhexidine-impregnated dressings or impregnated sponges is recommended (NICE 2012, 2015; Gorski et al. 2016).
ANTT ® Clinical Governance: Competency, Compliance and Surveillance
Achieving and maintaining the effective delivery of ANTT ® for every patient during every invasive interaction across large healthcare organizations naturally require a robust effective clinical governance framework that reflects the critical importance of effective ANTT ® to patient outcome. Effective governance should include: • Clear responsibilities and accountabilities • A robust implementation program • Clear expectations communicated to staff through policy and guidelines • Standardized education and training • Competency assessment • Infection surveillance of relevant indicators • Compliance audit
Competency
The Infusion Nurses Society describes competency for infusion therapy as something that '… goes beyond psychomotor skills and includes application of knowledge, critical thinking, and decision-making abilities' (Gorski et al. 2016). Similarly, ANTT ® competency assessment includes assessment of theory and practice. The standard competency assessment tool for Surgical-ANTT and Standard-ANTT requires demonstration of effective Key-Part and Key-Site Protection but also the articulation of ANTT ® terms and principles. The inclusion of theory in a practical assessment helps organizations embed a common practice language for this important competency that has the additional benefit of being used internationally-helping generate more meaningful discussion about this critical competency (ANTT ® Competency Assessment Tools are freely available from www.antt.org/ competency assessment).
Implementation
ANTT ® is typically implemented across healthcare organizations as a quality assurance-based audit cycle (Fig. 11.3) involving a suite of targeted resources and implementation tools. The standardization of clinical practice is further aided by a series of visual, risk-assessed and sequenced clinical practice guidelines (Fig. 11.4). These simple and adaptable guidelines communicate a wealth of practical information in a bite-sized user-friendly package. Translated into multiple languages, the ANTT ® Clinical Practice Guidelines are the visual embodiment of the Clinical Practice Framework.
Effective implementation of any important clinical competency is dependent upon various organizational factors (Melle et al. 2017).
Evaluation of ANTT ® implementation across seven large hospitals identified executive board-level organizational support as a key indicator for effective implementation (Rowley et al. 2010). A recently published study looked at the essential elements of the ANTT-Approach 36 months after implementation and demonstrated significantly improved practice with best practice elements of aseptic technique (Clare and Rowley 2017).
Compliance
Just like any other critical clinical competency, once ANTT ® has been established across an organization, it is necessary to maintain practice standards. Again, like any clinical competency, this is not an easy task and is best not underestimated. For perspective, despite huge investments of time and money in hand hygiene practice, compliance internationally is commonly reported at approximately ≤50% (Fuller et al. 2011;Kingston et al. 2015;Brühwasser et al. 2016). Soberingly, effective aseptic technique requires staff compliance with an additional four or five essential key practice components.
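The scale of that challenge can be illustrated with a toy calculation. Assuming, purely for illustration, that each key practice component is met independently with the same probability, overall procedural compliance falls off quickly as components are added; the figures below are hypothetical and the independence assumption is ours, not the chapter's.

# Toy illustration: overall compliance if each of n components is met independently with probability p.
def full_compliance(p, n):
    return p ** n

print(full_compliance(0.50, 1))  # 0.5     - a single behaviour at ~50% compliance
print(full_compliance(0.50, 5))  # ~0.031  - five such behaviours required together
print(full_compliance(0.90, 5))  # ~0.59   - even at 90% per component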
ANTT ® addresses this compliance challenge with a multifactorial approach including: • Periodic assessment of ANTT ® competency • Simple but explicit picture-based ANTT ® Guidelines visible in relevant practice areas to help make expectations explicit and measurable • Surveillance-see below
Surveillance of Practice
Periodic 'snapshot' audit of various locations within a hospital can be an effective way of monitoring practice standards and informing the regularity of ANTT ® competency reassessment.
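As a rough sketch of how snapshot audit observations might be summarized, the short script below computes a compliance rate per clinical area. It is illustrative only: the record structure, area names and values are hypothetical and do not come from the ANTT audit tools.

# Illustrative only: hypothetical snapshot audit records, not ANTT audit tool output.
from collections import defaultdict

audit_records = [
    {"area": "Ward A", "compliant": True},
    {"area": "Ward A", "compliant": False},
    {"area": "Ward B", "compliant": True},
    {"area": "Ward B", "compliant": True},
    {"area": "ICU", "compliant": False},
    {"area": "ICU", "compliant": True},
]

counts = defaultdict(lambda: [0, 0])  # area -> [compliant observations, total observations]
for record in audit_records:
    counts[record["area"]][1] += 1
    if record["compliant"]:
        counts[record["area"]][0] += 1

for area, (ok, total) in sorted(counts.items()):
    print(f"{area}: {ok}/{total} observations compliant ({100 * ok / total:.0f}%)")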
Surveillance of Outcomes
It is important that healthcare organizations monitor for signs of poor standards of ANTT ® by surveillance of relevant infective organisms. Point prevalence surveys (PPS) clearly demonstrate successes in reducing targeted organisms (Hopkins et al. 2012); however, they also illustrate the risks of only focusing on several high-profile harmful microorganisms and losing sight of the larger issue of the mechanisms and processes of infection prevention that help drive reductions in HAI across the board. In the most recently reported English NHS PPS from 2012, it was clear that the high-profile national targeting of MRSA bacteremia had reduced incidence considerably; however, it was also clear that other resistant organisms increased in prevalence (Hopkins et al. 2012).
Effective surveillance has highlighted the risks of not considering PVCs and CVADs as equally important in the context of healthcare-associated infections (DeVries and Valentine 2016), and the successful monitoring of the application of targeted resources is yielding improved outcomes (DeVries et al. 2016) along with a better understanding of adherence to best practice guidelines (Yagnik et al. 2017).
Developing a Meaningful Evidence Base for Aseptic Technique
As previously discussed, the pre-ANTT ® evidence base for aseptic technique was weak and lacking in robust studies to define and describe practice. Since the development of ANTT ® , a more robust, standardized and generalizable evidence base is beginning to develop as more countries and healthcare organizations implement a single practice standard for aseptic practice.
Research generation provides a better and more complete evidence base for aseptic technique, and ANTT ® is providing the framework for much of this development (Beaumont et al. 2016;Flynn et al. 2015). The Association for Safe Aseptic Practice (ASAP) is a not-for-profit, non-governmental organization (NGO) based in the United Kingdom (UK) that promotes and administrates the ANTT ® project. Using an epidemiological approach, the association produces standardized surveillance tools to assist in data collection, analysis and dissemination of ANTT ® implementation and maintenance. A standard research protocol template aids in the design and implementation of local research studies helping to further develop a global evidence base for aseptic technique. It should be noted that ANTT ® is trademarked to prevent variable alteration of the framework and standard approach. Utilization of ANTT ® is very much encouraged and supported (see www.antt.org)!
Case Study
Nurse Smith was responsible for infection prevention at the hospital. As part of her objectives for the year, she identified aseptic practices with intravenous devices as a focus.
The first step in her initiative was to observe and document practices within the hospital (a pre-audit phase). After pinpointing gaps, confusion and compliance issues with aseptic technique, Nurse Smith contacted The-ASAP for advice and resources. Working collaboratively with The-ASAP/ANTT ® Team, she developed simple one-page educational sheets that emphasized the key points of the Aseptic Non Touch Technique (ANTT ® ). Nurse Smith, using a mixture of standardized ASAP and locally developed ANTT ® resources, created a local multimodal form of education to roll out the key points.
• The education summary pages were used in department/unit huddles with a 5-min session of training with one page of the education per week. • Posters were set up in locations around each unit, and observers were recruited to watch for opportunities to stimulate question and answer quick sessions with the nurses.
• ANTT ® (visual) Clinical Practice Guidelines were deployed in clinical areas as education and training aids. • Throughout the 3-month period, educational lectures from visiting clinicians were given to all the medical and nursing staff emphasizing the use of ANTT ® . • The ANTT-Approach was written into local policy and procedure documents, explicitly creating the expectation of standard aseptic practice. • Nurse Smith used standardized assessment forms to competency assess clinical staff in ANTT ® .
Follow-up observations and documentation revealed an 80% improvement in specific key points and practices of ANTT ® and aseptic management of intravenous devices.
A long-term plan was developed to conduct periodic observation, using a checklist of key points and practices consistent with hospital policy, and to integrate annual aseptic education into the computer-based training already used for staff education. | 2019-07-15T22:29:34.938Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "73be5ec902d954d7a387b837f538031095a75a95",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-03149-7_11.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "b12251584f7e3c645944c8956cabcea9a3e16868",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119704642 | pes2o/s2orc | v3-fos-license | Adding many random reals may add many Cohen reals
Let $\kappa$ be an infinite cardinal. Then, forcing with $\mathbb{R}(\kappa) \times \mathbb{R}(\kappa)$ adds a generic filter for $\mathbb{C}(\kappa)$, where $\mathbb{R}(\kappa)$ and $\mathbb{C}(\kappa)$ are the forcing notions for adding $\kappa$-many random reals and adding $\kappa$-many Cohen reals respectively.
Introduction
For a cardinal κ > 0 let R(κ) be the forcing notion for adding κ-many random reals and let C(κ) be the Cohen forcing for adding κ-many Cohen reals 1 .
It is a well-known fact that forcing with $\mathbb{R}(1) \times \mathbb{R}(1)$ adds a Cohen real; in fact, if $r_1, r_2$ are the added random reals, then $c = r_1 + r_2$ is Cohen [1]. This in turn implies that all reals $c + a$, where $a \in \mathbb{R}^V$, are Cohen, and so we have continuum many Cohen reals over $V$. However, the sequence $\langle c + a : a \in \mathbb{R}^V \rangle$ fails to be $\mathbb{C}((2^{\aleph_0})^V)$-generic over $V$. In fact, there is no sequence $\langle c_i : i < \omega_1 \rangle \in V[r_1, r_2]$ of Cohen reals which is $\mathbb{C}(\omega_1)$-generic over $V$.
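The definition of the addition here is not spelled out in the visible text, but the addition used later for $2^{\kappa \times \omega}$ (Section 3) appears to be coordinatewise addition modulo 2, and the short sketch below assumes that reading. It merely illustrates the operation on a finite initial segment, using ordinary pseudo-random bits as stand-ins for the random reals; it of course says nothing about genericity.

# Illustrative sketch: coordinatewise addition mod 2 (XOR) of two binary sequences.
import random

n = 16  # finite initial segment only
r1 = [random.randint(0, 1) for _ in range(n)]
r2 = [random.randint(0, 1) for _ in range(n)]
c = [(a + b) % 2 for a, b in zip(r1, r2)]  # c = r1 + r2 in the group (2^omega, +)

print("r1:", r1)
print("r2:", r2)
print("c :", c)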
In this paper, we extend the above mentioned result by showing that if we force with $\mathbb{R}(\kappa) \times \mathbb{R}(\kappa)$, then in the resulting extension we can find a sequence $\langle c_i : i < \kappa \rangle$ of reals which is $\mathbb{C}(\kappa)$-generic over the ground model:
Theorem 1.1. Let $\kappa$ be an infinite cardinal. Then, forcing with $\mathbb{R}(\kappa) \times \mathbb{R}(\kappa)$ adds a generic filter for $\mathbb{C}(\kappa)$.
In Section 2, we briefly review the forcing notions C(κ) and R(κ). Then in Section 3, we state some results from analysis which are needed for the proof of above theorem and in Section 4, we give a proof of Theorem 1.1.
The author's research has been supported by a grant from IPM (No. 95030417). He also thanks Moti Gitik for his useful comments and suggestions. 1 See Section 2 for the definition of the forcing notions R(κ) and C(κ).
Cohen and Random forcings
In this section we briefly review the forcing notions C(κ) and R(κ), and present some of their properties.
Cohen forcing. Let $I$ be a non-empty set. The forcing notion $\mathbb{C}(I)$, the Cohen forcing for adding $|I|$-many Cohen reals, is defined as the set of all finite partial functions from $I \times \omega$ to $2$, which is ordered by reverse inclusion.
The reals $c_i$ are called Cohen reals. By $\kappa$-Cohen reals over $V$, we mean a sequence $\langle c_i : i < \kappa \rangle$ which is $\mathbb{C}(\kappa)$-generic over $V$.
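Since a condition in $\mathbb{C}(I)$ is just a finite partial function from $I \times \omega$ to $2$, it can be modelled directly as a small dictionary. The sketch below is only an illustration of the definition (placeholder strings stand in for elements of $I$); it is not taken from the paper.

# Cohen conditions as finite partial functions from I x omega to 2,
# represented as dicts keyed by pairs (i, n) with values in {0, 1}.
def extends(q, p):
    """q <= p in C(I): q extends p as a function (reverse inclusion)."""
    return all(q.get(key) == value for key, value in p.items())

def compatible(p, q):
    """p and q are compatible iff they agree wherever both are defined."""
    return all(q[key] == value for key, value in p.items() if key in q)

p = {("i0", 0): 1, ("i0", 1): 0}
q = {("i0", 0): 1, ("i0", 1): 0, ("i1", 0): 1}
r = {("i0", 0): 0}

print(extends(q, p))     # True: q is a stronger condition than p
print(compatible(p, r))  # False: they disagree at ("i0", 0)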
Random forcing.
In this subsection we briefly review random forcing. Suppose $I$ is a non-empty set and consider the product measure space $2^{I \times \omega}$ with the standard product measure $\mu_I$ on it. Let $B(I)$ denote the class of Borel subsets of $2^{I \times \omega}$. Recall that $B(I)$ is the $\sigma$-algebra generated by the basic open sets $[s_p] = \{ f \in 2^{I \times \omega} : p \subseteq f \}$, where $p$ is a finite partial function from $I \times \omega$ to $2$. For Borel sets $S, T \in B(I)$ set $S \sim T$ if and only if $\mu_I(S \bigtriangleup T) = 0$, where $S \bigtriangleup T$ denotes the symmetric difference of $S$ and $T$. The relation $\sim$ is easily seen to be an equivalence relation on $B(I)$. Then $\mathbb{R}(I)$, the forcing for adding $|I|$-many random reals, is defined as the set of $\sim$-equivalence classes of Borel sets of positive measure, ordered by inclusion modulo null sets. The following fact is standard.
Using the above lemma, we can easily show that $\mathbb{R}(I)$ is in fact a complete Boolean algebra. Let $\underset{\sim}{F}$ be an $\mathbb{R}(I)$-name for a function from $I \times \omega$ to $2$ such that for each $i \in I$, $n \in \omega$, the Boolean value of "$\underset{\sim}{F}(i, n) = 1$" is the equivalence class of $\{ f \in 2^{I \times \omega} : f(i, n) = 1 \}$. This defines $\mathbb{R}(I)$-names $\underset{\sim}{r}_i \in 2^{\omega}$, $i \in I$, such that $\underset{\sim}{r}_i(n) = \underset{\sim}{F}(i, n)$ for all $n \in \omega$. The reals $r_i$ are called random reals. By $\kappa$-random reals over $V$, we mean a sequence $\langle r_i : i < \kappa \rangle$ which is $\mathbb{R}(\kappa)$-generic over $V$.
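For intuition about $\mu_I$: a basic open set $[s_p]$ constrains only the finitely many coordinates in $\operatorname{dom}(p)$, so its measure is $2^{-|\operatorname{dom}(p)|}$ (a standard fact about the product measure, not stated explicitly above). The following sketch checks this numerically; it is illustrative only, with placeholder strings standing in for elements of $I$.

# Monte Carlo check (illustrative): a basic open set fixed on k coordinates has
# measure 2**(-k) under the uniform product measure.
import random

p = {("i0", 0): 1, ("i0", 3): 0, ("i1", 2): 1}  # a finite condition, k = 3
trials = 100_000
hits = 0
for _ in range(trials):
    # membership in [s_p] depends only on the coordinates mentioned by p
    sample = {key: random.randint(0, 1) for key in p}
    if sample == p:
        hits += 1

print("estimated measure:", hits / trials)  # close to 2**(-3) = 0.125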
Some results from analysis
A famous theorem of Steinhaus [2] from 1920 asserts that if $A, B \subseteq \mathbb{R}^n$ are measurable sets with positive Lebesgue measure, then $A + B$ has an interior point; see also [3]. Here, we need a version of the Steinhaus theorem for the space $2^{\kappa \times \omega}$.
For $f, g \in 2^{\kappa \times \omega}$, let $f + g$ denote their coordinatewise sum modulo $2$; note that this addition is continuous. The lemma we need states that if $S \subseteq 2^{\kappa \times \omega}$ is Borel and non-null, then $S - S$ contains an open neighbourhood of the zero function. Proof. We follow [3]. Let $\mu = \mu_{\kappa}$ be the product measure on $2^{\kappa \times \omega}$. As $S$ is Borel and non-null, there is a compact subset of $S$ of positive $\mu$-measure, so we may suppose that $S$ itself is compact. Let $U \supseteq S$ be an open set with $\mu(U) < 2 \cdot \mu(S)$. By continuity of addition, we can find an open set $V$ containing the zero function $0$ such that $V + S \subseteq U$.
We show that $V \subseteq S - S$. Thus suppose $x \in V$. Then $(x + S) \cap S \neq \emptyset$, as otherwise we would have $(x + S) \cup S \subseteq U$ with the union disjoint, and hence $\mu(U) \geq 2 \cdot \mu(S)$, which is in contradiction with our choice of $U$. Thus let $y_1, y_2 \in S$ be such that $x + y_1 = y_2$. Then $x = y_2 - y_1 \in S - S$ as required.
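For readability, the key step of the argument can be displayed as follows; this is a reconstruction of the inequality chain implicit in the proof, using the translation invariance of $\mu$ (so that $\mu(x+S) = \mu(S)$):

\[
(x+S)\cap S=\emptyset \;\Longrightarrow\; \mu(U)\;\ge\;\mu\bigl((x+S)\cup S\bigr)=\mu(x+S)+\mu(S)=2\,\mu(S),
\]
contradicting $\mu(U) < 2\,\mu(S)$; hence $(x+S)\cap S \neq \emptyset$ for every $x \in V$, that is, $V \subseteq S - S$.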
Similarly, we have the following: if $S, T \subseteq 2^{\kappa \times \omega}$ are Borel and non-null, then $S + T$ contains a basic open set. Suppose $S, T \subseteq 2^{\kappa \times \omega}$ are Borel and non-null. It follows from Lemma 3.2 that for some $p \in \mathbb{C}(\kappa)$, $[s_p] \subseteq S + T$. Thus, by continuity of the addition, we can find $x \in S$ and $y \in T$ such that: • $(x + y) \restriction \operatorname{dom}(p) = p$.
In this section, we complete the proof of Theorem 1.1. Thus force with $\mathbb{R}(\kappa) \times \mathbb{R}(\kappa)$ and let $G \times H$ be generic over $V$. Let $\langle r_\alpha : \alpha < \kappa \rangle$, $\langle s_\alpha : \alpha < \kappa \rangle$ be the sequences of random reals added by $G \times H$.
For $\alpha < \kappa$ set $c_\alpha = r_\alpha + s_\alpha$. The following completes the proof:
Lemma 4.1. The sequence $\langle c_\alpha : \alpha < \kappa \rangle$ is a sequence of $\kappa$-Cohen reals over $V$.
(2) $(x + y) \restriction \operatorname{dom}(p) = p$. Using continuity of the addition and further application of Lemma 3.2 and the remarks after it, we can find $x'$, $y'$ such that: (4) $x' \in S \cap [s_{x \restriction \operatorname{dom}(p)}]$ and $y' \in T \cap [s_{y \restriction \operatorname{dom}(p)}]$. | 2017-01-16T03:39:38.000Z | 2017-01-16T00:00:00.000 | {
"year": 2017,
"sha1": "6c58975f1d616c60fa374a98049685a420631c4e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6c58975f1d616c60fa374a98049685a420631c4e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
265493826 | pes2o/s2orc | v3-fos-license | The need of listening comprehension in the teacher training programme in
Listening has long been the neglected skill in second language acquisition, research, teaching and assessment. However, in recent years there has been an increased focus on L2 listening ability because of its perceived importance in language learning and acquisition. The present study has explored the need of listening comprehension courses in the teacher training programme in the faculty of education in Hodeidah university. After one semester training in listening, the researcher used an IELTS test as an authentic speech to test the level of second year students in the faculty of education. The statistical analysis of the results provided some evidence in support of the need of listening comprehension in the training courses of teachers in the English department of the faculty of education in Hodeidah university.
considerable influence on course design and textbook writing. Although listening is very important at the beginner's level, its importance does not diminish as learners progress to more advanced levels of language proficiency. Practising listening at all stages of learning not only develops this skill but also expands and consolidates other elements of language knowledge, such as vocabulary, grammar and intonation. However, while the importance of listening is widely recognized today, there are different views as to how to approach the teaching of listening. While some authors, such as Krashen and Terrell (1983), believe in the value of more exposure to spoken language, during which learners unconsciously develop their listening skills and acquire other elements of the foreign language, other authors, such as Rost (1990, 1994) and Ur (1984), agree that in order for learners to benefit from practicing listening, it is necessary to develop this skill in a direct and systematic way.
However, careful observation of College English teaching practice has found that the teaching of listening skills is still the weak link in the language teaching process. Despite students having mastered the basic elements of English grammar and vocabulary, their listening comprehension is often weak.
In the faculty of education in Hodeidah University, students in the English department take courses in reading, writing and speaking, whereas the listening skill is totally neglected. At the same time, students study some courses in linguistics and literature, and many of them experience considerable difficulties because the lecturers are foreigners. One may conclude that listening is a crucial skill for successful studies at the English Department, but although students are exposed to spoken English through attending lectures and classes, this skill is not developed in a systematic way. This observation was the starting point for the research into the need of listening comprehension in the training programme of the faculty of education in Hodeidah University.
Theoretical basis of listening and comprehension:
According to Howatt and Dakin (1974), listening is the ability to identify and understand what others are saying.This process involves understanding a speaker's accent or pronunciation, the speaker's grammar and vocabulary, and comprehension of meaning.An able listener is capable of doing these four things simultaneously.
Thomlison's (1984) definition of listening includes "active listening", which goes beyond comprehension as understaning the message content, to comprehension as an act of empathetic understanding of the speaker.Furthermore, Gordon (1984) argues that empathy is essential to listening and contends that it is more than a polite attempt to identify a speaker's perspectives.Rather more importantly, empathetic understanding expands to "egocentric The need of listening comprehension in the teacher training programme in the faculty of education, Hodeidah University... Dr. Shameem Ahmad Banani
prosocial behavior". Thus, the listener altruistically acknowledges concern for the speaker's welfare and interests. Ronald and Roskelly (1985) define listening as an active process requiring the same skills of prediction, hypothesizing, checking, revising, and generalizing that writing and reading demand; and these authors present specific exercises to make students active listeners who are aware of the "inner voice" one hears when writing.
Language learning depends on listening since it provides the aural input that serves as the basis for language acquisition and enables learners to interact in spoken communication.
Listening is the first language mode that children acquire.It provides the foundation for all aspects of language and cognitive development, and plays a life-long role in the process of communication.
According to Bulletin (1952), listening is the fundamental language skill. It is the medium through which people gain a large portion of their education, their information, their understanding of the world and of human affairs, their ideals, sense of values, and their appreciation. In this day of mass communication, much of it oral, it is of vital importance that students are taught to listen effectively and critically. Furthermore it is recognized by Wipf (1984) that listeners must discriminate between sounds, understand vocabulary and grammatical structures, interpret stress and intonation and retain and interpret this within the immediate as well as the larger socio-cultural context of the utterance. Rost (2002) defines listening in its broadest sense, as a process of receiving what the speaker actually says (receptive orientation); constructing and representing meaning (constructive orientation); negotiating meaning with the speaker and responding (collaborative orientation); and creating meaning through involvement, imagination and empathy (transformative orientation). Listening then is a complex, active process of interpretation in which listeners match what they hear with what they already know.
According to second language acquisition theory, language input is the most essential condition of language acquisition. As an input skill, listening plays a crucial role in students' language development. Krashen (1985) argues that people acquire language by understanding the linguistic information they hear. Thus language acquisition is achieved mainly through receiving understandable input. Given the importance of listening in language learning and teaching, it is essential for language teachers to help students become effective listeners. In the communicative approach to language teaching, this means modeling listening strategies and providing listening practice in authentic situations: precisely those that learners are likely to encounter when they use the language outside the classroom. In addition, writers on methodologies for the teaching of listening (Brown, 1991; Ur, 1984; Anderson & Lynch, 1988; Rost, 1990; Brown & Yule, 1983) point out that listening develops through the process of exposing learners to listening texts on which they perform tasks specially designed to promote the development of certain sub-skills. Most authors stress the importance of three main factors in the teaching of listening at all levels: listening materials, listening tasks and the procedure for organizing listening activities. The interplay of these three factors plays a significant role in designing effective listening activities. Therefore, we in Yemen should establish "listening-first" as fundamental in foreign language teaching.
Description of the study:
The Research Question
a. The main concern of the study is: Do our students in the faculty of education need special courses in listening comprehension?
b. The second concern of the study is to introduce a systematic way of teaching listening comprehension courses in the department of English in the faculty of education.
Subjects
This study was conducted with second year students in the faculty of education. The duration of the study was one semester and the total number of the students who participated in the study was 185 students, divided into two groups.
Material
During the semester the researcher used a textbook, "English Pronunciation in Use" by Mark Hancock, as an introduction to listening comprehension. It was chosen because it gradually moves from the pronunciation of individual sounds towards connected speech and intonation of English in a systematic way, and because it provides authentic exposure to English. The researcher used an audio CD player in teaching the course. Extra listening activities taken from "Prepare for IELTS, General Training Module" by Penny Cameron & Vanessa Todd were also practiced in class. These activities included writing numbers, writing spellings, listening to conversations, and listening to radio items. The aim of these activities was to prepare the students for the assessment test.
Instrument:
The assessment of the students' listening skills was carried out using the listening part of IELTS.The test consisted of three sections each of which assessed the ability to comprehend a different type of listening material.The total number of questions was (30).Section one contained (12) items, in this section the students were asked to listen to a short conversation and complete the form.Section two contained (12) items divided into two parts.In the first part the students were asked to listen to a longer conversation and complete the The need of listening comprehension in the teacher training programme in the faculty of education, Hodeidah University... Dr. Shameem Ahmad Banani
calendar. In the second part the students were asked to circle the correct answer. Section three contained (6) items; in this section the students listened to a short conversation and completed a language school enrolment form. The maximum score was (30) in this part of the IELTS test.
Results and Discussion:
Table (1) shows the number of students who attended the assessment test, which was (185), and their score range, which was (9-30). The lowest score in the test was (9) and the highest score was (30). The number of students who scored (10) or less was 7, while a further group of students scored in the range (11-14). The overall scores of the students, as illustrated in Table (2), can be summarized as follows: 4% of the students scored 10 and below, which indicates very poor performance; 11% of the students scored 11-14, 46% scored 15-18, 19% scored 19-22, 13% scored 23-26 and finally 7% scored 27-30. This indicates that most students' scores were below good; therefore, there is a severe need for introducing listening comprehension courses in a systematic way in the teacher training programme in the English department in the faculty of education.
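The reported figures can be checked for internal consistency with a few lines of arithmetic. The sketch below uses only the numbers quoted above (185 test-takers, 7 students at 10 or below, and the six band percentages); it is an illustrative check, not part of the original analysis.

# Consistency check of the reported score distribution (illustrative only).
total_students = 185
low_scorers = 7  # students who scored 10 or less
band_percentages = {"<=10": 4, "11-14": 11, "15-18": 46, "19-22": 19, "23-26": 13, "27-30": 7}

print(sum(band_percentages.values()))             # 100: the bands cover all students
print(round(100 * low_scorers / total_students))  # 4, matching the reported 4% at 10 or below
for band, pct in band_percentages.items():
    print(band, "~", round(total_students * pct / 100), "students")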
Conclusion and Recommendations:
The study has shown that the performance of 15% of the students who attended the assessment test was very poor, whereas 7% obtained an excellent grade. 20% of the students obtained very good or excellent grades, whereas 80% scored below this level. 39% of the students obtained good, very good or excellent grades, but 61% scored below these grades.
On the basis of the results obtained, we can conclude that there is a severe need for teaching listening comprehension in the faculty of education. Listening practice should therefore be an integral part of the teacher training programme in the faculty of education, as our aim is to improve Yemeni students' ability as English speakers and teachers.
Although this study was conducted with one type of instrument, i.e. a listening passage taken from an IELTS examination chosen for its authentic speech, it is hoped that future studies can use other instruments, such as DVDs, and examine the effects of different types of speech: speech that simulates a radio announcement, a television interview, etc. Furthermore, listening comprehension levels do influence the capacity for improvement in other language skills such as speaking, reading, writing and translating. The evidence from this study suggests sound reasons for emphasizing listening comprehension, which highlights the importance of spending much more time on it. However, improving Yemeni students' ability as English speakers is a demanding process, so much research still needs to be done in this area.
| 2023-11-29T16:05:55.432Z | 2023-01-12T00:00:00.000 | {
"year": 2023,
"sha1": "1eb54067800d6bfdb541307a0fa30bf62b3be7ac",
"oa_license": "CCBYSA",
"oa_url": "https://cbej.uomustansiriyah.edu.iq/index.php/cbej/article/download/8657/7917",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0fe9899841b3c8fb9b25b236dec74563d854d119",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": []
} |
271555924 | pes2o/s2orc | v3-fos-license | Profile of patients in private home care who developed ventilator-associated pneumonia
ABSTRACT Objectives: to analyze the profile and clinical outcomes of patients who developed Ventilator-Associated Pneumonia (VAP) in private home care and to compare the incidence with national data. Methods: this was a retrospective study with data collected from July 2021 to June 2022 from patient records at a private clinic. Patients using intermittent ventilation or without ventilatory support were excluded. Results: the utilization rate of mechanical ventilation was 15.9%. The incidence density of pneumonia in pediatrics was 2.2 cases per 1000 ventilation-days and in adults was 1.7 cases per 1000 ventilation-days, figures lower than those reported by the National Health Surveillance Agency. There were 101 episodes of pneumonia in 73 patients, predominantly male (65.8%), adults (53.4%), and those with neurological diseases (57.5%). The treatment regimen predominantly took place at home (80.2%), and there was one death. Conclusions: patients in home care showed a low incidence and mortality rate from ventilator-associated pneumonia.
INTRODUCTION
Healthcare-Associated Infections (HAIs) are significant concerns within hospital settings, known to prolong hospital stays, worsen clinical outcomes, and increase care costs. Among HAIs, those related to devices such as mechanical ventilation, urinary catheters, and venous catheters receive particular scrutiny due to the severity and complexity of implementing effective prevention measures (1-2).
Ventilator-Associated Pneumonia (VAP) is the leading cause of death from infections acquired within hospitals, exceeding complications associated with bloodstream infections, sepsis, and respiratory infections in non-ventilated patients (3-5). The incidence density of VAP in North American hospitals varies from 0 to 4.4 cases per 1,000 ventilation-days (6-7), while higher rates are observed in European centers. The EU-VAP/CAP study reported an incidence density of 18.3 cases per 1,000 ventilation-days (8). Developing countries have also reported high rates, with 18.5 cases per 1,000 ventilation-days (9). Brazilian data from 2020 show an incidence density of 10.6 cases per 1,000 ventilation-days in intensive care settings (10). In the United States, each VAP case is estimated to result in an additional cost of $40,000 per hospitalization (6,11).
The Home Care sector (HC) has experienced exponential growth over the past few decades in Brazil and globally, with an increasing number of patients and a rise in clinical complexity (12-13). Home Care includes services for patients who require medication administration, enteral nutrition, wound care, rehabilitation therapies, oxygen therapy, and more complex therapies such as parenteral nutrition and Home Mechanical Ventilation (HMV) (14). The increased clinical complexity of home-based patients is linked to the more frequent use of medical devices and ventilatory support, thus increasing the risk of VAP (15).
Data on the incidence density of VAP in home settings are scarce but vital for monitoring service quality and assessing the effectiveness of preventive measures implemented. According to Silva et al. (16), the literature indicates that pneumonia is the primary cause of hospital readmissions among pediatric patients.
HMV is increasingly prevalent, and just as in hospital settings, preventing VAP is crucial for maintaining safe care. The home environment is recognized for offering advantages over hospitals in terms of patient safety, reducing infection-related risks, and decreasing the incidence of VAP (17).
OBJECTIVES
To analyze the profile and clinical outcomes of patients who developed Ventilator-Associated Pneumonia (VAP) in private home care and compare the incidence with national data.
Ethical Aspects
The study was conducted in accordance with national ethical guidelines, under the Certificate of Presentation for Ethical Appraisal (CAAE), and was approved by the Research Ethics Committee of the Padre Bento Hospital Complex in Guarulhos, São Paulo. Patient consent was waived as this observational study only utilized information from the Electronic Health Records (EHR), without the use of biological material and without any alteration to or influence on the routine treatment of the research participants, thereby posing no additional risk or harm to their well-being. All data were handled, analyzed, and presented anonymously for descriptive purposes, adhering to all confidentiality and privacy guidelines and norms.
Design, Period, and Location of the Study
This was a retrospective epidemiological observational study guided by the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) tool. It was conducted through the analysis of secondary data obtained from the EHR from July 2021 to June 2022, across all units of Home Doctor. Home Doctor is a private home care company with 27 nationally distributed units, headquartered in São Paulo.
Population, Inclusion, and Exclusion Criteria
All EHRs of patients who used continuous Home Mechanical Ventilation (HMV) were included. Excluded were those who did not receive home ventilatory support during the study period or who used ventilation intermittently.
Study Protocol
Infection cases were analyzed by the Home Infection Control Service, through the completion of a specific form detailing signs and symptoms generated in the EHR with each new prescription. Through this analysis, infectious cases meeting epidemiological criteria for VAP were identified. For the diagnosis of VAP in patients on HMV, objective criteria previously defined by the institution were utilized, based on those proposed by the Association for Professionals in Infection Control and Epidemiology (18), the Centers for Disease Control and Prevention (19), and ANVISA (10) (Chart 1). Patients diagnosed with COVID-19 were not included in the study.
All patients on mechanical ventilation routinely receive a set of preventive measures adapted from those proposed by the Institute for Healthcare Improvement for hospitals (20). These measures include hand hygiene, elevation of the head of the bed, and oral hygiene with 0.12% chlorhexidine.
The following data were collected from the Electronic Health Record (EHR) and exported to an Excel spreadsheet: utilization rate and duration of Home Mechanical Ventilation (HMV) until the development of Ventilator-Associated Pneumonia (VAP), VAP incidence density, gender, age group, age, diagnosis, treatment (location and antibiotic used), and outcome.
Analysis of Results and Statistics
The utilization rate of continuous Home Mechanical Ventilation (HMV) was calculated by dividing the number of patients on HMV by the total number of patients treated during the period, then multiplying by 100.
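To make the arithmetic concrete, the same calculation can be reproduced from the patient counts reported later in the Results section; the short Python sketch below is only an illustration added here, not part of the original study.

```python
# Utilization rate of continuous home mechanical ventilation (HMV):
# (patients on HMV / total patients treated in the period) * 100
def utilization_rate(hmv_patients, total_patients):
    return hmv_patients / total_patients * 100

# Counts reported in the Results section of this paper.
print(round(utilization_rate(208, 1309), 1))  # 15.9 (overall)
print(round(utilization_rate(102, 1050), 1))  # 9.7  (adults)
print(round(utilization_rate(106, 259), 1))   # 40.9 (pediatric)
```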
This study found that among the 73 patients who developed one or more episodes of Ventilator-Associated Pneumonia (VAP), the majority occurred in male patients (65.8%) and adults (53.4%), with an average age of 57 years. The most prevalent diseases among patients with VAP who required continuous Home Mechanical Ventilation (HMV) were related to neurological pathologies (57.5%) (Table 1).
Chart 1 - Diagnostic Criteria for Ventilator-Associated Pneumonia
Criterion 1
A chest X-ray demonstrating pneumonia or the presence of a new pulmonary infiltrate, opacification, or cavitation AND at least one of the following respiratory changes: fever, new onset or worsening cough; increased or worsening chronic sputum; pleuritic chest pain; altered or recently worsened breath sounds (crackles, rhonchi, wheezes, or bronchophony); increased respiratory rate (≥ 25 per minute); O2 saturation < 94% in room air or a decrease in baseline O2 saturation of more than 3%.
Criterion 2
At least one of the following signs and symptoms must be present: fever with no other known cause; leukopenia or leukocytosis AND at least one of the following: emergence of purulent secretion or a change in the characteristics of the secretion, increased secretion, or increased need for aspiration; worsening gas exchange (deterioration of the PaO2/FiO2 ratio, increased oxygen needs, or increased ventilatory parameters).
Criterion 3
A patient with an underlying disease with two radiographs showing one of the following findings: new, progressive, and persistent infiltrate; opacification; cavitation.
Criterion 4
The patient must have at least two of the following signs and symptoms: fever; cough; appearance or increase in usual secretion; wheezing. Source: ANVISA (10), 2020. The incidence density of Ventilator-Associated Pneumonia (VAP) was calculated using the following formula: the absolute number of VAP cases divided by the number of patient-days on HMV, multiplied by 1000. The incidence density of VAP in the study population was compared with the latest Brazilian intensive care data reported by ANVISA (10) in 2020.
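A minimal sketch of the incidence-density formula just defined is shown below; the ventilation-day denominator is a hypothetical placeholder (patient-days on HMV are not reported in the paper), so only the formula, not the numbers, should be read as the study's.

```python
# VAP incidence density: (number of VAP cases / patient-days on HMV) * 1000.
def incidence_density(vap_cases, ventilation_days):
    return vap_cases / ventilation_days * 1000

# Hypothetical denominator: 10 episodes over 5,882 ventilation-days gives
# about 1.7 cases per 1,000 ventilation-days, the adult rate this study reports.
print(round(incidence_density(10, 5882), 1))  # 1.7
```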
RESULTS
During this period, Home Doctor treated 1,309 patients, of whom 1,050 were adults and 259 were pediatric. Of these, 208 (N: 102 adults and N: 106 pediatric) used Home Mechanical Ventilation (HMV). The overall utilization rate was 15.9%, with 9.7% among adult patients and 40.9% among pediatric patients. There were 101 episodes of Ventilator-Associated Pneumonia (VAP) between July 2021 and June 2022 in 73 patients, 14 of whom had repeated infectious episodes. The median duration of HMV use until the development of VAP was 245 days, with a standard deviation of 465 days.
Figure 1 presents the comparison of the incidence density of VAP in home care with the data from Intensive Care Units (ICU) reported by ANVISA. The analysis reveals that the occurrence of VAP in ICUs is more than double in pediatrics and almost six times higher in adults when compared to the rates recorded at home. Of the 101 episodes of Ventilator-Associated Pneumonia (VAP), treatment predominantly occurred in the home setting; however, 20 patients (19.8%) were treated in a hospital environment, with an average hospital stay of 9.6 days. The most commonly used antibiotics for home treatment were Amoxicillin + Clavulanate (23.5%), Ceftriaxone (21.0%), and Meropenem (12.3%). There was one death (1%) in the home setting during the period analyzed (Table 2).
DISCUSSION
Ventilator-Associated Pneumonia (VAP) in ICUs is a widely studied topic concerning the identification of risk factors, the definition and implementation of preventive measures, monitoring of incidence density, and the development of preventive and corrective actions, as well as the analysis of its financial and clinical impact. It is estimated that approximately 1 million patients per year are admitted to Home Care (HC) in Brazil, covering both public and private services (21), with increasing use of Home Mechanical Ventilation (HMV), making VAP a concern in the home care setting.
Home Care is recognized for its psychosocial benefits compared to hospital care, allowing greater autonomy for the individual and the family. Moreover, it provides a unique care organization where the patient is naturally in physical isolation, without contact with other patients, and the health professional acts exclusively with one patient, which can contribute to the reduction of care-related risks and offers biological plausibility for the reduction of infection indicators compared to hospital settings (22). The utilization rate of continuous HMV in adult patients found in this study was 9.7%, which is lower than the 41.5% rate reported by ANVISA (10). In pediatrics, the utilization rate of HMV is similar to the rates in pediatric ICUs, with 40.9% and 39.1%, respectively.
The low incidences of VAP indicate safe and high-quality care. The reported density of VAP in Brazil in 2020 was 4.5 cases per 1,000 ventilation-days in pediatric ICUs and 10.6 cases per 1,000 ventilation-days in adult ICUs (10). The current study showed 2.2 cases per 1,000 ventilation-days in pediatrics and 1.7 cases per 1,000 ventilation-days in adult patients, both of which are lower than the Brazilian epidemiological data from ICUs. This finding is corroborated by the study of Chenoweth et al. (23), which analyzed a population of 57 patients receiving HMV and demonstrated a rate of 1.3 episodes of VAP per patient and a density of 1.5 cases per 1,000 ventilation-days. In contrast, Silva et al. (16), analyzing 9 pediatric patients from January 2008 to June 2009, found 23 episodes of VAP and an average density of 6.8 cases per 1,000 ventilation-days, higher than hospital data, which may be attributed to their small sample size and the fragility of the implemented preventive measures.
The literature indicates a higher prevalence of hospital-acquired Ventilator-Associated Pneumonia (VAP) in adult male patients (71.4%), with an average age of 60.5 years (24). In pediatrics, records show occurrences of VAP around the age of 4.5 years (16). The demographic profile of VAP in this study aligns with that found in the literature, predominantly affecting male patients with an average age of 57 years and a prevalence of central nervous system pathologies (57.5%).
Regarding VAP treatment, the use of broad-spectrum antibiotic therapy is common in hospital settings (11, 25-26). In this study, the drugs used had a narrower spectrum, which suggests lower antimicrobial resistance in the home care environment. As for clinical outcomes, ICU data show that VAP developed in a hospital environment is severe, with mortality rates reaching up to 50% in various studies (8, 10, 27-28). In contrast, this study reported a minimal home mortality rate from VAP.
Study limitations
The results of this study should be interpreted considering its limitations. Although the data originate from a broad network of home care services, they may not accurately reflect patient conditions in other regions or different home care contexts. Comparing the small sample size of patients with Ventilator-Associated Pneumonia (VAP) to hospital literature might limit the statistical power of the comparisons and the ability to identify significant differences between groups. Additionally, the lack of similar studies hampers more effective comparisons. Future research should consider prospective and multicentric approaches to overcome these limitations and provide a more detailed understanding of the determinants of VAP in patients undergoing home mechanical ventilation.
Contributions to the field
This work contributes by disseminating epidemiological data on patients in Home Mechanical Ventilation (HMV) who developed VAP at home, demonstrating the lesser complexity of this complication compared to hospital settings. Furthermore, it supports care and management teams for patients on mechanical ventilation in developing improvement plans aimed at reducing rates of home VAP as an indicator of good clinical practice.
CONCLUSIONS
Home VAP has a lower incidence density than that reported by ANVISA for ICUs in 2020, demonstrating the capability of Home Care to treat patients with continuous HMV and low infectious complications. The home environment offers physical and social advantages that, in conjunction with preventive measures, are capable of reducing VAP incidence. The highest incidence of home VAP occurred in adult male patients diagnosed with neurological diseases. Patients with home VAP had the opportunity for treatment in their own environment, eliminating the need to travel to the hospital and the routine use of broad-spectrum antimicrobials, while also showing low mortality.
FUNDING
This work was carried out with the support of the Home Doctor Home Care institution.
CONTRIBUTIONS
Cezar FS, Jacober FC and Gonçalves HAG contributed to the conception or design of the study/research. Cantarini KV contributed to the analysis and/or interpretation of data. Gonçalves HAG and Oliveira CF contributed to the final review with critical and intellectual participation in the manuscript.
Figure 1 - Comparison of the Incidence Density of Home Ventilator-Associated Pneumonia and the data reported by ANVISA in 2020 (N: 101)
Table 1 - Sociodemographic and clinical profile of patients with Ventilator-Associated Pneumonia on Home Mechanical Ventilation from July 2021 to June 2022. Home Doctor (N: 73). N - Absolute number; Other - Congenital malformations, endocrine, nutritional and metabolic diseases, renal and gastrointestinal diseases.
| 2024-07-31T15:07:59.532Z | 2024-07-29T00:00:00.000 | {
"year": 2024,
"sha1": "25bd71c693472a8992c0c2b9cb336372b4194fef",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b42bd66b84b1be6ec95a3e8966c524f871d5ab7f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
153339234 | pes2o/s2orc | v3-fos-license | Manhood Trials and the Law of Mortality
The present paper introduces a continuous eight-parameter survival function intended to model mortality in modern male populations.
Introduction
"Mathematically a statistical model is essentially a specified family of admissible probability distributions.[...] A statistical model should give a correct and as far as possible an explanatory description of the actual data structure.It must also be statistically analysable, in practice as well as in theory.[...] Faced with the complex physical reality, with imprecise knowledge as to causes and mechanisms behind the variation in data, and equipped with limited mathematical and numerical resources, one often has to settle with a model that describes data in rough outlines.[...] A good model can thereby be compared with a good caricature: A strongly simplified picture, that bears in mind and often exaggerates some distinctive features, but still clearly resembles reality."(Sundberg, 1981.See Note 1) In accordance with the above definition of a statistical model, several mathematical expressions to describe human mortality at all ages simultaneously have been devised.With degrees of simplification as criteria these models can roughly be divided into two types.The first type states that mortality is decreasing rapidly with age during infancy, thereafter it levels out and starts to increase slowly until the age of senescence where the increase becomes more and more rapid with age.The second type is more complex in that it, apart from the above features, also allows for an added mortality risk associated with the passage into adulthood.The laws of Wittstein (1883), Petrioli (1981) and Siler (1983) are examples of the first type, while the laws of Thiele (1872) and Heligman & Pollard (1980) are examples of the second type.An increased mortality among people in early adulthood is often referred to as an accident hump (Heligman and Pollard, 1980;Hartmann, 1987;Kostaki, 1992) believed to be caused mainly by accidents among males and accidents and maternal mortality among women (Heligman and Pollard, 1980).
In a recent thesis (Hannerz 1999), a formula of the first type was proposed for the log-odds G(x) = log(F/(1-F)), where F(x) denotes the probability that a new-born person will be dead within x years, and c and the a_i's are parameters. The parameter a_0 may be negative, but the rest of the parameters should be positive.
By transforming equation (1) we also obtain analytical expressions for the survival function and the hazard function (the force of mortality). Graphic representations of equations (1)-(4) are given in figure 1, where the formula has been fitted to mortality among Swedish females in 1986. The specific role and influence of each of the parameters in (1) might be most easily understood by studying the curve of G as a function of age. The term a_1x^(-1) makes the increase of the curve extremely rapid in the ages close to zero, thereby allowing for a high mortality among infants. The shape of the curve in the ages of the labour force (16-65 years) is mainly given by the term a_2x^2/2, while a_3e^(cx)/c is taking over as the dominant term in the older ages, and thereby allows for a rapid increase in mortality rates in the age of senescence. Finally, the parameter a_0 determines the level of the curve without influencing its shape. Ample details on the rationale behind the law have been given by Hannerz (2001). In that paper it was shown that equation (1), although it belongs to the first type of models, fitted mortality data of modern Swedish females better than both of the second type models, the laws of Heligman-Pollard and Thiele. It also had the benefit of giving an analytically expressible survival function, which is not the case with Thiele's law, and of being defined not only for integer ages but for all ages, which is not the case with the Heligman-Pollard formula. In Sweden, any added hazards associated with the passage into womanhood thus seem to have been reduced with time, to the point where it no longer would be of any statistical importance. A natural question is if this is the case also for men.
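The published equations (1)-(4) are not reproduced above, so the sketch below is only a plausible reconstruction assembled from the term-by-term description in the text; in particular, the minus sign attached to the a_1/x term is an assumption, chosen so that G(x) tends to minus infinity (and hence F(0) = 0) as age x approaches zero, and no parameter values are implied.

```python
import numpy as np

# Hedged reconstruction of the Hannerz-type log-odds law from the four terms
# described in the text: level a0, infant term in 1/x, working-age term
# a2*x^2/2, and senescence term a3*exp(c*x)/c.  The sign of the a1/x term is
# an assumption (see the lead-in above).
def G(x, a0, a1, a2, a3, c):
    return a0 - a1 / x + a2 * x**2 / 2 + a3 * np.exp(c * x) / c

def F(x, *p):                 # distribution function: logistic transform of G
    return 1.0 / (1.0 + np.exp(-G(x, *p)))

def survival(x, *p):          # l(x), the survival function
    return 1.0 - F(x, *p)

def hazard(x, *p, h=1e-6):    # force of mortality mu(x) = f(x) / (1 - F(x))
    f = (F(x + h, *p) - F(x - h, *p)) / (2 * h)   # numerical density f(x)
    return f / survival(x, *p)
```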
Figure 1:
The functions f, µ, l and G, fitted to mortality in the female population of Sweden 1986. The grey bars in the top left picture constitute the ungraduated frequency function.
Reports about gang wars in the big cities of USA, where young men kill each other, and in Sweden where different motorcycle bands meet in mortal combat, as well as debates on the maturity of young males in regard to driver licences etc., seem to indicate a generally raised mortality risk for young men.
In a book edited by Cohen (1991) the topic of rites of passage into manhood is discussed. One of many global examples is teenage "surfistas" riding atop speeding trains swerving through the hills above Rio de Janeiro: "If they touch the electric lines or fail to duck at the right moment, they risk serious injury or death." This particular discussion ends with the following quotation: "As mythologist Joseph Campbell pointed out, boys everywhere have a need for rituals marking their passage to manhood. If society does not provide them they will inevitably invent their own." (Cohen 1991).
The first aim of the present work was to find out if, among Swedish males, there is still a statistically important added risk associated with the passage into manhood. If this is the case, then a second aim would be to find out if the term accident hump gives an appropriate description of such a risk elevation, and a third aim would be to develop a continuous and analytically expressible survival function that describes mortality reasonably well among men.
Some gender comparisons
With the definition of 'statistical model' in mind, the question to resolve was: Do we need to formulate the mortality function differently for men than what was done for women in equation (1)? To evaluate the appropriateness of equation (1) with regard to mortality among males, it is necessary to know something about its goodness-of-fit to male mortality data in comparison with its goodness-of-fit to female mortality data. To acquire such knowledge, Pearson's chi-square tests (Pearson, 1900) with regard to (1) were performed on national mortality data, in one-year classes (0-99 years), for Swedish males and females 1982. The parameters were estimated through the maximum-likelihood method (Bickel & Docksum, 1977). Programming in BASIC solved all equations involved. Since there are five parameters to estimate, both tests would have 95 degrees of freedom. In view of the fact that the data set that was used did not exactly consist of identically and independently distributed observations, and since the power of the test was extremely large (49 thousand deaths among the males, 42 thousand deaths among the females), we can, however, not use the standard significance test associated with the chi-square distribution to evaluate the results. The chi-square sums obtained, however, are interesting when compared with each other. For males the chi-square sum was 265 and for females it was 142 - a remarkable difference. Further, based on the publications 'Statistical Abstract of Sweden 1968, 1969, ..., 1992', a plot of the ratio of mean empirical mortality rates in 1968-1992 between Swedish men and women was made (figure 2). As seen in the figure, there is a clear bulge in the risk ratio in the beginning of adulthood with its maximum at 20 years. Since (1) fits well when applied to female mortality, these results clearly speak in favour of the theory that the passage into manhood is associated with an added mortality risk that ought to be regarded as statistically important.
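For readers who want to reproduce this kind of comparison, the Pearson sum is simply the sum over one-year age classes of (observed - expected)^2 / expected; the sketch below uses placeholder counts, not the Swedish 1982 data.

```python
import numpy as np

# Pearson goodness-of-fit sum over one-year age classes:
# chi2 = sum_k (observed_k - expected_k)^2 / expected_k
def pearson_chi_square(observed, expected):
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return float(np.sum((observed - expected) ** 2 / expected))

# Placeholder counts only; the actual death counts per age class are not
# reproduced here.
obs = [120, 95, 80, 60]
exp = [110, 100, 85, 58]
print(pearson_chi_square(obs, exp))
```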
Figure 2:
The average empirical mortality risk for Swedish males divided by that for Swedish females 1968-1992, plotted against age.
Traffic casualties
A possible explanation of the bulge in the risk-ratio between men and women during the passage into adulthood is that reckless driving could be regarded as a manhood trial, so that the traffic casualties would cause a non-negligible bulge in the density function of the men but not in that of the women. The deaths in 1982 caused by traffic were therefore removed from the data and new goodness-of-fit tests, of the model described by equation (1), were performed. Now, without the traffic casualties, the chi-square sum was 138 for males and 140 for females. From this test we may conclude that the only reason why equation (1) does not fit a Swedish male mortality experience would be an accident hump in the beginning of adulthood.
A mathematical treatment of the accident hump
Pearson (1895) suggested that all different causes of death could be divided into five types of mortality. Each type would follow its own frequency function, and by conditioning on type of mortality a frequency function for the total mortality would be obtained by a weighted sum of the five conditional functions. He also proposed specific families of frequency functions for each of the five types of mortality, and subsequently wound up with a model for all-cause mortality that involved 14 parameters to be determined from the sample. Although the mortality model of Pearson might be considered too complex to be useful in actuarial and demographic work (Brass, 1974), the approach to the problem - the application of a mixture distribution to handle multimodality - has been broadly embraced by statisticians (e.g. Moore and Gray 1993; McLaren et al. 1991; Boos and Brownie 1991; Mendell, Thode and Finch 1991; Hall and Titterington 1984; Larson 1985; Basford and McLachlan 1985; Davenport, Bezdek and Hathaway 1988).
If we assume that there is such a thing as a manhood trial - an added hazard intentionally or unintentionally imposed on males, either by themselves or by the environment, which is associated mainly with the passage into manhood - and that some males would die as a direct consequence of such a trial, then mortality among males could be divided into two types: mortality from manhood trials and mortality from other causes. A function that fits a male mortality experience would thus be obtained by the following scheme: Let F_1(x) be the death distribution given that the subject will die a "natural" death and F_2(x) be the distribution given that the death is caused by a "manhood trial". Let α be the probability that a death will be natural; the all-cause distribution is then a weighted mixture of the two. The resulting model, equation (5), was tried on the Swedish males of 1982. A BASIC program was used to estimate the parameters through maximisation of the likelihood function. A goodness-of-fit test with 92 degrees of freedom was then performed, with a resulting chi-square sum of 127. This should be compared with 265, which was the corresponding sum for the smaller model, equation (1). In other words, when it comes to goodness-of-fit we gained quite a bit by adding the three extra parameters a_4, a_5 and α - well worth the complications of having three more parameters. The parameter estimates are given in table 1, and a graphic representation of the resulting density function is given in figure 4. As seen in the table, α is close to one. This indicates after all that not a very large percentage of the Swedes die because of a "manhood trial".
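The mixture structure described in this paragraph, F(x) = αF_1(x) + (1 - α)F_2(x), and the likelihood that was maximised can be sketched as follows; the component densities f_1 and f_2 are left abstract because the exact published form of equation (5) is not reproduced here, so this is only an illustrative outline.

```python
import numpy as np

# Two-component mixture of death distributions: "natural" deaths occur with
# probability alpha and "manhood-trial" deaths with probability 1 - alpha.
def mixture_cdf(x, alpha, F1, F2):
    return alpha * F1(x) + (1.0 - alpha) * F2(x)

# Log-likelihood of the observed ages at death under the mixture; the paper
# maximised this with a BASIC program, varying alpha together with the
# parameters hidden inside the component densities f1 and f2.
def log_likelihood(ages_at_death, alpha, f1, f2):
    ages = np.asarray(ages_at_death, dtype=float)
    dens = alpha * f1(ages) + (1.0 - alpha) * f2(ages)
    return float(np.sum(np.log(dens)))
```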
Figure 4:
The probability density functions f_1, f_2 and f, fitted to mortality in the male population of Sweden 1982.
As seen in equation (5), G_2(x) contains 5 parameters, but (for the sake of parsimony) only two of those, a_4 and a_5, are unique to that function while the other three are shared with G_1(x). The reader might wonder if those two parameters are sufficient to render f plastic enough to attain the different shapes that one might associate with a death distribution for people who die from "manhood trials". Experiments have shown that they are. The effect on f_2 when a_4 is varied, while the other parameters retain the values given in table 1, is shown in figure 5. The effect on f_2 when a_5 alone is varied is shown in figure 6. The effect of varying both a_4 and a_5, in a way that still retains the maximum of f_2 at the same place, is shown in figure 7. The ultimate test would of course be if a completely rectangular survival function could be obtained, i.e. if we could describe the mortality distribution even if all manhood trials were to occur only at one specific age. In figure 8 it is shown that such distributions can be obtained without having to vary any other parameters but a_4 and a_5.
Figure 5:
The effect on f_2 when a_4 is varied, while the other parameters retain the values given in table 1. To summarise, many different shapes and locations of the accident hump can be obtained by varying the parameters a_4 and a_5. The only reason for not dropping the parameters a_2, a_3 and c in G_2 is to make sure that the mean expectation of life, for the people dying from "manhood trials", exists and is finite. According to a theorem (Hannerz 2001) the above expectation will always exist and will be finite if at least one of the last two terms in G_2 is kept in the model.
Discussion
In the present work a five-parameter survival function, intended to model mortality in modern female populations, was tested on mortality among Swedish males. It was concluded that the model did not fit, and that the reason for the lack of agreement between the model and the data was a hump in the observed mortality curve, associated with the passage into manhood. A continuous eight-parameter survival function was introduced to handle that problem. The concept of manhood trials, in a wide sense, was used to justify the structure and parameterisation of the function. That concept is admittedly woolly; the hump is, however, very real. Since the age where the peak of the hump occurs is the most favourable age with regard to reaction times, intelligence and muscular strength (Åstrand and Rodahl 1986; Sinclair 1985; Mortensen 1997), it is also evident that the cause of the elevated mortality would not be a sudden slump in the physical ability to withstand destruction, but rather an increased exposure to risk. Medical development has virtually erased the hump associated with maternal mortality. Some people might see it as a goal to eliminate the male hump, and if they do so, the law introduced in the present paper might be useful in that it provides a parameter with which to monitor the progress of such work. It also defines an end phenomenon (α=1). Another way of regarding the present work is that it merely affords a mathematical model that can be used a) to smooth out errors in the data, and b) to calculate life expectancies and death probabilities as a function of age and risk period. In this context the application of the system is independent of the interpretation of its parameters.
Figure 3: The proportion of deaths among Swedish males 1982, caused by traffic accidents.
Figure 6: The effect on f_2 when a_5 is varied, while the other parameters retain the values given in table 1.
Table 1 :
Parameter estimates for Equation (2) fitted to mortality in the Swedish male population 1982. | 2014-10-01T00:00:00.000Z | 2001-05-17T00:00:00.000 | {
"year": 2001,
"sha1": "523e5c301861eac852317c1fa4e45a2b64c9a464",
"oa_license": "CCBYNC",
"oa_url": "https://www.demographic-research.org/volumes/vol4/7/4-7.pdf",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "523e5c301861eac852317c1fa4e45a2b64c9a464",
"s2fieldsofstudy": [
"Mathematics",
"Sociology"
],
"extfieldsofstudy": [
"Economics"
]
} |
11131253 | pes2o/s2orc | v3-fos-license | Impact of Splenectomy on Thrombocytopenia, Chemotherapy, and Survival in Patients with Unresectable Pancreatic Cancer
Background Patients with unresectable pancreatic cancer (PDAC) or endocrine tumors (PET) often develop splenic vein thrombosis, hypersplenism, and thrombocytopenia which limits the administration of chemotherapy. Methods From 2001 to 2009, 15 patients with recurrent or unresectable PDAC or PET underwent splenectomy for hypersplenism and thrombocytopenia. The clinical variables of this group of patients were analyzed. The overall survival of patients with PDAC was compared to historical controls. Results Of the 15 total patients, 13 (87%) had PDAC and 2 (13%) had PET. All tumors were either locally advanced (n = 6, 40%) or metastatic (n = 9, 60%). The platelet counts significantly increased after splenectomy (p < 0.01). All patients were able to resume chemotherapy within a median of 11.5 days (range 6–27). The patients with PDAC had a median survival of 20 months (range 4–67) from the time of diagnosis and 10.6 months (range 0.6–39.8) from the time of splenectomy. Conclusions Splenectomy for patients with unresectable PDAC or PET who developed hypersplenism and thrombocytopenia that limited the administration of chemotherapy, significantly increased platelet counts, and led to resumption of treatment in all patients. Patients with PDAC had better disease-specific survival as compared to historical controls.
Introduction
The pancreas has a diverse cellular heterogeneity and function, and can give rise to a number of histologically distinct malignancies. Most malignant cancers originate from the ductal epithelium or endocrine cells and include pancreatic ductal adenocarcinomas (PDAC) and malignant endocrine tumors (PETs). Each histologic type has a different molecular signature and clinical course; PDACs are associated with the worst prognosis, and PETs are usually less aggressive. 1,2 Most patients with PDAC (85%) present with locally advanced or metastatic tumors that are unresectable. Treatment with gemcitabine-based chemotherapy has been shown to significantly improve survival, albeit to only a small degree. 3 In contrast, PETs usually present at an earlier stage. Chemotherapy is determined by the grade of the tumor, with high-grade tumors more likely to respond. [4][5][6] Thus, the goal of treatment for unresectable PDAC or PET is treatment with chemotherapy.
By virtue of the anatomic location of the pancreas, locally advanced PDAC or PETs can lead to thrombosis or occlusion of the splenic, superior mesenteric (SMV), and/or portal (PV) vein(s) with resultant hypersplenism. As in patients with cirrhosis and portal hypertension, the enhanced splenic function often produces thrombocytopenia. In addition, cytotoxic chemotherapeutic regimens, especially gemcitabine, often induce bone marrow suppression, which results in thrombocytopenia. When this occurs, many patients must stop their treatment, since serious and potentially lethal side effects could develop.
We hypothesized that a palliative splenectomy for patients with locally advanced unresectable PDAC or PETs who developed hypersplenism and thrombocytopenia that limited the administration of chemotherapy, would extend the duration of treatment and improve disease-specific survival (DSS). To investigate our hypothesis, we analyzed our experience with 15 patients who were managed with this novel treatment strategy and compared the survival of the PDAC subgroup of patients with stage-matched historical controls.
Patients
Approval from the University of California, Los Angeles Office for the Protection of Research Subjects Institutional Review Board was obtained prior to initiating this study. Using a prospectively collected pancreatic cancer database, we performed a review of all patients from 2001 to 2009 with locally advanced or metastatic fine needle aspirate or biopsy (core needle, incisional, or excisional) confirmed PDAC or PET who were unresectable and underwent a splenectomy for severe thrombocytopenia that developed during administration of chemotherapy. The pathology reports were generated by one of four gastrointestinal pathologists on faculty at UCLA. The clinical, radiographic, and histopathologic findings; treatment and perioperative variables; and DSS of these patients were examined.
Clinical variables analyzed included gender, age, and stage at the time of diagnosis, and tumor histology (PDAC and PET). Radiographic variables analyzed included location of the tumor and PV/SMV/splenic vein status (patent vs. nonpatent) on high resolution computed tomography (CT) or magnetic resonance imaging (MRI) scans. Treatment variables analyzed included the pre-and postsplenectomy chemotherapeutic regimen administered and tumor response. Variables directly related to splenectomy that were examined included length of hospital stay, need for conversion to an open operation, white blood cell count and hemoglobin immediately after surgery (postoperative day 1), and pre-and postoperative platelet counts. Preoperative platelet counts were recorded at the last clinic visit prior to surgery. Postoperative platelet counts were recorded on the day of hospital discharge. The time to resumption of chemotherapy after splenectomy was also examined.
Survival Analysis
For survival analysis, the DSS of all patients with PDAC from the time of diagnosis or splenectomy was examined. For those patients who died, the date of death was determined from the clinic charts when available, or alternatively, the social security death index (http://ssdi. rootsweb.ancestry.com/cgi-bin/ssdi.cgi) by an exact match between the patient's name and birth date. If alive, the date of last follow-up was taken as the last time the patient was seen in clinic. The two patients with PET were not included in the survival analysis, as PET are less clinically aggressive than PDAC.
Statistical Analysis
For significance analysis, X 2 and Fisher's exact test were used as appropriate. DSS was estimated using the Kaplan-Meier method. All statistical analyses were performed using JMP statistical software (SAS Corporation, Cary, NC). Significance was assigned at the 0.05 level.
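A minimal sketch of this kind of analysis in Python is shown below; the original authors used JMP, the lifelines and SciPy packages are substitutions assumed here, and the duration/event arrays are placeholders rather than the study's patient-level data.

```python
import numpy as np
from scipy.stats import fisher_exact
from lifelines import KaplanMeierFitter

# Placeholder data: months of disease-specific survival and death indicators
# (1 = died of disease); NOT the study's actual patient records.
durations = np.array([10.6, 4.0, 39.8, 20.0, 0.6, 25.0])
events = np.array([1, 1, 0, 1, 1, 0])

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)
print(kmf.median_survival_time_)          # Kaplan-Meier median DSS estimate

# Example 2x2 comparison with Fisher's exact test (placeholder counts).
oddsratio, p_value = fisher_exact([[8, 2], [3, 7]])
print(p_value)
```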
Clinical, Radiographic, and Histopathologic Findings
From 2001 to 2009, 15 patients with unresectable pancreatic cancer who developed hypersplenism and thrombocytopenia, which limited the administration of their chemotherapy, underwent a splenectomy at UCLA Medical Center. The distribution of the clinical, radiographic, and histopathologic findings for these patients is listed in composite in Table 1 and individually in Table 2. Thirteen patients (87%) had primary disease; two patients (13%) recurred after a Whipple operation. The median age of patients was 56 years (range 25 to 62 years). Nine patients were male (60%) and 6 (40%) were female. Most patients had PDAC (n=13, 87%), while only two patients had PET (13%). All patients had locally advanced, stage 3 (n=6, 40%) or metastatic, stage 4 (n=9, 60%) disease. Nine tumors (60%) were located in the head/uncinate process and 6 (40%) were located in the body/tail. On highresolution CT/MRI, the portal or splenic veins were thrombosed in 12 (80%) patients (Fig. 1); the three other patients had documented splenomegaly on CT/MRI. In fact, splenomegaly was not routinely reported in the radiology report per the usual practice of the UCLA gastrointestinal radiologists for pancreas-protocol CT scans or MRIs. The median spleen weight was 348 g (range 164-525) but may be an underestimate of the actual spleen size due to morcellation prior to extraction. The spleen volumes are likewise not reported for similar reasons.
Treatment and Procedure Variables
The median time from the initial diagnosis of cancer to splenectomy was 9.8 months (0.3-58) during which all patients were administered chemotherapy. Chemotherapy was stopped due to thrombocytopenia within 2 weeks of surgery for all patients. Most patients with PDAC were administered a gemcitabine-based combination therapy (n=9, 69%) both before and after splenectomy; a 5fluorouracil (5-FU)-based combination regimen was used less frequently (n=4, 31%). All patients had at least a partial tumor response to both drug treatments; there were no complete responses.
Survival Analysis
The median follow-up for all survivors was 35 months (range 13-63) from the time of diagnosis and 25 months (range 0.6-51) from the time of splenectomy. The 13 patients with PDAC had a median survival of 20 months (range 4-67) with a 5-year DSS of 25% from the time of diagnosis, and a median DSS of 10.6 months (range 0.6-39.8) from the time of splenectomy (Fig. 2). Both patients with PET had well-differentiated tumors. One patient died of disease after 107 months, and the other is still alive with disease after 60 months.
Discussion
PDAC is the fourth leading cause of cancer-related deaths in the United States, with an overall 5-year survival of 4%. In 2009, 42,770 patients in the USA were diagnosed with PDAC and 35,240 died from their disease. 7 The poor outcome of patients with PDAC has been attributed to the advanced stage of disease at diagnosis, the poor response to current systemic and local therapies, and the aggressive biologic nature of the disease. Resection for PDAC provides the only chance for cure, but only about 15% of patients are eligible for surgery. 8 Even those patients who undergo a "curative resection" have a 5-year survival rate of 35% in the best series. 9 Most patients (85%) present with locally advanced or metastatic tumors, and they have a median survival of less than 12 or 5 months, respectively. 7 Chemotherapy can significantly extend DSS and decrease disease-related morbidity. 3 PETs have been studied much less frequently than PDAC primarily due to their low prevalence; only about 2,500 new PETs are diagnosed annually in the United States. [10][11][12] PETs are categorized as functional or nonfunctional depending on whether the secreted peptide is biologically active and produces a clinical syndrome; about 50% of nonfunctional PETs secrete peptides that are clinically silent. 13 Insulinomas are the most common type of PET, and a majority are benign. 14 In contrast, approximately 60% of non-insulin-secreting PETs are malignant. 11,15 Due to their less aggressive clinical behavior than PDAC and resistance to most current chemotherapeutic agents, PETs are treated aggressively with resectional therapy. However, cytotoxic chemotherapy is given to patients with unresectable PETs. Therapy is determined by the grade of the tumor. 4-6 Thus, chemotherapy is the primary goal of treatment for unresectable PET or PDAC for as long as the patient can tolerate it.
Locally advanced or recurrent pancreatic tumors of either histologic type in the head of the gland can involve the splenic vein, SMV, or PV. Tumors in the body or tail can involve the splenic vein. Either can cause venous thrombosis. 16 Left-sided portal hypertension, hypersplenism, and thrombocytopenia may result, which limits the patients' ability to tolerate aggressive chemotherapy. In this study, we examined the perioperative morbidity and effectiveness of splenectomy on restoring platelet counts to normal, administration of chemotherapy, and survival in our small series of 15 patients. A similar analysis was performed on 11 patients with hepatitis C, cirrhosis, and portal hypertension. 17 In this series, splenectomy reversed the hypersplenism-induced thrombocytopenia, and patients could resume pegylated interferon therapy. A recent meta-analysis 3 of 50 trials (7,043 participants) on the effectiveness of 5-FU- or gemcitabine-based chemotherapy and radiotherapy for inoperable pancreatic cancer found that chemotherapy can significantly improve 1-year mortality (p<0.00001) in patients with locally advanced or metastatic PDAC and can also significantly decrease morbidity. Gemcitabine-platinum combinations significantly reduced 6-month mortality on subgroup analysis (p<0.001) and currently are the standard of care for the disease. Unfortunately, a number of factors often limit administration of chemotherapy to patients with pancreatic cancers. These include a poor functional or nutritional status; an unresponsive tumor and thus no clinical benefit to giving the drugs; bone marrow suppression that can result in severe anemia, leucopenia, and thrombocytopenia, 18 or isolated thrombocytopenia. The potential causes of isolated thrombocytopenia include hypersplenism, bone marrow suppression with preferential inhibition of platelet production, or other very rare causes such as gemcitabine-associated thrombotic microangiopathy, 19 or capecitabine (Xeloda)-induced idiopathic thrombocytopenic purpura. In fact, a major side-effect profile listed on the gemcitabine package insert includes thrombocytopenia. Thus, patients who are receiving chemotherapy, particularly gemcitabine-based regimens, are at risk of developing thrombocytopenia. With concurrent hypersplenism, the risk is even higher, as bone marrow production of platelets is usually suppressed. Hypersplenism may unmask subclinical thrombocytopenia.
Figure 1 Representative pancreas-protocol CT scan from a patient with a PDAC located in the body/tail who has complete occlusion of the splenic vein and an enlarged spleen.
A recent study to develop a prognostic score that would predict survival after resection for PETs, using 3,851 patients from the National Cancer Data Base (1985-2004), found that age, grade, and distant metastases were the most significant predictors. 20 Administration of adjuvant chemotherapy was not associated with increased survival. Nevertheless, cisplatin and etoposide combination therapy is effective in treating patients with poorly differentiated PETs, while streptozocin, doxorubicin, and 5-fluorouracil is the standard cytotoxic regimen for functional PETs. 21 In fact, several studies suggest that PETs are more responsive to chemotherapy than endocrine tumors in other parts of the gastrointestinal tract, most notably carcinoid tumors. Our two patients with PETs who underwent splenectomy and aggressive chemotherapy have had excellent survival outcomes. As listed in Table 2, one patient is still alive with disease after 60 months and recently underwent an extensive resection of the primary tumor and multiple liver metastases. The other patient eventually died of disease after a rather long 9-year course.
Patients, with either PDAC or PET, who are offered splenectomy must demonstrate a good functional status, preferably with thrombocytopenia as the only factor limiting treatment. A complete blood cell count should be obtained preoperatively to exclude cytotoxic chemotherapy-induced bone marrow suppression as the primary cause of thrombocytopenia. If other blood elements are also low, particularly the absolute neutrophil count, then the chemotherapy should be considered as the primary cause of thrombocytopenia and splenectomy deferred. In this case, the dose of chemotherapy should be lowered or combination changed; alternatively, one might elect to give drugs that stimulate bone marrow production, such as granulocyte colony-stimulating factor or erythropoietin. If isolated thrombocytopenia is present, with the other elements normal, and there is evidence of hypersplenism on high-resolution imaging (e.g., portal vein thrombosis or an enlarged spleen), then splenectomy should be pursued. Ideally, we prefer that patients have a good response to chemotherapy, as measured by a decrease in tumor size or extent of disease on imaging and tumor markers; although, this was not the case in the present series, as patients underwent splenectomy over a wide time range from the time of diagnosis. CA19-9 is the best serum marker of response for PDAC; 22 chromogranin, synaptophysin, pancreatic polypeptide, or gastrin can be used for PET. 23 Patients must not have end-stage disease and severe malnutrition. We require that patients have a preoperative abdominal CT or MRI scan, which are usually being done for disease surveillance during treatment. The primary tumor is evaluated to ensure that it is not growing into the splenic hilum or to note additional features (varices, etc.) that will help in planning the procedure. In addition, the abdomen is evaluated for any signs of carcinomatosis and/or ascites. By using these stringent preoperative criteria prior to splenectomy, perioperative morbidity and mortality can be minimized, and platelet counts are likely to respond.
Patients who are not operative candidates can alternatively undergo splenic artery embolization or external beam splenic irradiation, as these two treatments can also potentially reverse hypersplenism-induced thrombocytopenia. 24 Embolization should be considered as second-line treatment after splenectomy because it can be associated with significant postoperative pain and splenic abscesses. 25 Furthermore, splenic irradiation is rarely performed for hypersplenism but can be effective for relief of pain associated with splenomegaly in patients with hematologic disorders. 26 In our experience, as discussed previously, splenectomy is safe and can be performed with minimal morbidity and a short hospital stay.
There were no deaths in our series; hospital stay was short (median 3 days), and patients' platelet counts responded rapidly with quick resumption of chemotherapy (median 11.5 days). The median follow-up for all survivors was 35 months (range 13-63) from the time of diagnosis. The 13 patients with PDAC had a median survival of 20 months (range 4-67) with a 5-year DSS of 25% from the time of diagnosis, and a median survival of 10.6 months (range 0.6-39.8) from the time of splenectomy.
Conclusion
In conclusion, while the optimal treatments for patients with locally advanced or metastatic PDAC or PET are in evolution, we found that our novel strategy of splenectomy for the development of hypersplenism-induced thrombocytopenia that limited chemotherapy treatment was effective. Splenectomy was performed with minimal morbidity, and was associated with a rapid increase in platelet counts and a short time before resuming chemotherapy. In addition, patients with PDAC who underwent this novel treatment strategy had significantly improved DSS as compared to historical controls.
Grant Support/Assistance None
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. | 2017-08-03T00:16:57.160Z | 2010-03-23T00:00:00.000 | {
"year": 2010,
"sha1": "bf345908defab1b2cb347757f7561c8a9c38d8cb",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11605-010-1187-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8205a6efc89173c92ce69c29a2f5addb77952cc0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
97972842 | pes2o/s2orc | v3-fos-license | Physicochemical Properties of Poly(lactic acid-co-glycolic acid) Film Modified via Blending with Poly(butyl acrylate-co-methyl methacrylate)
A series of poly(lactic acid-co-glycolic acid) (PLGA)/poly(butyl acrylate-co-methyl methacrylate) (P(BA-co-MMA)) blend films with different P(BA-co-MMA) mole contents were prepared by casting the polymer blend solution in chloroform. Surface morphologies of the PLGA/P(BA-co-MMA) blend films were studied by scanning electron microscopy (SEM). Thermal, mechanical, and chemical properties of PLGA/P(BA-co-MMA) blend films were investigated by differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), tensile tests, and surface contact angle tests. The introduction of P(BA-co-MMA) could modify the properties of PLGA films.
Introduction
It is well known that polymer blending has been a widely used way of improving or modifying the physicochemical properties of polymer materials [1,2]. An important property of a polymer blend is the miscibility of its ingredients, as it can affect the mechanical properties, morphology, permeability, and degradation [1,2]. Many studies regarding miscibility in multi-component polymer systems have been reported. Among them, polymer blends between biopolymers and synthetic polymers are of particular significance, as they could be used as biomedical and biodegradable materials [3-5].
Biodegradable aliphatic polyesters such as PLGA have versatile biodegradation properties because of their molecular weight and chemical composition [24]. Nevertheless, there have been many attempts to improve the properties of these polymers to make them suitable for specific applications. For example, to prolong the circulation time of PLGA nanoparticles in the blood stream in vivo, PLLA/PEG di-block copolymers were coated onto the surface of PLGA nanoparticles by blending PLLA-PEG di-block copolymers with PLGA during the nanoparticle formulation process [25]. It was postulated that the PEG chains oriented toward the surface of the nanoparticles not only suppress the adsorption of serum proteins but also reduce the extent of cell recognition [9]. As is known, protein adsorption onto polymer surfaces can strongly affect cellular interactions with a foreign surface [26], induce macrophage activation followed by an immunologic response to foreign materials [27], and control the adhesion behavior of cells [28].
For PLGA films, different mole ratios of LA to GA can change their properties; an increase in GA content promotes the hydrophilicity of the film but decreases its tensile strength. Usually, polymer blending is used to modify the properties of PLGA films. Relative to PLGA (e.g., with a mole ratio of LA to GA of 60:40), the synthetic P(BA-co-MMA) (mole ratio of BA to MMA of 1:3) has a relatively higher hardness [29,30]. The introduction of P(BA-co-MMA) could modify the properties of PLGA films and broaden their range of applications. However, to the best of our knowledge, no experimental work has so far been reported on the physicochemical properties of PLGA/P(BA-co-MMA) blend films. In the present work, a series of PLGA/P(BA-co-MMA) blend films with different P(BA-co-MMA) mole contents were prepared by casting the polymer blend solution in chloroform. Surface morphologies of the PLGA/P(BA-co-MMA) blend films were investigated by the SEM technique. Thermal, mechanical, and chemical properties of the PLGA/P(BA-co-MMA) blend films were studied by DSC, TGA, tensile tests, and surface contact angle tests. It was found that the introduction of P(BA-co-MMA) could exert marked effects on the properties of PLGA films. received. Poly(butyl acrylate-co-methyl methacrylate) (Mw = 50,000, mole ratio of BA to MMA of 1:3) was purchased from Shandong Shituo Chemical Co. Ltd. (China). Chloroform and other solvents were of analytical grade and used without further purification.
Preparation of PLGA/P(BA-co-MMA) blend film
The polymer blend films were prepared by casting a 30 wt% polymer blend solution in chloroform onto clean glass plates and drying them under vacuum at 50 °C. It was also found that, when the P(BA-co-MMA) mole content in the polymer blend is over 12%, the polymer blend cannot form a continuous film.
Scanning Electron Microscopy (SEM)
The morphologies of PLGA/P(BA-co-MMA) blend films were observed using a scanning electron microscope (Sirin 200, FEI, Holland). Gold was sputtered on the samples in vacuum. The acceleration voltage was 5 kV.
Differential Scanning Calorimetry (DSC)
DSC measurements were made on a DSC Q100 (TA, USA) differential scanning calorimeter; the temperature was calibrated with indium under a nitrogen atmosphere. Samples of about 8 mg were accurately weighed. The temperature range was 20-160 °C and the heating rate was 15 °C/min. The DSC thermograms were obtained from a second heating run in order to observe the glass transition temperature [9].
Thermogravimetric Analysis (TGA)
Thermogravimetric analysis (TGA) was carried out on a STA 4490 C TG-DTA analyzer (NETZSCH, Germany) at a heating rate of 10 °C/min under a nitrogen atmosphere over the temperature range 30-550 °C. Samples of approximately 12 mg were used for the measurements.
Tensile tests
Tensile tests were carried out with an Instron 4468 machine (USA). The crosshead speed was set to 100 mm/min. For each data point, five samples were tested and the average value was taken.
Surface contact angle tests
The static contact angle was measured with an optical contact angle meter CAM 200 (KSV Instrument Ltd., Finland). A 5 microlitre drop of pure distilled water was placed on the polymer blend film surface using a syringe with a 22-gauge needle. Each contact angle measurement was performed within 10 s of depositing the drop to ensure that the droplet did not soak into the film. The reported surface contact angles are the mean of five determinations [31].
Water-resistant pressure (mm) measurements
The water-resistant pressure (mm) measurements of PLGA/P(BA-co-MMA) blend films were carried out according to a conventional method. The sample films were cut into discs 30 mm in diameter and used to seal the mouth of a long graduated tube (graduations in millimetres; tube diameter: 10 mm). After the tube mouth was sealed with the film, the tube was inverted and deionized water was added into it drop by drop. As soon as the deionized water permeated through the film, the height of the water column was recorded [32].
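The method reports its result directly as the height of the water column (mm) that the film withstands before water permeates through. For readers who prefer pressure units, a minimal conversion sketch is shown below; the water density and gravitational constant are standard assumed values, not quantities taken from this paper.

# Convert a water-column height (mm) into hydrostatic pressure via P = rho * g * h.
# Illustrative only: the paper reports the result directly in mm of water column;
# the density of water and g below are standard assumptions, not from the paper.
RHO_WATER = 1000.0   # kg/m^3, approximate density of water at room temperature
G = 9.81             # m/s^2, standard gravitational acceleration

def column_height_to_pressure_pa(height_mm: float) -> float:
    """Hydrostatic pressure (Pa) exerted by a water column of the given height (mm)."""
    return RHO_WATER * G * (height_mm / 1000.0)

if __name__ == "__main__":
    for h in (100, 250, 500):  # hypothetical column heights in mm
        p = column_height_to_pressure_pa(h)
        print(f"{h} mm water column is about {p:.0f} Pa ({p / 1000.0:.2f} kPa)")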
SEM analysis
Figure 1 presents the surface photographs of PLGA/P(BA-co-MMA) blend films with various P(BA-co-MMA) mole contents: (a) 0%, (b) 4%, (c) 8%, and (d) 12%. As can be seen from Figure 1, the introduction of P(BA-co-MMA) changed the surface morphologies of the PLGA films: the surfaces become rougher and show slight phase separation as P(BA-co-MMA) segments are introduced. With increasing P(BA-co-MMA) content in the blends, on the one hand, PLGA chains and P(BA-co-MMA) segments can interact by entanglement; on the other hand, self-aggregation can occur between the P(BA-co-MMA) segments.
DSC analysis
Figure 2 displays the DSC curves of PLGA/P(BA-co-MMA) blends with various P(BA-co-MMA) mole contents: (a) 0%, (b) 4%, (c) 8%, and (d) 12%; the corresponding data are listed in Table 1. As seen from Figure 2 and Table 1, the glass transition temperature of the PLGA segments in the polymer blends slightly increased with increasing P(BA-co-MMA) mole content. Compared with the PLGA chains (mole ratio of LA to GA of 60:40), the P(BA-co-MMA) segments (mole ratio of BA to MMA of 1:3) are more rigid. The introduction of the rigid P(BA-co-MMA) segments could increase the glass transition temperature of the PLGA segments in the blends through interaction between the PLGA chains and the P(BA-co-MMA) segments.
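For context only (the authors do not use it), the Fox equation gives a common first-order estimate of the single glass transition temperature expected for a fully miscible binary blend; deviations of the measured Tg from this estimate are often read as evidence of specific interactions or, consistent with the slight phase separation seen by SEM here, of only partial miscibility:

1/T_g = w_1/T_{g,1} + w_2/T_{g,2}

where w_1 and w_2 are the weight fractions of the two components and T_{g,1} and T_{g,2} are the glass transition temperatures of the pure components expressed in kelvin.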
TGA analysis
Figure 3 shows the TG curves of PLGA/P(BA-co-MMA) blends with different P(BA-co-MMA) mole contents: (a) 0%, (b) 4%, (c) 8%, and (d) 12%. From Figure 3, it can be seen that the mass of the polymer blends decreases with increasing temperature, especially in the range 100-200 °C, corresponding to the evaporation of water, and 200-400 °C, corresponding to the decomposition of the polymer blends. Also seen from Figure 3, the initial decomposition temperature of the PLGA segments in the polymer blends slightly increased with increasing P(BA-co-MMA) mole content. As discussed above, the PLGA chains and the rigid P(BA-co-MMA) segments can interact by entanglement.
Figure 4 exhibits the DTG curves of PLGA/P(BA-co-MMA) blends with different P(BA-co-MMA) mole contents: (a) 0%, (b) 4%, (c) 8%, and (d) 12%; the corresponding data are listed in Table 2. As shown in Figure 4 and Table 2, the maximum decomposition temperature of the PLGA segments in the polymer blends slightly increased with increasing P(BA-co-MMA) mole content.
Tensile tests
Figure 5 indicates the relationship between the tensile strength of PLGA/P(BA-co-MMA) blend film and P(BA-co-MMA) mole content. As seen from Figure 5, the tensile strength of the polymer blend film increased with increasing P(BA-co-MMA) mole content in the blend. As already noted, compared with the PLGA chains used in this study, the P(BA-co-MMA) segments are relatively rigid, and their introduction could enhance the tensile strength of the blend film through interaction between the PLGA chains and the P(BA-co-MMA) segments.
Surface contact angle tests
Figure 6 reveals the relationship between the surface contact angle of PLGA/P(BA-co-MMA) blend film and P(BA-co-MMA) mole content. As shown in Figure 6, the surface contact angle of the blend film increased with increasing P(BA-co-MMA) mole content, indicating that the hydrophilicity of the blend film decreased. PLGA segments are slightly hydrophilic, while the P(BA-co-MMA) chains are hydrophobic; the introduction of P(BA-co-MMA) segments therefore decreases the hydrophilicity of the blend film.
Water-resistant pressure (mm) measurements
Figure 7 represents the relationship between the water-resistant pressure of PLGA/P(BA-co-MMA) blend film and P(BA-co-MMA) mole content. As shown in Figure 7, the water-resistant pressure of the blend film increased with increasing P(BA-co-MMA) content. As discussed above, PLGA chains are slightly hydrophilic while the P(BA-co-MMA) segments are hydrophobic, indicating that the increase in water-resistant pressure of the blend film is connected with the introduction of the hydrophobic P(BA-co-MMA) segments.
Conclusions
A series of PLGA/P(BA-co-MMA) blend films with various P(BA-co-MMA) mole contents were prepared by casting the polymer blend solutions in chloroform. SEM photographs showed that the introduction of P(BA-co-MMA) changed the surface morphologies of the PLGA films. DSC measurements indicated that the introduction of P(BA-co-MMA) increased the glass transition temperature of the PLGA segments in the polymer blends. TG measurements demonstrated that the introduction of P(BA-co-MMA) increased both the initial and the maximum decomposition temperatures of the PLGA segments in the polymer blends. Tensile tests verified that the tensile strength of the blend film increased with increasing P(BA-co-MMA) content. Both the surface contact angle tests and the water-resistant pressure tests showed that the introduction of P(BA-co-MMA) segments promoted the hydrophobicity of the blend film.
Materials
Poly(lactic acid-co-glycolic acid) (Mw = 90000), composed of a 60:40 ratio of lactic acid and glycolic acid units, was purchased from Jinan Daigang Biological Technology Co., Ltd. (China) and used as received. Poly(butyl acrylate-co-methyl methacrylate) (Mw = 50000, mole ratio of BA to MMA of 1:3) was purchased from Shandong Shituo Chemical Co. Ltd. (China). Chloroform and other solvents were of analytical grade and used without further purification.
Table 1. Glass transition temperature (°C) of the PLGA segments in the polymer blends with different P(BA-co-MMA) mole contents (mol %). a Ten experiments were tested.
Figure 5. Relationship between the tensile strength of PLGA/P(BA-co-MMA) blend film and P(BA-co-MMA) mole content.
Figure 6. Relationship between the surface contact angle of PLGA/P(BA-co-MMA) blend film and P(BA-co-MMA) mole content.
Figure 7. Relationship between the water-resistant pressure of PLGA/P(BA-co-MMA) blend film and P(BA-co-MMA) mole content.
Table 2. The maximum decomposition temperature of the PLGA segments in the polymer blends with different P(BA-co-MMA) mole contents.
Prehospital intravenous fentanyl administered by ambulance personnel: a cluster-randomised comparison of two treatment protocols
Background: Prehospital acute pain is a frequent symptom that is often inadequately managed. The concerns about opioid-induced side effects are well-founded. To ensure patient safety, ambulance personnel are therefore provided with treatment protocols with dosing restrictions, however, with the concomitant risk of insufficient pain treatment of the patients. The aim of this study was to investigate the impact of a liberal intravenous fentanyl treatment protocol on efficacy and safety measures.
Methods: A two-armed, cluster-randomised trial was conducted in the Central Denmark Region over a 1-year period. Ambulance stations (stratified according to size) were randomised to follow either a liberal treatment protocol (3 μg/kg) or a standard treatment protocol (2 μg/kg). The primary outcome was the proportion of patients with sufficient pain relief (numeric rating scale (NRS, 0–10) ≤ 3) at hospital arrival. Secondary outcomes included abnormal vital parameters as proxy measures of safety. A multi-level mixed effect logistic regression model was applied.
Results: In total, 5278 patients were included. Ambulance personnel following the liberal protocol administered higher doses of fentanyl [117.7 μg (95% CI 116.7–118.6)] than ambulance personnel following the standard protocol [111.5 μg (95% CI 110.7–112.4), P = 0.0001]. The number of patients with sufficient pain relief at hospital arrival was higher in the liberal treatment group than the standard treatment group [44.0% (95% CI 41.8–46.1) vs. 37.4% (95% CI 35.2–39.6), adjusted odds ratio 1.47 (95% CI 1.17–1.84)]. The relative decrease in NRS scores during transport was less evident [adjusted odds ratio 1.18 (95% CI 0.95–1.48)]. The occurrences of abnormal vital parameters were similar in both groups.
Conclusions: Liberalising an intravenous fentanyl treatment protocol applied by ambulance personnel slightly increased the number of patients with sufficient pain relief at hospital arrival without compromising patient safety. Future efforts of training ambulance personnel are needed to further improve protocol adherence and quality of treatment.
Trial registration: ClinicalTrials.gov (NCT02914678). Date of registration: 26th September, 2016.
Background
Efficient analgesic treatment is fundamental to ensure patient comfort and to facilitate transport from the incident site to hospital [1,2]. Notwithstanding, several studies have shown that acute pain remains insufficiently treated in emergency settings, and this may be even more frequent in an austere and uncontrolled prehospital environment [3,4]. A recent prehospital investigation reported a high prevalence (43%) of insufficient pain relief at hospital arrival among trauma patients treated by experienced physicians [5].
Intravenous opioids are the mainstay in the rapid relief of severe acute pain, but side effects, such as sedation and respiratory depression, cannot be overlooked [6][7][8][9][10][11][12][13][14][15][16][17][18]. To ensure patient safety, non-physician staff are therefore provided with simple one-drug protocols with dosing restrictions, however, with the potential risk of insufficient pain relief. In a recent prehospital study including 2348 adults and adolescents treated with intravenous fentanyl in a maximum dose of 2 μg/kg by Danish ambulance personnel, we showed that a substantial number of patients (58.4%) had moderate to severe pain at hospital arrival. In the same study, the frequency of abnormal vital parameters, as proxy measures of opioid-induced serious adverse effects, was low [19].
The aim of this study was therefore to explore the impact of liberalising a standard treatment protocol from a maximum dose of 2 μg/kg to an allowed maximum dose of 3 μg/kg on the intensity of pain at hospital arrival. We hypothesised that a larger proportion of patients would experience sufficient pain relief with the liberal treatment protocol. In addition, we examined the frequency of abnormal vital parameters.
Study design
This study was a two-armed, open-label, cluster-randomised controlled trial comparing the impact of a standard and a liberal fentanyl treatment protocol on efficacy and safety parameters.
Setting
The trial was conducted in the Central Denmark Region over a one-year period from 1st October 2016 to 30th September 2017. Sixty-six ambulances from 36 ambulance stations provide the emergency medical service for the 1,300,000 inhabitants of Central Denmark Region, corresponding to 22% of the Danish population. The Region covers both rural and urban areas [20]. The prehospital model is two-tiered with Emergency Medical Technicians (EMTs) or Paramedics (PMs) doing ground ambulance transport, and prehospital physicians/anaesthesiologists being dispatched to potentially life-threatening conditions in a rendezvous setup (e.g. in case of an opioid overdose and need of advanced airway management). Few stations in the Central Denmark Region also have nurse-manned emergency care units that can be dispatched in cases where physicians are unavailable [21].
All patient care recordings were either automatically (peripheral oxygen saturation, blood pressure and pulse) or manually (Glasgow Coma Scale (GCS), respiratory rate) entered into an electronic touchscreen-based prehospital medical record by EMTs/PMs. Assessment of pain intensity (numeric rating scale (NRS), 0-10) was required according to protocol before initiation of fentanyl treatment and every 5-10 min until hospital arrival. Furthermore, each registration was logged with the exact date and time combined with the title (EMT/PM/physician/nurse) and unique user identification number of the treating healthcare provider.
Population
The sample included all patients treated with intravenous fentanyl by EMTs or PMs. We did not consider patients for analysis if: 1) prehospital physicians were dispatched to the scene and/or were providers of treatment, 2) they were treated by personnel other than EMTs/PMs (nurses), 3) they had unregistered personal civil registration numbers and 4) the same patient appeared more than once in the inclusion period, in which only the first case was included. Patients with pain scores registered at both baseline and hospital arrival were considered for the analysis of efficacy, and all patients were considered for the analysis of safety.
Randomisation
Randomisation was performed at the ambulance station level with allocation to either the standard treatment protocol for intravenous fentanyl (max 2 μg/kg) or the liberal treatment protocol (max 3 μg/kg). The 36 ambulance stations were stratified into 5 levels based on the average number of transports per month (level 1 (n = 8): < 70 transports; level 2 (n = 12): 70-199 transports; level 3 (n = 8): 200-399 transports; level 4 (n = 4): 400-899 transports; and level 5 (n = 4): ≥ 900 transports). Each type of treatment protocol was then randomly allocated in a 1:1 ratio within each of the 5 levels. Randomisation of ambulance stations was carried out by an impartial statistician at Aarhus University using STATA version 13.1 (StataCorp, TX, USA).
Blinding
Once randomised, ambulance station leaders and personnel were instructed to follow the assigned treatment protocol throughout the study period and were thus not blinded. Patients treated with intravenous fentanyl were unaware of the treatment protocol applied.
Intervention
A liberal fentanyl treatment protocol was implemented in the ambulance stations allocated to the intervention arm with a 2-month run-in period prior to study start between 1st August 2016 and 30th September 2016. The liberal treatment protocol allowed EMTs/PMs to administer intravenous fentanyl at a maximum dose of 3 μg/kg, with the first dose limited to 1.5 μg/kg. The new fentanyl treatment protocol was implemented at the enrolled stations following standard operating procedures and thus left to the discretion of the leaders of the enrolled ambulance stations. For ambulance stations not allocated to the new protocol, the maximum dose of fentanyl was 2 μg/kg with the first dose limited to 1 μg/kg. The exact dosing of fentanyl was decided by the EMT/PM on a μg/kg basis considering patient characteristics such as comorbidity and age.
Outcomes
Primary and secondary outcomes pertained to the patient level. The primary outcome was the proportion of patients with sufficient pain relief at hospital arrival, defined as an NRS equal to or below 3 [22]. As proxy measures of safety, secondary outcomes included any occurrence of abnormal vital parameters defined as follows: GCS < 15, respiratory rate < 10/min, peripheral oxygen saturation < 90% and mean arterial pressure (MAP) < 70 mmHg [calculated as ((2 x diastolic blood pressure) + (systolic blood pressure))/3] [23]. To investigate whether the mean cumulative dose of fentanyl increased following the change of protocol, fentanyl administration practices for each EMT/PM were assessed for a one-year period prior to study start, between 1st August 2015 and 31st July 2016.
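As an illustration of how these definitions translate into a screening rule, a minimal sketch is given below; the variable names are hypothetical, and only the cut-offs and the MAP formula are taken from the definitions above.

# Flag the study's proxy safety outcomes for a single set of observations.
# Variable names (gcs, resp_rate, spo2, sbp, dbp) are illustrative, not from the paper.
def mean_arterial_pressure(sbp: float, dbp: float) -> float:
    """MAP = ((2 x diastolic blood pressure) + systolic blood pressure) / 3."""
    return (2.0 * dbp + sbp) / 3.0

def abnormal_vitals(gcs: int, resp_rate: float, spo2: float, sbp: float, dbp: float) -> dict:
    """Return which abnormal-vital-parameter criteria are met."""
    return {
        "gcs_below_15": gcs < 15,
        "resp_rate_below_10_per_min": resp_rate < 10,
        "spo2_below_90_pct": spo2 < 90,
        "map_below_70_mmhg": mean_arterial_pressure(sbp, dbp) < 70,
    }

if __name__ == "__main__":
    # Hypothetical observation: blood pressure 100/55 mmHg gives MAP = (110 + 100) / 3 = 70 mmHg.
    print(mean_arterial_pressure(100, 55))                                   # 70.0
    print(abnormal_vitals(gcs=15, resp_rate=12, spo2=94, sbp=100, dbp=55))   # all criteria False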
Data analysis
Information regarding clinical outcomes was extracted from the electronic medical record. Dispatch data on prehospital time stamps and geographical information on each transport were obtained from the technical software used by the dispatch staff of the Emergency Medical Communication Center [21]. Individual data linkage across registries and exact information on age and sex was enabled by each patient's central personal registration number [24]. The Danish National Patient Registry were used to obtain the patients' primary hospital diagnoses classified according to the Danish version of the International Classification of Diseases, 10th revision (ICD-10) [25] and to calculate a 10-year Charlson Comorbidity Index (CCI) score [26,27]. Primary hospital diagnosis codes were considered reasonable for linkage with electronic medical record data if patients were admitted within 12 h of prehospital (EMCC) contact.
All statistical analyses were conducted using STATA version 13.1 (StataCorp, TX, USA). Means with 95% confidence intervals (CI) are given for continuous parametric variables. Continuous nonparametric data are provided as medians with interquartile ranges (IQR). Categorical and binomial data are presented as numbers and proportions with 95% CIs. A chi-square test, unpaired or paired Student's t-test, or Mann-Whitney U-test was used for descriptive statistics as appropriate. A multi-level mixed effect logistic regression model was fitted on binary outcomes in order to account for the potential correlation between patients within the same cluster (i.e. patients nested within EMT/PM and EMTs/PMs nested within ambulance station). The highest cluster level was defined as 1) ambulance station, followed by 2) EMT/PM and 3) the patient. For the adjusted estimates of safety outcomes, the model could not be fitted without removing the highest level of clustering. The results are presented as unadjusted and adjusted odds ratios (OR) with 95% CIs, with patients treated with the standard protocol as reference.
The covariates considered in the multivariate statistical analyses are based on the existing literature and clinical experience: age (restricted cubic splines) [5,[28][29][30][31][32], sex (binary) [5,31], CCI score (0, 1, 2, 3+) [33][34][35], underlying cause of pain (binary) [28,29,36,37] and patient time with EMT/PM (logarithmic function of time) [29,31,32,38]. On the EMT/PM cluster level, a mean cumulative dose of fentanyl was added as a covariate, and the title of ambulance personnel (EMT or PM) was used as a proxy measure of experience. Ambulance station size (categorical) was added on the highest cluster level. Age and prehospital patient time were categorised as above to ensure the best statistical model fit. The underlying cause of pain was defined by primary hospital discharge codes (ICD-10), and patients were stratified according to ICD-10 chapter 19 (injury, poisoning, and certain other consequences of external causes (yes/no)).
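As a compact way of writing the model described above, a three-level random-intercept logistic regression for patient i treated by EMT/PM j at ambulance station k can be sketched as

logit Pr(Y_{ijk} = 1) = \beta_0 + \beta_1 Protocol_k + \boldsymbol{\beta}^{\top}\mathbf{x}_{ijk} + u_{jk} + v_k,  with  u_{jk} ~ N(0, \sigma_u^2)  and  v_k ~ N(0, \sigma_v^2),

where Y_{ijk} = 1 denotes sufficient pain relief (NRS ≤ 3) at hospital arrival, Protocol_k indicates allocation of station k to the liberal protocol, x_{ijk} collects the covariates listed above, u_{jk} is the EMT/PM random intercept and v_k the ambulance-station random intercept. This is a sketch of the general form only; the exact parameterisation used in STATA is not reproduced here.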
To handle potential differences in baseline pain intensity [5,[28][29][30][31][32]39] between the standard and the liberal treatment group, we assessed the relative (%) change in NRS from baseline to hospital arrival in a supplemental post-hoc analysis. The relative change in pain scores during ambulance transport was assessed with a multilevel mixed effect ordinal logistic regression model [40], using the same cluster levels and covariates as described above.
Sample size
We used a previous study from the same prehospital setting and with a similar patient population treated with intravenous fentanyl by EMTs/PMs as reference, in which 42% achieved NRS ≤ 3 at hospital arrival [19]. With a fixed number of clusters (36 ambulance stations), an intracluster correlation of 0.05 [41], 90% power and an alpha level of 5%, at least 1980 patients in each arm would be required in order to identify a 10 percentage point increase in patients with sufficient pain relief (NRS ≤ 3) at hospital arrival with the liberal pain treatment protocol compared to the standard pain treatment protocol. The data analyses are based on complete cases and include no imputation for missing data. Two-sided tests with P values < 0.05 were considered statistically significant.
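For orientation, the unadjusted (non-clustered) per-arm sample size underlying such a calculation can be reproduced as in the sketch below; the inflation for clustering that the authors applied (fixed number of clusters, intracluster correlation 0.05) is deliberately not reproduced here and is what accounts for the larger figure of 1980 patients per arm stated above.

# Unadjusted per-arm sample size for comparing two proportions (normal approximation).
# Values (p1 = 0.42, p2 = 0.52, alpha = 0.05, power = 0.90) follow the text above;
# the cluster-design inflation used in the study is not included in this sketch.
from math import ceil
from scipy.stats import norm

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.90) -> int:
    z_alpha = norm.ppf(1.0 - alpha / 2.0)
    z_beta = norm.ppf(power)
    pooled_var = p1 * (1.0 - p1) + p2 * (1.0 - p2)
    return ceil((z_alpha + z_beta) ** 2 * pooled_var / (p2 - p1) ** 2)

if __name__ == "__main__":
    print(n_per_arm(0.42, 0.52))  # roughly 519 patients per arm before any clustering adjustment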
Ethics
The study was approved by the Danish Data Protection Agency (no. 1-16-02-294-16) and the National Board of Health (no. 3-3013-2002/1). The local ethics committee was consulted, and the study was approved with a waiver of patient consent. The trial was registered at ClinicalTrials.gov (NCT02914678).
Results
Patient flow is presented in Fig. 1, and Fig. 2 gives an impression of how patients are grouped in clusters. A total of 2598 patients were treated with the standard treatment protocol and 2680 patients were treated with the liberal treatment protocol. No major differences in baseline characteristics of patients were seen (Table 1). As regards the primary outcome, the proportion of patients who had sufficient pain relief at hospital arrival was higher in the liberal treatment group than in the standard treatment group [44.0% (95% CI 41.8-46.1) vs. 37.4% (95% CI 35.2-39.6)]. Characteristics of patients with and without pain scores are presented in Table 4. Generally, patients with missing pain scores were older, had more comorbidities, had lower initial pain scores, and received lower doses of fentanyl compared with patients with complete information on pain scores. For the patients with missing pain scores, no differences in patient characteristics were observed between the standard treatment group and the liberal treatment group.
Discussion
In this large two-armed, cluster-randomised controlled trial we found that patients treated with a liberal protocol were more likely to have sufficient pain relief at hospital arrival than patients treated with a standard treatment protocol. It can be argued that the difference was small, and the question remains why more than half the patients had insufficient pain relief, even with a liberalised protocol. The explanation may be found in the conservative doses of administered fentanyl, which again may reflect concerns of malingering patients, apprehension of inducing side effects, and worries about blurring symptoms and further diagnostics [42,43]. The same concerns may have arisen in other prehospital studies in which higher doses (120-220 μg) were only obtained when administered by physicians or Australian/American paramedics without dosing restrictions (Table 5).
No differences in the occurrence of abnormal vital parameters were observed between the two groups and none of the patients were given naloxone, which suggests that EMTs/PMs can safely administer fentanyl under a liberalised protocol. Few other randomised controlled trials on prehospital analgesia for adults have been conducted, including small samples ranging from 24 to 312 patients. Similar to our study, two randomised controlled trials have investigated the impact of increasing opioid doses on analgesic efficacy and safety. Woollard et al. investigated 172 prehospital patients with diverse aetiologies of pain. The researchers found that a rapid administration regimen leading to higher cumulative doses (14.8 mg) of intravenous nalbuphine (a semi-synthetic opioid) was more effective than a cautious regimen (10.7 mg) with a lower dose (ΔNRS: 4.29 units vs. 3.49 units, P = 0.028). In terms of side effects, a higher frequency of drowsiness was observed in the patients treated with the rapid regimen [44]. Bounes et al. reached the opposing conclusion in a study of 106 prehospital patients with acute pain, finding no superior analgesic effect or difference in safety parameters for a higher fixed dose of intravenous morphine compared to a lower fixed dose (absolute doses not given) [10].
Other studies have compared newer synthetic opioids with intravenous morphine. Smith et al. investigated 204 trauma patients in a physician-staffed helicopter and found no difference in analgesic effect, occurrence of abnormal vital parameters or adverse effects when morphine was compared with intravenous fentanyl [6]. A smaller physician-based study on a diverse sample of 60 patients in France demonstrated no difference between intravenous fentanyl and morphine in terms of efficacy, vital sign abnormalities and mild adverse effects [7]. Attempting to find a faster onset of action with intravenous fentanyl in 207 patients with ischemic-type chest pain, Weldon et al. found no analgesic superiority compared with intravenous morphine; also, no differences in vital signs or adverse effects were found [12]. Another small study of 36 patients with ischemic-type chest pain found a more rapid onset and more effective pain relief with intravenous alfentanil than with intravenous morphine and no differences in vital signs [13]. Investigating another fast-acting opioid, Bounes and colleagues found no superior analgesic effect of intravenous sufentanil compared to intravenous morphine for 108 trauma patients; in addition, vital signs and adverse effects were similar [9]. Other randomised controlled trials have assessed the impact of ketamine [45], combinations of morphine and ketamine [8,11] or other combinations [46,47], or regional nerve blockades [48,49]. Most of the trials have been inconclusive, underpowered and of varying quality, which probably reflects the jurisdictional and practical challenges of conducting research in an austere prehospital environment [50]. For the studies demonstrating analgesic superiority [8,11,13,44], the frequency of adverse effects was also higher, so neither specific conclusions nor recommendations can be made. Few other non-randomised (before-after) trials have sought to optimise prehospital pain management by modifying existing practice and educating the involved healthcare providers. These studies all found insignificant changes in pain scores, pain score documentation, and/or the cumulative opioid doses provided [35,[51][52][53][54][55][56][57].
Strengths and limitations
The strengths of this two-armed, cluster-randomised controlled trial rest on the large sample size and the real-world population-based prehospital data individually merged with validated national registries [24,25]. We mitigated the risk of unobserved secular effects on estimates by adding a non-historical control group and adopting a pragmatic, cluster-randomised design, reflecting a real-life delivery of the intervention. We undertook a stratified randomisation in order to mitigate the risk of unbalanced patient samples and cluster-specific characteristics at the ambulance station level. However, the study also has a number of limitations that need to be addressed. First of all, similar to other prehospital investigations into pain management, pain scores were missing in our study (up to 27.8%) and this may have introduced bias. Depending on study design and patient selection, the proportion of patients with missing pain data in other prehospital studies ranges from 15% [58], 25-35% [5,28,35,36,38], 40-60% [30,[59][60][61] to 70-80% [52,62]. Missing data on pain scores impose a potential threat to the validity of the findings and should be taken into account when the results are interpreted. We presented characteristics of patients with and without pain scores and found that patients with incomplete pain scores were older, had more comorbidities and received lower doses of fentanyl (an initial but no subsequent pain score at hospital arrival was recorded for 307 patients in the standard group and 292 in the liberal group).
Second, the implementation of the liberal protocol was left to the discretion of the enrolled ambulance station leaders. This approach thus reflects the effect of protocol changes when implemented in real-life settings but may also partly explain the relatively small difference observed between the groups.
Third, our study was probably not powered to detect small differences in abnormal vital parameters as proxies of adverse effects, as these events are infrequent, and a risk of type II error therefore exists. As the most conservative example, 7093 patients in each arm would have been required in order to detect an occurrence of hypotension of 3% among patients treated under the standard protocol compared to 4% among patients treated under the liberal treatment protocol.
Last, the occurrence of fentanyl side effects can be difficult to quantify precisely with discrete measures. Exemplified by respiratory depression, hypoxemia may very well be correlated with oxygenation as measured by pulse oximetry when patients are breathing at atmospheric oxygen levels; however, this measure will be affected by factors such as supplemental oxygen therapy or peripheral vasoconstriction [63]. The presence or absence of abnormal vital parameters should therefore be interpreted in the light of these precautions.
Conclusion
Liberalising an intravenous fentanyl treatment protocol applied by EMTs/PMs resulted in slightly more patients having sufficient pain relief at hospital arrival compared to patients treated under a standard treatment protocol. Fentanyl doses were conservatively administered in both groups and the high overall proportion of patients with insufficient pain relief suggests that more should be done to ensure protocol adherence. No differences in the occurrence of abnormal vital parameters were observed between the two groups, suggesting that future efforts in optimising intravenous fentanyl protocols for non-physician staff can be made safely under ongoing evaluation and monitoring of the patients.
Hierarchical Concept to Build Value Co-Creation Gibson Guitar Community in Indonesia
This research connects the Consumer Culture Theory (CCT) perspective with the Service-Dominant Logic (S-D Logic) theory to show how to deeply understand brands through the eyes of consumers and the co-creation of brand culture to create value. The research method used is qualitative with ethnographic research design and in-depth interviews with key informants and the data analysis process using NVIVO 12 Plus to get comprehensive results. The final result of this study found the right model concept in implementing value co-creation hierarchically to support the sustainable growth of the Gibson Guitar brand.
INTRODUCTION
Through 2022, the musical instrument market experienced strong growth, driven by a surge in interest in music and more people taking up musical hobbies. This resulted in a substantial increase in sales across a wide range of musical instruments, including guitars, pianos, keyboards, drums, and wind instruments. The overall revenue generated by the musical instruments market in 2022 stands at USD 38.20 million, a significant 10.7% increase over the previous year's revenue of USD 34.50 million. The musical instrument market has a wide array of players, ranging from well-known brands such as Gibson, Fender, Yamaha, Roland, and Steinway & Sons to smaller boutique manufacturers specializing in high-end and custom instruments. Each segment of the industry serves a unique customer base, from professional musicians and artists to enthusiasts and beginners.
The development of technology and information is currently moving very quickly and has a significant effect on every aspect of business, not least the music industry and, in particular, musical instruments. As a result, the music industry must develop sophisticated methods to use fandom as a marketing tool (Gamble et al., 2019) along with the spread of transmedia storytelling (Scolari, 2009; Zeiser, 2015). User communities for music fans have gradually emerged over time and are now manifesting as user-driven marketing strategies (Gamble et al., 2019). Brand communities (Muñiz and O'Guinn, 2001; Schau et al., 2009) can be both brand-created communities and independent fan-based communities (Bagozzi and Dholakia, 2002). Guschwan (2012) uses the term 'brandom' to describe brand-controlled fan communities. There is a wealth of literature on fans, fandoms and consumer fandoms (Duffett, 2013; Hills, 2002; Hao, 2020), but there is still a lack of knowledge on the perspectives and motivations of music fans to take part and co-create in the highly commercialized and strategic music market, especially in brand communities (Baym, 2012).
Co-creation indirectly increases customer engagement, customer loyalty and customer interaction with the company, which provides long-term positive consequences beyond the co-creation results created in the near future (Syaukat, 2012). Prahalad and Ramaswamy (2004) state that customer innovation experiences can be generated from the value co-creation process by accommodating a heterogeneous group of consumers, both active and passive, accommodating consumer community involvement, and engaging customers emotionally and intellectually. In this research, the author examines value co-creation among the community elements of the guitar brand from Nashville, United States, namely Gibson Guitar, in order to understand the characteristics of the community elements of the guitar-brand music industry and the relationships between community elements in Indonesia. There are three reasons for choosing Gibson Guitar as the object of research. First, Gibson Guitar is a world-renowned guitar brand that has existed for more than 127 years and has become a benchmark guitar brand for musicians, especially guitarists, around the world. Second, Gibson Guitar experienced bankruptcy in 2018 and then recovered and reaffirmed the Gibson brand image by creating a new fan community through social media channels (YouTube, Instagram, website), which quickly grew to include fans from all over the world, Indonesia included. Third, this case provides a rich empirical context, as the company uses social media freely to communicate with its fans, and fans are active in their relationships with the company, the artists and among themselves, as recommended by Hine (2015).
This research connects the Consumer Culture Theory (CCT) perspective on the Gibson Guitar brand community elements in Indonesia with Service-Dominant Logic (S-D Logic) theory to show how to deeply understand the brand through the eyes of consumers and the co-creation of brand culture so as to create value. Founded in 1894, this guitar brand from Nashville, Tennessee has accompanied many professional musicians over the years thanks to the evolution of its products. In this research, the author conducts in-depth research and analysis on how the elements of the Gibson Guitar brand community in Indonesia can hierarchically build value co-creation, and on how the roles of those community elements hierarchically interact and facilitate each other in building value co-creation for Gibson Guitar, one of the legendary guitar icons around the world.
THEORETICAL REVIEW
To facilitate the flow of this theoretical review, the researcher presents the following scheme (source: data processed by researchers, 2023).
Consumer Culture Theory / CCT
According to Arnould (2018), Consumer Culture Theory (CCT) is the scientific study of consumer choice and behavior from a social and cultural perspective, as opposed to a psychological and economic one. CCT refers to a set of theoretical perspectives that address the dynamic relationship between consumer behavior, markets, and cultural significance. Consumer culture is considered as "the social order in which the relationship between cultural and social resources, between symbolic resources and the material and meaningful ways of life on which they depend, is mediated by the market". Consumers are part of an interconnected system of commercially produced product images that they use to construct their identities and guide their relationships with others. CCT is a field of inquiry that seeks to explore the complexities of consumer culture. Instead of treating culture as a fairly homogeneous system of common meanings, ways of life and values shared by members of society, CCT explores the significant heterogeneity and overlap of cultural groups that exist within broader historical and social frameworks shaped by globalization and market capitalism.
Service-Dominant Logic (S-D Logic)
The emergence of Service-Dominant Logic offers a new way of looking at the marketing concept. This new marketing model shifts the concept from Goods-Dominant Logic (G-D Logic) to Service-Dominant Logic (S-D Logic). The main elements of Goods-Dominant Logic (G-D Logic) are physical output and discrete transactions, while Service-Dominant Logic (S-D Logic) emphasizes intangibility, exchange processes and relationships (Tjiptono & Chandra, 2011) as key factors. Activities in Service-Dominant Logic (S-D Logic) are customer-oriented, with customers playing a role as part of value co-creation.
Community
A community is a component of society that shares information on a particular topic. Its formation is horizontal because it is carried out by individuals who occupy an equal position. The unifying force of a community lies primarily in the common interest in meeting the needs of social life, often based on similarities in cultural, ideological, and socio-economic contexts. Communities today are becoming increasingly important in the world of marketing, and the emotional bond between influential community members is crucial for a brand. Put simply, hierarchy, as the term is commonly used, describes a level, degree or tier of structure. Hierarchy is essentially universal and can be felt by everyone, not only in relation to political interests; in every corner of life there is always a hierarchy that sets the boundaries. Hierarchy itself is divided into two types, namely formal hierarchy and informal hierarchy. The difference between formal and informal hierarchies provides important background for the ways in which individuals work within multiple hierarchies simultaneously. In sociological theory, the elements of the social stratification system in society are status and role.
Value Co-Creation
Value co-creation has attracted much academic attention since the papers by Prahalad and Ramaswamy (2004) and Vargo and Lusch (2004), published in the same year. Prahalad and Ramaswamy (2004) view value co-creation as a social change in which customers today are more educated, connected and active due to easy access to information, the influence of globalization, networking and empowerment, and their own experiences. Prahalad and Ramaswamy (2004) define value co-creation as the joint creation of value by companies and customers, the aim of which is not for the company to please customers, but for customers to collaborate in building service delivery experiences according to their needs. Meanwhile, Vargo and Lusch (2004) view value co-creation as a result of the evolution of Service-Dominant Logic (S-D Logic) marketing theory, which focuses on customizing customer needs and on how companies can provide their services to suit their customers' needs. Service-Dominant Logic (S-D Logic) focuses on operant resources (dynamic resources such as people), which are resources able to act on operand resources (static physical resources such as machines and raw materials) or even on other resources to create value. The framework described above shows how this research was conducted: it shows the connections between aspects in seeing and analyzing how the elements of the Gibson Guitar brand community in Indonesia can hierarchically build value co-creation. The analysis then continues by examining how the roles of the community elements hierarchically interact and facilitate each other in building value co-creation. The tools used to assist this analysis process are interview protocols and NVivo 12 Plus, one of the most accurate tools for obtaining analysis results from qualitative research.
METHODOLOGY
To gain an in-depth understanding of how the elements of the Gibson Guitar brand community in Indonesia interact and facilitate each other in building value co-creation, the researcher uses an ethnographic approach. An ethnographic approach was chosen because, in examining the community elements of the Gibson Guitar brand in Indonesia, the researcher hopes to obtain comprehensive explanations from the informants interviewed while being directly involved in the same social situation, so that the information provided is not limited to a certain scope but can cover all interactive activities related to the formation of hierarchies and the building of value co-creation for the Gibson Guitar brand in Indonesia.
In qualitative research, the early stages of the study do not yet provide a clear picture of the aspects of the problem under investigation. The research focus is developed while data are being collected, a process known as "emergent design". Everything related to data collection continues until the research is considered complete. The analysis in this study is also assisted by a software application, namely NVivo 12 Plus for Windows.
Word Frequency Query
The researcher reduced the data by looking at the most frequently discussed topics across all data imported into NVivo 12 Plus. Based on the results of the Word Frequency Query feature of NVivo 12 Plus across the imported data sources, the word "personal" is the most frequent, accounting for 0.48% of all research data sources, followed by the words "now" (0.42%), "territory" (0.41%), "community" (0.34%), and "distributor" (0.32%). The following figure shows the search results for the most frequently discussed words during the interview sessions.
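The Word Frequency Query essentially tallies how often each word occurs across the imported sources and expresses it as a share of all words. A minimal illustration of the same kind of computation is sketched below; the transcript file names are hypothetical, and the sketch does not replicate NVivo's stop-word, stemming or grouping settings.

# Simple word-frequency tally over interview transcripts, expressed as a percentage of
# all words -- an illustration of what NVivo's Word Frequency Query reports.
# File names are hypothetical; stop-word handling and stemming differ from NVivo.
import re
from collections import Counter
from pathlib import Path

def word_frequencies(paths, top_n=10):
    counts = Counter()
    for path in paths:
        text = Path(path).read_text(encoding="utf-8").lower()
        counts.update(re.findall(r"[a-z']+", text))
    total = sum(counts.values())
    return [(word, count, 100.0 * count / total) for word, count in counts.most_common(top_n)]

if __name__ == "__main__":
    for word, count, pct in word_frequencies(["transcript_1.txt", "transcript_2.txt"]):
        print(f"{word:<15} {count:>5}  {pct:.2f}%")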
Value Co-Creation in Gibson Guitar Brand Community Element
Value co-creation among the community elements of the Gibson Guitar brand in Indonesia has not been implemented thoroughly because business actors currently still hold the traditional view described by Goods-Dominant Logic (G-D Logic) theory, in which marketing focuses on operand resources, or goods, as units of exchange. This is evidenced by the results of the analysis using the NVivo 12 Plus application, where users (artists and consumers) express issues in the form of discourse around the products, arising from interactions and dialogue between users, as shown in the Word Tree. According to the collaborative marketing view of Service-Dominant Logic (S-D Logic), consumers are an operant resource and should be invited to collaborate in value co-creation, or treated as an endogenous influence on the value to be created. The interrelated relationships involved in building value co-creation can be seen in the communication and interaction carried out by artists and consumers, offline (direct face-to-face visits) or online through social media, with retailers and distributors. This does allow various types of information conveyed by users (artists and consumers) regarding Gibson Guitar brand products to be quickly received by related parties such as retailers and distributors; however, in practice, although the company can now be reached directly through social media channels such as Instagram, YouTube or its official website, the business system run to date is still one-way, as in Goods-Dominant Logic (G-D Logic) theory. What is interesting from the in-depth interviews and the NVivo 12 Plus analysis is the new finding of another element in the Gibson Guitar brand community, namely collectors.
The Role of Each Element of the Gibson Guitar Brand Community in Indonesia in Building Value Co-Creation
The factors involved in building value co-creation for the Gibson Guitar brand in Indonesia cannot be separated from the role of each element of the community. Based on the results of the in-depth interviews, these consist of: 1. Company. Gibson Brands, as the principal and brand holder of the Gibson Guitar brand, has the main role of owning the assets in the form of trademarks and the vital role of being the place where production activities take place, so that the products produced can reach consumers.
Artist (Public Figure)
The artists referred to in this study are informants, namely guitarists who are public figures recognized by the general public. Even though they are not official brand ambassadors of Gibson Guitar, their role in building value co-creation is very important: by using the products and being seen by the public, they can lead opinions and perceptions about Gibson Guitar.
Distributor
The distributor serves as an intermediary, the party who buys products directly from producers or is the first hand after producers in a country or region. Distributors also sell products on to retailers (dealers) or directly to consumers (end users). In other words, distributors act as intermediaries between producers and consumers.
Retailer
The retailer connects distributors and consumers, provides information to principals or distributors regarding consumer preferences, market demand, competitor products and market trends, and helps promote products and educate consumers about them. The most important part of the retailer's role is to facilitate consumer access to Gibson Guitar brand products.
Consumer
Acts as a party who uses or consumes Gibson Guitar products and helps preserve the Gibson Guitar brand either directly or indirectly by using the guitar and publicizing it either offline or online through social media.
Gibson Guitar Brand Community Element Business Hierarchy
Analysis of the informants' answers in the interview sessions shows that status and role are the main components in the formation of business hierarchies among the Gibson Guitar brand community elements, allowing the researcher to draw more specific conclusions about the order of the business hierarchy among these elements, both globally (worldwide) and in Indonesia. Based on the recapitulation of the informants' answers, the majority state that the company and the artist guitar-hero figures (brand ambassadors) are very important factors for the progress of the Gibson Guitar brand; the distributor is then considered important as the party responsible for product distribution in its home country. Furthermore, the retailer is considered the element most likely to interact face-to-face (offline) or online with consumers, so information related to Gibson Guitar brand products is received more quickly, and the publication of product use gains the most influence through consumers' social media posts. Finally, based on these answers, the researcher concludes that the business hierarchy model of the Gibson Guitar brand community elements globally (worldwide) and in Indonesia is as shown in the corresponding figure. This business hierarchy is an informal hierarchy based on the status and role of each element of the Gibson Guitar brand community, drawn from the conclusions of the interviews with informants, where business stakeholders still adhere to a business perspective in line with Goods-Dominant Logic (G-D Logic) theory, namely marketing that focuses on operand resources, or goods, as units of exchange.
Hierarchical Value Co-Creation Model Concept in Gibson Guitar Brand Community Element in Indonesia
The informants' answers in the interview sessions show that communication and interaction play important roles as the main elements in efforts to build value co-creation among the community elements of the Gibson Guitar brand in Indonesia, and they answer the research question regarding how these community elements can hierarchically build value co-creation, allowing the researcher to draw more specific conclusions about the concept of a hierarchical value co-creation model for these community elements. Based on the recapitulation of the informants' answers, various kinds of discourse between users (artists and consumers) regarding matters related to the Gibson Guitar brand in Indonesia are obtained which may be overlooked by the parties involved in the Gibson Guitar brand business, so that value co-creation cannot be applied thoroughly among the community elements of the Gibson Guitar brand in Indonesia. The community elements currently still operate according to the pre-existing business hierarchy, have not been coordinated as a whole, and still move according to the interests of each party. On that basis, the researcher concludes that the hierarchical value co-creation model for the Gibson Guitar brand community elements in Indonesia is as shown in the corresponding figure. Relationships exist everywhere through the process of interaction between two or more parties; however, the quality of a relationship emerges and originates from the experience of interacting together over time. Marketing communication is the process by which marketing activities and marketing resources are transformed into economic results. The researcher also sees dialogue as an important basis for pursuing authentic innovation and creativity in the marketplace, within firms and between firms. Dialogue being essentially relational in nature, it is also useful in approaching knowledge development within and between firms as well as the creation of new and previously unknown knowledge positions.
Engaging in dialogic interaction is not unidirectional, self-serving or achieved by control. Rather, the goal is open-ended, discovery-oriented and value-creating. The researcher emphasizes that relationships can provide useful structural support to sustain value-creation activities. Service-Dominant Logic (S-D Logic) encourages the sharing of new ideas and new knowledge within the firm as well as with key customers and suppliers. Seen in this way, marketing innovation is a consequence of 'breaking free' from the Goods-Dominant Logic (G-D Logic) mental model that no longer serves the continuous renewal of strategies and competencies.
Service-Dominant Logic (S-D Logic) takes the view that the assessment of customer value is related to the time and place of the service experience. In contrast, Goods-Dominant Logic (G-D Logic) assumes that customer value is appropriately predetermined in the accumulation of various resources and functions used to produce material goods for sale. Furthermore, the researcher understands service as a type of social interaction that aims to improve one's situation and is thus a valuable avenue for improving the quality of life. In this sense, value is ultimately a social judgment about the desirability of benefits to society, the associated costs and the preferential calculus, and it includes attitudes towards the natural environment and technology. The researcher's analysis is in line with research on co-creation experiences in the music business conducted by Harriman Saragih (2018), which states that music consumers can also be seen as partners, as they also contribute to value creation during the ongoing development and commercialization phases. The concept of co-creation has been, and therefore can be, practiced in various phases of the value chain in the music business, i.e. development, promotion, production or distribution.
Researcher Perspective
In relation to the research findings on value co-creation among the community elements of the Gibson Guitar brand, which are rich in creations and phenomena, it is clear that value co-creation is very important for the sustainability of the Gibson Guitar brand; the researcher therefore offers a perspective from two sides. Koesoemadinata (2000) defines exploration as a scientific and technical activity to find out about an area, region, situation or space whose existence was previously unknown. Scientific exploration contributes to the treasury of science. Exploration is not only carried out in a geographical area; it can also take place in the unexplored depths of the sea, in space, or even within the insight of the mind (exploration of the mind).
Source : Data processed by researchers, 2023.
Time Frame Perspective
The time perspective, or time frame, of the Gibson Guitar brand has three periods: the past, related to old values; the present, related to reality; and the future, as a new vision. According to Yogi Yogaswara (2015), companies with adaptability have the ability to be responsive to the external environment, internal customers (employees) and external customers, by translating the demands of the business environment into action so that the company survives, grows, and develops. According to Imam Santoso (2011), change is the very nature of society; the metaphor of "social life", like social life itself, involves continuous change. The most common notion of change indicates some transition in a particular entity that occurs within a certain time. Source: Data processed by researchers, 2023.
The Art of Statement
Ontologically, the Gibson Guitar brand has existed for more than 127 years and has become a benchmark for guitar brands around the world. Epistemologically, there are elements of goodness in the community elements of the Gibson Guitar brand that are revealed in a broader contextual order, so that goodness arises because values appear within them. Furthermore, as these values grow and develop, existence follows automatically. In the axiological context, the connection between existence and epistemology is seen in the contribution of customers to the Gibson Guitar brand. "The best contribution is to be continuously present"; therefore, the researcher's analysis is that there has been an inherent shift from a performance culture to a perception culture in the community elements of the Gibson Guitar brand in Indonesia, due to the contribution of customers who are dynamized from old values to a new vision (old values are manifested into a new vision).
CONCLUSIONS AND RECOMMENDATIONS
Based on the research on the Hierarchical Concept to Build Value Co-Creation in the Gibson Guitar Community in Indonesia, the researcher can draw several conclusions as follows: 1. Value co-creation of the Gibson Guitar brand in Indonesia has not been implemented thoroughly because business people currently still adhere to the old perspective in accordance with Goods-Dominant Logic (G-D Logic) theory, where marketing focuses on operand resources or goods as units of exchange. 2. The interrelated relationships in building value co-creation can be seen in the communication and interaction carried out by artists and consumers with retailers and distributors, through offline means (direct face-to-face visits) or online through social media. This does allow various types of information submitted by users (artists and consumers) regarding Gibson Guitar brand products to be quickly received by related parties such as retailers and distributors, but in its application the business system carried out to date is still one-way, like the perspective in Goods-Dominant Logic (G-D Logic).
3. The role of each element of the Gibson Guitar brand community in Indonesia, hierarchically, in the effort to build value co-creation is as follows: a. Artist and Consumer: Apart from contributing through the purchase and use of goods, artists and consumers also play a role in preserving the Gibson Guitar brand by using, publishing or posting about it through social networks, discussions and dialogues, both in person and online, so that it becomes a discourse which can create new values for the company. b. Artist: As artists, especially guitarists, in the Gibson Guitar brand community element in Indonesia, artists are people who play a role and have the creative ability to expose new values in the Gibson Guitar brand.
c. Consumer: Acts as the party who uses or consumes Gibson Guitar products and who dynamizes and explores the values of the Gibson Guitar brand in Indonesia.
d. Retailer
Plays the role of establishing connections and of being the closest party that participates in interaction and dialogue with artists or consumers, as well as being a liaison between artists and consumers and the distributors and goods providers, smoothing the flow of Gibson Guitar brand business movements in Indonesia. e. Distributor: Serves as a facilitator and also a conductor of, or access point to, the values that drive the Gibson Guitar brand business journey in Indonesia. f. Company: Is the principal and brand holder of the Gibson Guitar brand, with the role of being the place where production activities take place so that the products produced can reach the audience, consumers and society.
In an effort to create value co-creation hierarchically, the company, as the leverage point of Gibson Guitar brand business activities, must be able to reach everything that becomes a discourse related to Gibson Guitar brand products among consumers. 4. In the end, the concept of the model formulated from this research explains that the informal business hierarchy between elements of the Gibson Guitar brand community is formed from the recapitulation of informants' answers with the Goods-Dominant Logic (G-D Logic) perspective approach, which assumes that customer value has been appropriately predetermined in the accumulation of various resources and functions used to produce material goods that can be sold. Furthermore, the concept of a hierarchical value co-creation model in the community element of the Gibson Guitar brand in Indonesia is formed based on the Service-Dominant Logic (S-D Logic) perspective, which takes the view that customer value assessments are related to the time, place and service experience, owing to interactions and communications originating from users (artists and consumers), either offline (face-to-face) or online through social media.
FURTHER STUDY
This research contributes to the understanding of co-creation and of the relationship dynamics and values developed jointly among the elements of the Gibson Guitar brand community in Indonesia, thereby creating value for the community elements both individually and collectively. Future research can develop these findings on value co-creation in the music industry with the following details: 1. Future research can examine in greater depth whether collectors affect co-creation in the music industry community.
2. Further research can also be carried out by exploring elements of the Gibson Guitar brand community in other major cities in Indonesia besides Jakarta, so that the results obtained become more generalizable.
3. Further research on value co-creation and hierarchy in the Gibson Guitar brand community elements can also involve regional representative artists and community elements in the Southeast Asia region, up to an international scale. 4. The researchers also suggest that future research further examine other possible elements in the Gibson Guitar brand community, such as session players or rehearsal and recording studios that use the Gibson Guitar brand.
Figure 3. Community Elements in the Music Industry Hierarchy. It can simply be concluded that hierarchy describes a level or rank of structure in language use. Basically, hierarchy is universal. Source: Data processed by researchers, 2023.
Figure 5. Qualitative Research Design. In researching the community elements of the Gibson Guitar brand in Indonesia with a qualitative research design, the data were analyzed in the following way (Miles and Huberman, 2014): 1. Data reduction, which is an activity of summarizing field notes by sorting out the main points related to the research problems. A summary of the various notes from the field is then arranged systematically in order to provide a sharper picture and to facilitate tracking if the data are needed again. The researchers used data reduction to facilitate data collection in the field. 2. Data presentation (display). Data presentation activities are useful for seeing the overall picture of the research results, either in the form of a matrix or coding. Source: Data processed by researchers using NVivo 12 Plus, 2023.
Figure 6. Summary of Percentage of Most Discussed Words (Word Cloud). To find out the most frequently discussed words during the interview sessions, the researcher used a Word Cloud to facilitate understanding and obtain the full picture. The following figure shows the Word Cloud of the 30 dominant words used in this research data source.
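The frequency count behind such a Word Cloud can be approximated outside NVivo with a few lines of code. A minimal Python sketch, assuming the interview transcripts are available as plain-text strings; the stop-word list and example snippets are invented for illustration and are not from this study:

```python
from collections import Counter
import re

def top_words(transcripts, n=30, stop_words=None):
    """Count word frequencies across transcripts and return the n most common words."""
    stop_words = stop_words or {"the", "and", "of", "to", "a", "in", "is", "that", "it"}
    counts = Counter()
    for text in transcripts:
        words = re.findall(r"[a-z']+", text.lower())  # deliberately simple tokenizer
        counts.update(w for w in words if w not in stop_words)
    return counts.most_common(n)

# Invented snippets standing in for the interview transcripts
transcripts = [
    "the gibson community shares values and creates value together",
    "artists and consumers discuss gibson guitars online",
]
for word, freq in top_words(transcripts, n=5):
    print(word, freq)
```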
Figure 8. Business Hierarchy in Gibson Guitar Brand Community Elements Globally and in Indonesia. Source: Data processed by researchers, 2023.
Figure 9. Hierarchical Value Co-Creation Model Concept on Gibson Guitar Brand Community Elements in Indonesia. From the results of data reduction and data presentation, researchers can then draw conclusions to verify the data so that the data become meaningful. The researchers used this data presentation to see a picture of the research. 3. Conclusion and verification. To determine conclusions that are more reasonable and no longer trial-and-error conclusions, verification was carried out throughout the research in line with member checks, triangulation and observation of audit activities, thus ensuring the significance or meaningfulness of the research results. The researchers used this method to verify clear and firm conclusions.
1. Coverage Area Perspective. Table 1. Researcher Perspective and Expert Adjustment: Coverage Area
Table 2
Researcher Perspective and Expert Adjustment Time Frame | 2023-11-23T16:05:35.419Z | 2023-10-19T00:00:00.000 | {
"year": 2023,
"sha1": "2004054292ad28ab7ac4723b52433bfdcd2f399b",
"oa_license": "CCBY",
"oa_url": "https://journal.formosapublisher.org/index.php/ijba/article/download/5572/6359",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a5ea74afd36963986c82e2fbcaa02de7e7d409ad",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
225076948 | pes2o/s2orc | v3-fos-license | PSA Kinetics as Prognostic Markers of Overall Survival in Patients with Metastatic Castration-Resistant Prostate Cancer Treated with Abiraterone Acetate
Background Abiraterone acetate (AA) is widely used in the treatment of patients with metastatic castration-resistant prostate cancer (mCRPC). However, a significant percentage of patients will still progress, highlighting the need to identify patients more likely to benefit from AA. Parameters linked to prostate-specific antigen (PSA) kinetics are promising prognostic markers. We have examined clinical and PSA-related factors potentially associated with overall survival (OS) in patients treated with AA. Methods Between 2011 and 2014, 104 patients with mCRPC treated with AA after progression to docetaxel at centers of the Catalan Institute of Oncology were included in this retrospective study. Patients were assessed monthly. Baseline characteristics and variables related to PSA kinetics were included in univariate and multivariate analyses of OS. Results Median OS was 16.4 months (range 12.4–20.6) for all patients. The univariate analysis identified the following baseline characteristics as significantly associated with OS: ECOG PS, location of metastases, time between starting androgen deprivation therapy and starting AA, time between stopping docetaxel treatment and starting AA, neutrophil-lymphocyte ratio (NLR), alkaline phosphatase levels, and PSA levels. Factors related to PSA kinetics associated with longer OS were PSA response >50%, early PSA response (>30% decline at four weeks), PSA decline >50% at week 12, PSA nadir <2.4ng/mL, time to PSA nadir >140 days, the combination of PSA nadir and time to PSA nadir, and low end-of-treatment PSA levels. The multivariate analysis identified ECOG PS (HR 37.46; p<0.001), NLR (HR 3.7; p<0.001), early PSA response (HR 1.22; p=0.002), and time to PSA nadir (HR 0.39; p=0.002) as independent prognostic markers. Conclusion Our results indicate an association between PSA kinetics, especially early PSA response, and outcome to AA after progression to docetaxel. Taken together with other factors, lack of an early PSA response could identify patients who are unlikely to benefit from AA and who could be closely monitored with a view to offering alternative therapies.
Introduction
The treatment armamentarium of metastatic castration-resistant prostate cancer (mCRPC) has increased in recent years with the FDA/EMA approval of hormonal therapies like abiraterone acetate (AA) and enzalutamide, cytotoxic therapies like cabazitaxel, 1 and bone-targeted therapies like Radium-223. 2,3 Given the increase in treatment options for mCRPC, the selection of optimal therapeutic regimens is crucial for obtaining the maximum survival benefit. 2,4 Predictive biomarkers of response can help in the selection of an effective treatment while avoiding unnecessary toxicities and excess costs associated with a potentially ineffective drug.
AA is a prodrug of abiraterone and is a selective, irreversible inhibitor of the androgen biosynthesis enzyme CYP17, which is involved in androgen production in the testes, adrenal glands and prostate tumor tissue. AA in combination with prednisone has demonstrated an increase in overall survival (OS) in patients with mCRPC and is approved for patients who progress during or after docetaxel-based chemotherapy, 5 in chemotherapy-naïve patients, 6 and in metastatic castration-sensitive patients. 7 However, approximately one-third of treated patients show primary resistance to AA, 6 highlighting the need for early identification of these patients in order to offer them an alternative therapy before progression-related clinical deterioration renders them ineligible for subsequent therapy, including chemotherapy.
Several studies have suggested a relationship between decrease in prostate-specific antigen (PSA) levels and survival, mainly in chemotherapy treated mCRPC patients. 5 However, the role of PSA decline as a surrogate marker of survival in patients treated with hormonal therapies has not been clearly established. Biosynthesis of extragonadal androgens may contribute to progression in mCRPC, which still depends on androgen receptor signals, and a relationship between PSA kinetics and response to AA has been suggested, 8 indicating that PSA kinetics may be a useful predictive biomarker of response. 2 In order to shed further light on the potential association of baseline patient characteristics and PSA kinetics with OS, we have performed a retrospective study of 104 mCRPC patients treated with AA after progression to docetaxel.
Study Design and Patient Population
This was a retrospective observational study that included 104 mCRPC patients treated at three centers of the Catalan Institute of Oncology (University Hospital of Germans Trias i Pujol, Badalona; Bellvitge University Hospital, l'Hospitalet de Llobregat; and University Hospital of Girona Doctor Josep Trueta, Girona) from August 2011 to October 2014. All patients had histologically confirmed prostate adenocarcinoma with evidence of metastatic disease and all had been previously treated with docetaxel. Patients received AA 1000mg orally once daily plus prednisone 5mg twice daily. Each treatment cycle was 28 days. Treatment was continued until disease progression according to the Prostate Cancer Working Group 2 (PCWG2) criteria, 9 death, or unacceptable toxicity.
During treatment with AA, patients were assessed every four weeks. Clinical visits included a physical examination and routine laboratory analyses, including PSA levels, albumin, alkaline phosphatase (ALP) levels, neutrophil and lymphocyte counts, and hemoglobin levels. Thoracic-abdominopelvic computed tomography (CT) and/or a bone scan were performed before starting AA therapy and during the treatment period in accordance with the protocols of each center.
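The PSA-kinetics parameters examined later in the study (early PSA response at four weeks, maximal PSA decline, PSA nadir and time to nadir) can in principle be derived directly from such serial measurements. A minimal Python sketch of that derivation, assuming an illustrative list of (day on AA, PSA in ng/mL) pairs; the thresholds follow the text, but the function and the example values are not from the study:

```python
def psa_kinetics(series):
    """series: list of (day, psa) pairs, day 0 = start of abiraterone acetate.
    Returns early response at ~4 weeks, best decline, PSA nadir and time to nadir."""
    series = sorted(series)
    baseline = series[0][1]
    # PSA value closest to day 28 (the four-week assessment)
    _, psa_week4 = min(series, key=lambda p: abs(p[0] - 28))
    early_decline_pct = 100.0 * (baseline - psa_week4) / baseline
    nadir_day, nadir_psa = min(series, key=lambda p: p[1])
    best_decline_pct = 100.0 * (baseline - nadir_psa) / baseline
    return {
        "early_psa_response": early_decline_pct >= 30.0,  # >30% decline at week 4
        "psa_response_50pct": best_decline_pct >= 50.0,
        "psa_nadir_ng_ml": nadir_psa,
        "time_to_nadir_days": nadir_day,
    }

# Hypothetical patient assessed monthly over five cycles
print(psa_kinetics([(0, 120.0), (28, 70.0), (56, 40.0), (84, 25.0), (112, 30.0)]))
```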
Statistical Analyses
Data were recorded with Microsoft Excel 2007 and statistical analyses were carried out using R v3.6.0. We built a multivariate model starting with all untransformed variables after excluding those with missing values in more than 15% of individuals (for which there were insufficient numbers of observations). Data were imported from Excel into R with the readxl R package. Prior to analysis, data wrangling and cleaning were performed with the tidyverse suite of packages (including dplyr, purrr, lubridate, stringr, and broom). Descriptive statistics, including missing data, were collected and visualized with the skimr and rms R packages. Data were described in terms of ranges, medians, frequencies, and percentages. OS was defined as the time from the start of AA treatment to death from any cause. Data for patients who had not died were censored at the last follow-up. Optimal cut-off points for possible categorical transformations were identified with the survminer R package. A univariate analysis was performed to determine the association of different factors with OS, and Kaplan-Meier plots were generated using the survival R package. Due to limitations in sample size, not all variables identified as significant in the univariate analysis could be included in the multivariate analysis. After imputing missing data with the Hmisc R package, a variable selection strategy was followed based on elastic net Cox proportional hazards model fitting via penalized maximum likelihood, using the glmnet R package. This was followed by AIC-based stepwise backward variable elimination towards model reduction. All models generated on imputed data were tested against intact data. An internal cross-validation was performed by bootstrapping with 200 iterations. Imputation, Cox modeling, stepwise model selection and model validation were done using the rms R package. Recursive tree partitioning was explored using the party R package. Significance was set at p≤0.05.
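The workflow above was run in R (survival, survminer, glmnet, rms, party). As a rough illustration only, a comparable univariate Kaplan-Meier comparison and a penalized Cox model can be sketched in Python with the lifelines package; the column names, penalty settings and toy data below are assumptions for illustration, not the study's actual variables or results:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Tiny, invented data frame (one row per patient); a real analysis needs the full cohort
df = pd.DataFrame({
    "os_months": [16.4, 8.2, 24.0, 5.1, 30.3, 12.7, 19.8, 7.4],
    "death":     [1,    1,   0,    1,   0,    1,    1,    1],   # 1 = event, 0 = censored
    "ecog_ge2":  [0,    1,   0,    1,   0,    0,    0,    1],
    "nlr_ge4":   [0,    1,   0,    1,   0,    1,    0,    1],
    "early_psa_response": [1, 0, 1, 0, 1, 1, 1, 0],
})

# Univariate step: Kaplan-Meier curves and a log-rank test by early PSA response
kmf = KaplanMeierFitter()
for label, grp in df.groupby("early_psa_response"):
    kmf.fit(grp["os_months"], grp["death"], label=f"early response={label}")
    print(label, kmf.median_survival_time_)
resp, nonresp = df[df.early_psa_response == 1], df[df.early_psa_response == 0]
print(logrank_test(resp.os_months, nonresp.os_months,
                   resp.death, nonresp.death).p_value)

# Multivariable Cox model with an elastic-net-style penalty (penalizer/l1_ratio illustrative)
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()
```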
Patient Characteristics and Treatment
The main patient characteristics are detailed in Table 1. Median age was 74 years (range, 49-86), 82% of patients had ECOG PS 0-1, and 51.9% had a Gleason score ≥8. Bone metastases were present in 49% of the patients, while 11.6% had only lymph node metastases and 9.7% had visceral metastases. All patients had previously been treated with docetaxel and 32% had received two or more prior chemotherapy regimens.
Baseline Characteristics and OS
In the univariate analysis, the following baseline characteristics were associated with longer OS: ECOG PS <2 (p<0.001) (Figure 2A); only lymph node metastases (p=0.04); interval between starting androgen deprivation therapy and starting AA ≥20 months (p=0.006); interval between stopping docetaxel and starting AA ≥3 months (p=0.04); NLR <4 (p<0.001) (Figure 2B); ALP levels ≤ULN (p=0.03); and low/intermediate PSA levels (p<0.001). No other significant association between baseline characteristics and OS was observed (Table 2). (Table 4). Based on these findings, we generated a decision tree for survival analysis in mCRPC, which indicated that baseline ECOG PS and NLR combined could predict OS (Figure 5).
Discussion
Several treatment options are now available for mCRPC. Novel hormonal therapies, including AA, have demonstrated activity in patients previously treated with docetaxel, but other therapies, such as cabazitaxel 1 or radium-223, 3 can be useful in patients refractory to hormonal therapies. Primary resistance to AA in predocetaxel mCRPC patients is near 20%, and the identification of patients with primary resistance may help avoid ineffective treatments. 10 In the present study, we have examined the impact on OS of several baseline characteristics and parameters related to PSA kinetics in patients with mCRPC treated with AA. The multivariate analysis identified two baseline characteristics (ECOG PS and NLR) and two variables associated with PSA kinetics (early PSA response and time to PSA nadir) as independent markers of OS.
Our patients were treated off-study, yet OS was within the range of that reported in the COU-AA-302 trial, 11 despite the fact that a higher percentage of our patients had PS 2 (17% vs 10%). In the COU-AA-301 trial, 12 ECOG PS >1, presence of liver metastases, time between starting androgen deprivation therapy and starting AA >36 months, low serum albumin, and high serum ALP and LDH levels were associated with a poorer outcome to AA. 13 Our findings regarding ECOG PS, time between starting androgen deprivation therapy and starting AA, and ALP levels were in line with these results. In addition, however, we found that low NLR, low baseline PSA levels, the absence of bone metastases, and high hemoglobin levels were also associated with longer OS.
NLR is a marker of inflammation and immune status related to carcinogenesis, and its role as a potential prognostic and predictive marker has been evaluated in many solid tumors, including mCRPC. In a study carried out at the Princess Margaret Cancer Centre, 14 NLR ≥5 was significantly associated with PSA response and survival in 116 patients with mCRPC treated with AA, 53% of whom had previously been treated with docetaxel. NLR has also been identified as a prognostic marker in patients with mCRPC treated with second-line chemotherapy. 15 Furthermore, a persistent NLR <3 during treatment with enzalutamide was associated with greater treatment efficacy in patients with mCRPC. 16 Taken together with these previous results, our finding in the present study that a high NLR was related to shorter OS suggests that NLR is a promising prognostic marker and leads us to postulate that NLR in combination with ECOG PS can be a useful prognostic marker in patients with mCRPC. Whereas in patients with PS 0 and PS 2, survival is well defined, in the intermediate group of patients with PS 1, NLR can be useful in selecting which patients are likely to benefit from AA, especially as NLR assessment can easily be incorporated into routine clinical practice.
In contrast, age and Gleason score were not associated with OS. In fact, AA has demonstrated activity in mCRPC regardless of patient age, 5 and several studies have found that Gleason score may well lose prognostic impact in heavily pretreated mCRPC patients. 13,[17][18][19] Additionally, the number of prior chemotherapy lines was not associated with OS. More than 50% of our patients who had received a second line of chemotherapy before AA were treated with docetaxel, in accordance with common practice at that time, and the interval between ending docetaxel and starting AA was more highly associated with OS than the number of previous lines.
The prognosis of mCRPC patients ranges from a few months to several years and is influenced by multiple factors. Several studies have identified predictive biomarkers of resistance to hormonal therapies. For example, Wnt pathway genes have been associated with resistance to AA 20 and baseline circulating tumor cell counts have also shown promise as prognostic markers. 21 Our findings on PSA kinetics can further refine these biomarker-based data and can be used in routine clinical practice to identify patients likely to be resistant to AA. PSA nadir and time to PSA nadir were also associated with OS in our study. Previous studies have also reported the impact of these factors on OS, in both chemotherapy- and hormonotherapy-treated patients. [22][23][24] Along these same lines, our univariate analysis revealed an association of PSA nadir and time to PSA nadir with OS. This effect may be due to stroma-epithelial interactions and "tumor regulating fibroblasts", which are now being studied with promising results in vitro and in vivo. 25 Several studies have found that PSA decline can be a marker of response to treatment and of longer OS, 17,[26][27][28][29][30] since PSA levels reflect disease burden. In two Phase III trials of AA -in chemotherapy-pretreated 5 and in chemotherapy-naïve 13 patients -PSA kinetics were significantly associated with survival. 31 In our study, we also observed that PSA kinetics could predict the efficacy of treatment with AA. Importantly, we found a relationship between early PSA response and OS. This finding is in line with previous retrospective studies carried out in a limited number of patients, where early PSA changes were associated with response and survival to AA and enzalutamide. In a study of patients treated with enzalutamide after docetaxel, a PSA increase >20% at week 4 was associated with shorter progression-free survival and OS, while a PSA decline >30% was only associated with progression-free survival but not OS. 32 In a study of patients treated with AA before or after docetaxel, PSA decline at week 4 was significantly associated with response at week 12 and with longer OS, both in patients who had previously received docetaxel as well as in chemonaive patients. 33 Furthermore, Facchini et al reported that a very early PSA response (>50% decline at day 15 of AA treatment) was associated with longer progression-free survival and OS. 8 Finally, Caffo et al found that patients treated with AA or enzalutamide who did not show a PSA decline >50% at four weeks had a 50% risk of progression within three months, compared to an 18% risk for patients with an early PSA response. 6,34 The ability to predict the efficacy of AA at an early time point can be crucial in patient management, since this allows a prompt change in treatment strategy, preventing clinical deterioration which can hinder the administration of subsequent therapies. At present, the majority of physicians switch therapy based on clinical progression, 35 and while the absence of early PSA response may not in itself be a sufficient rationale for changing therapy, it will help identify those patients who require closer monitoring.
Several studies have suggested prognostic models in mCRPC. Halabi et al 36 classified patients in a clinical trial into high- and low-risk groups based on several baseline characteristics, while Armstrong et al 37 developed a risk classification scheme based on pain, visceral metastases, anemia, and bone scan progression that was
As with all retrospective studies, our results must be interpreted with caution and should be validated in prospective studies including a larger number of patients. Our study has several limitations, such as the limited number of patients included and a number of patients with some missing data. Despite these limitations, however, our findings indicate that NLR can be a useful baseline marker to predict outcome in patients with mCRPC treated with AA and that early PSA response to AA may be a useful marker of treatment efficacy. While an early change in PSA is generally not a sufficient reason for modifying treatment in clinical trials, 9 we propose to closely monitor patients with an early PSA increase during AA treatment, as well as taking into account other prognostic and predictive markers, when deciding on next treatment options.
Patient Consent
For this type of study, formal consent is not required.
Data Sharing Statement
Supporting data for this study is freely available upon request to the corresponding author.
Ethics Approval
This study was approved by the Ethics Committees of the Catalan Institute of Oncology at University Hospital of Germans Trias i Pujol (PI . This approval covers all three centers of the Catalan Institute of Oncology. The study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. All patient data were anonymized and | 2020-10-28T05:08:45.060Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "61cfd5cb319906e764dccee74034bd3007ed2bee",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=62644",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "61cfd5cb319906e764dccee74034bd3007ed2bee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211262027 | pes2o/s2orc | v3-fos-license | miR‐21‐5p promotes cell proliferation and G1/S transition in melanoma by targeting CDKN2C
Human melanoma is a highly malignant tumor originating from cutaneous melanocytes. The noncoding RNA microRNA (miR)‐21‐5p has been reported to be expressed at high levels in malignant melanocytic skin tissues, but its potential functional role in melanoma remains poorly understood. Here, we explored the cellular effects of miR‐21‐5p on melanoma in vitro and the underlying mechanisms. Quantitative real‐time PCR was used to show that miR‐21‐5p is significantly up‐regulated in clinical samples from patients with melanoma as compared with adjacent noncancerous tissues. Overexpression of miR‐21‐5p significantly enhanced, whereas knockdown attenuated, cell proliferation and G1/S transition in melanoma cell lines (A375 and M14). Luciferase reporter assays were used to show that the cyclin‐dependent kinase inhibitor 2C (CDKN2C) is a downstream target of miR‐21‐5p. Furthermore, miR‐21‐5p mimics resulted in a decrease in CDKN2C expression, and CDKN2C expression was observed to be inversely correlated with miR‐21‐5p expression in melanoma tissues. Rescue experiments were performed to show that overexpression of CDKN2C partially reversed the effects of miR‐21‐5p up‐regulation on A375 cells. Consistently, knockdown of CDKN2C abolished the effects of miR‐21‐5p down‐regulation on A375 cells. Overall, our studies demonstrate that miR‐21‐5p can promote the growth of melanoma cells by targeting CDKN2C, which may induce G0/G1 phase arrest of melanoma cells.
Melanoma is considered the most aggressive form of human cutaneous neoplasm, with markedly increased incidence in recent decades [1]. Currently, surgical intervention for early-stage disease and systemic chemotherapy for locally advanced disease have been the mainstays of treatment for patients with melanoma [2,3]. Although these treatments have a certain beneficial effect, the clinical outcome remains unsatisfactory, with a 10-year survival rate still less than 10% [4]. Advances in molecular biology now make it possible to investigate the precise molecular mechanisms underlying melanoma pathogenesis.
MicroRNAs (miRs) are small (22-26 nucleotides), noncoding RNAs that can regulate gene expression primarily by recognizing and binding to potential targeting sites in the 3′UTR of target mRNAs, and are thereby involved in diverse biological processes [5,6]. One possible mechanism underlying melanoma pathogenesis is an altered miRNA expression profile. For example, miR-373 expression was reported to be highly up-regulated in melanoma tissues and able to promote cell migration by negatively targeting salt-inducible kinase 1 [7]. Noori et al. [8] demonstrated that miR-30a inhibits melanoma tumor metastasis by targeting ZEB2 and E-cadherin. Recent studies have highlighted the importance of miR-21-5p in tumor progression, specifically in the process of cell proliferation, including in non-small-cell lung cancer [9], colon cancer [10] and ovarian cancer [11]. Interestingly, increased miR-21 expression has been observed during the transition from a benign melanocytic lesion to malignant melanoma [12]. Similarly, miR-21-5p has been reported to be overexpressed in malignant melanocytic skin tissues compared with benign tumors by global miRNA profiling [13], as well as by next-generation sequencing by Babapoor et al. [14] and Latchana et al. [2]. Functionally, miR-21 has been demonstrated to enhance the invasiveness of melanoma cells by inhibition of tissue inhibitor of metalloproteinases 3 [15]. However, how miR-21-5p regulates cell proliferation in melanoma cells remains poorly understood.
Accumulating evidence indicates that deregulation of the cell cycle at the G1/S restriction point (cell growth), S phase (DNA replication) and G2/M phase (mitosis) is considered a critical issue in virtually all types of human tumors, including melanoma [16]. Cyclin-dependent kinase inhibitor 2C (CDKN2C, also named p18) is an important cell-cycle regulator and cyclin-dependent kinase inhibitor (CDKI) [17,18]. Despite the negative regulatory role of CDKN2C in G0/G1 cell-cycle progression, the functional role of CDKN2C in cell-cycle regulation in melanoma remains unknown.
The main objective of this study was to reveal the regulatory role of miR-21-5p and CDKN2C in melanoma cell proliferation and cell-cycle progression. Furthermore, we further evaluated whether miR-21-5p regulates cell proliferation and cell cycle by directly targeting CDKN2C in melanoma cells. Our study may provide new evidence to elucidate the role of miR-21-5p and CDKN2C in melanoma progression.
Tissue samples
Melanoma tissues, along with matched adjacent tissues, were collected from 20 patients with melanoma who underwent surgical resection at the Affiliated Hospital of North Sichuan Medical College. This cohort included 12 male and 8 female patients, with a median age of 52 years (range: 34-72 years). These 20 patients with melanoma were diagnosed as stage I (n = 5), II (n = 8) or III (n = 7) according to the consensus of the tumor node metastasis staging system. Before surgery, the patients were confirmed not to have received any antitumor treatment. Collected fresh tissue samples were snap frozen in liquid nitrogen with subsequent storage at −80°C until use. We obtained written informed consent from all patients. The study protocol conformed to the ethical guidelines outlined in the Declaration of Helsinki and was approved by the research ethics committee of Affiliated Hospital of North Sichuan Medical College (approval no. 2014121M).
Cell culture
Two human melanoma cell lines, A375 and M14, were purchased from ATCC (Manassas, VA, USA) and cultured according to the supplier's information in Dulbecco's modified Eagle's medium (Gibco, Rockville, MD, USA) supplemented with 10% FBS (Gibco), which were both maintained in a humidified atmosphere containing 5% CO 2 at 37°C.
Oligonucleotide transfection
The miR-21-5p mimic, inhibitor and their negative controls (NCs) were purchased from GenePharma Co. Ltd. (Shanghai, China). Small interfering RNA for CDKN2C (si-CDKN2C) and its NC (si-NC) were also synthesized by GenePharma Co. Ltd. The CDKN2C overexpression plasmid was constructed by inserting CDKN2C cDNA into a pcDNA3.1 vector by GenePharma Co. Ltd. For transfection, A375 or M14 cells were seeded at a density of 2 × 10^5 cells per well in a six-well culture dish, and transfection was performed when 80% confluence was achieved. A total of two 8-µL (500 ng·µL−1) plasmids (pcDNA3.1-CDKN2C, si-CDKN2C) or mimics (miR-21-5p mimic, inhibitor and NC) and 8 µL Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) were suspended in 100 µL Opti-MEM (Gibco), and the final concentration of CDKN2C plasmids or mimics used was 1 mg·mL−1. Then, the mixture was added to the cell culture and incubated for 48 h.
Quantitative real-time PCR
After extracting RNA from tissues and cell lines using TRIzol reagent (Invitrogen), we reverse transcribed the RNA samples into cDNA with the M-MLV RT kit (Promega Corporation, Madison, WI, USA). Quantitative real-time PCR was done using the iQ SYBR Green Supermix Kit (Bio-Rad, Hercules, CA, USA) on a 7900HT Fast Real-Time PCR system (Thermo Fisher Scientific, Inc., Waltham, MA, USA). The PCR conditions were as follows: a preheating step of 10 s at 95°C, followed by 40 cycles of denaturation at 95°C for 5 s and annealing and extension at 60°C for 20 s. According to Livak and Schmittgen's method [19], relative gene expression was calculated from the 2^−ΔΔCt values.
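The relative-quantification step is easy to reproduce. A minimal Python sketch of the 2^-ΔΔCt calculation of Livak and Schmittgen, using invented Ct values and an assumed reference gene (e.g. U6); none of the numbers come from the study:

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method of Livak and Schmittgen."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # ΔCt in the sample of interest
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt in the calibrator (control)
    dd_ct = d_ct_sample - d_ct_control                   # ΔΔCt
    return 2.0 ** (-dd_ct)

# Invented Ct values: miR-21-5p vs an assumed reference (e.g. U6) in tumor and adjacent tissue
print(fold_change(ct_target_sample=22.1, ct_ref_sample=18.0,
                  ct_target_control=24.9, ct_ref_control=18.2))  # >1 indicates up-regulation
```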
Cell proliferation assay
The proliferation of A375 and M14 cells was evaluated with the Cell Counting Kit-8 (CCK-8) detection kit (Beyotime Biotechnology, Shanghai, China). In brief, the transfected cells were seeded at 3 × 10^3 cells per 6.4-mm dish and cultured overnight. Every 24 h, 10 µL of CCK-8 solution was added to each well, and cells were incubated at 37°C for another 2 h. To determine the absorbance (A) values of each well, we measured absorbance at 450 nm using a microplate reader (Thermo Fisher Scientific).
Cell-cycle analysis
After 48-h transfection, cell-cycle distribution was analyzed with the Cell Cycle Detection Kit (KeyGen Biotech, Nanjing, China). In brief, cells were collected and fixed in 70% ethanol overnight at 4°C. Following washing with PBS, cells were treated with RNase A (10 U·mL−1) and stained with propidium iodide (50 µg/mL) at room temperature for 30 min. Then, the samples were measured using a BD FACSCalibur flow cytometer equipped with MODFIT software (BD Biosciences, San Jose, CA, USA). The proportion of cells in the G0/G1, S and G2/M phases was quantified in each group.
Western blot
Total cellular protein was isolated using radioimmunoprecipitation assay lysis buffer (Beyotime Biotechnology). Thirty micrograms of protein was separated by 10% SDS/PAGE and then transferred onto poly(vinylidene difluoride) membranes (Millipore, Billerica, MA, USA). After blocking membranes with fat-free milk in TBS with 0.1% Tween 20 for 2 h at room temperature, the proteins were probed with specific primary antibodies against p18 and glyceraldehyde-3 phosphate dehydrogenase, and then incubated with horseradish peroxidase-conjugated secondary antibody (Santa Cruz Biotechnology, Dallas, TX, USA) at room temperature for 1 h. The protein bands were visualized with an enhanced chemiluminescence solution (Thermo Fisher Scientific).
Statistical analysis
All of the in vitro experiments were performed at least in triplicate. Each value is expressed as the mean ± standard deviation (SD). The statistical software package SPSS version 19.0 (SPSS Inc., Chicago, IL, USA) was used for data analysis. The correlation between miR-21-5p and CDKN2C was measured by Spearman's correlation analysis. For comparisons between two groups and among multiple groups, Student's t-test and one-way ANOVA followed by Dunnett's or Tukey's post hoc test, respectively, were carried out. A P value of less than 0.05 was considered statistically significant.
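The comparisons described above were run in SPSS; equivalent tests are available in open-source tools. A brief Python/SciPy sketch with made-up values, shown only to illustrate the test choices (Spearman correlation, two-group t-test, one-way ANOVA); the post hoc step is indicated in a comment because its exact settings are not reproduced here:

```python
from scipy import stats

# Invented paired expression values (arbitrary units) for the correlation analysis
mir21  = [3.1, 2.8, 4.0, 3.5, 2.2, 3.9, 4.4, 2.9, 3.3, 3.7]
cdkn2c = [0.8, 1.1, 0.5, 0.7, 1.4, 0.6, 0.4, 1.0, 0.9, 0.6]
rho, p_rho = stats.spearmanr(mir21, cdkn2c)
print(f"Spearman rho={rho:.2f}, p={p_rho:.4f}")        # expect a negative correlation

# Two-group comparison, e.g. tumor vs adjacent tissue (invented values)
tumor, adjacent = [3.1, 2.8, 4.0, 3.5], [1.2, 1.5, 1.1, 1.4]
print("t-test p =", stats.ttest_ind(tumor, adjacent).pvalue)

# Multi-group comparison, e.g. NC vs mimic vs inhibitor, by one-way ANOVA
nc, mimic, inhibitor = [1.0, 1.1, 0.9], [2.4, 2.6, 2.2], [0.5, 0.4, 0.6]
print("ANOVA p =", stats.f_oneway(nc, mimic, inhibitor).pvalue)
# A Dunnett or Tukey post hoc test would follow (e.g. statsmodels' pairwise_tukeyhsd)
```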
miR-21-5p was negatively associated with CDKN2C mRNA expression in melanoma tissues
To investigate the potential biological role of miR-21-5p in cell proliferation of melanoma, we analyzed the expression profile of miR-21-5p in 20 paired melanoma tissues and matched adjacent tissues using quantitative real-time PCR. From Fig. 1A, we can see that the expression of miR-21-5p was significantly increased in melanoma tissues compared with adjacent tissues. Meanwhile, we determined the expression of CDKN2C mRNA in these tissues. As shown in Fig. 1B, CDKN2C mRNA levels were notably down-regulated in melanoma tissues in comparison with adjacent tissues. Furthermore, Spearman's correlation analysis further demonstrated that the expression of miR-21-5p was inversely correlated with the CDKN2C mRNA levels in melanoma tissues (P = 0.0264; Fig. 1C).
CDKN2C was a target gene of miR-21-5p in melanoma
To further investigate the molecular mechanisms of miR-21-5p, we predicted its potential target genes. Among these candidate targets, CDKN2C, which is associated with cell-cycle regulation, was selected as a potential target of miR-21-5p. As presented in Fig. 3A, the seed sequence of miR-21-5p is complementary to the 3′UTR of CDKN2C and is highly conserved. Subsequently, a luciferase reporter assay was carried out to further confirm the correlation between miR-21-5p and CDKN2C. As expected, miR-21-5p mimic transfection significantly reduced the WT CDKN2C 3′UTR luciferase activity but did not strikingly affect the MUT CDKN2C 3′UTR luciferase activity (Fig. 3B). Quantitative real-time PCR analysis suggested that the expression of CDKN2C mRNA was significantly suppressed by overexpression of miR-21-5p, whereas higher expression of CDKN2C mRNA was found after miR-21-5p inhibition in A375 and M14 cells (Fig. 3C). Consistently, p18 protein expression was found to be repressed after transfection with the miR-21-5p mimic, but significantly elevated after transfection with the miR-21-5p inhibitor in A375 (Fig. 3D) and M14 cells (Fig. 3E). These data suggest that CDKN2C might be a direct target of miR-21-5p in melanoma cells.
Overexpression of CDKN2C significantly attenuated the effects of miR-21-5p overexpression on melanoma cells
To investigate whether CDKN2C was a downstream functional factor involved in miR-21-5p regulation of cell proliferation and cell-cycle G1/S transition in melanoma, we performed rescue experiments by cotransfecting miR-21-5p mimic and pcDNA-CDKN2C into A375 cells. After 48-h transfection, we first evaluated the proliferative capacity of A375 cells using the CCK-8 assay. As a result, the enhanced proliferative capacity of A375 cells induced by miR-21-5p mimic transfection alone was remarkably attenuated by CDKN2C overexpression (Fig. 4A). Furthermore, the percentage of cells at G0/G1 phase was obviously decreased in A375 cells cotransfected with miR-21-5p mimic and empty pcDNA vector, which was significantly reversed by cotransfection with miR-21-5p mimic and pcDNA-CDKN2C plasmid (Fig. 4B,C). These results imply that addition of CDKN2C could significantly attenuate the impact of miR-21-5p overexpression on cell proliferation and cell-cycle G1/S transition.
Knockdown of CDKN2C notably abolished the effects of miR-21-5p down-regulation on melanoma cells
To further extend the earlier rescue experiments, we transfected A375 cells with miR-21-5p inhibitor together with si-NC or si-CDKN2C, followed by functional analysis. As shown in Fig. 5A, a significantly lower proliferative rate was found in A375 cells cotransfected with miR-21-5p inhibitor and si-NC than in cells cotransfected with NC and si-NC. However, CDKN2C knockdown reversed the impaired cell proliferation ability induced by miR-21-5p down-regulation. Similarly, the cell-cycle G0/G1 phase arrest induced by miR-21-5p inhibitor transfection was abolished by CDKN2C knockdown in A375 cells (Fig. 5B,C). These data further demonstrate that miR-21-5p promotes cell proliferation and cell-cycle G1/S transition by directly down-regulating CDKN2C in melanoma cells.
Discussion
In this study, we observed that the expression of miR-21-5p is up-regulated in melanoma tissues. Consistently, increased miR-21 expression has been observed in malignant melanocytic skin tissues compared with benign tumors [12][13][14]. Different from the study by Martin del Campo et al. [15], who showed enhancement of the invasiveness of melanoma cells by inhibition of tissue inhibitor of metalloproteinases 3, our functional experiments further indicated that miR-21-5p mimics significantly enhanced, whereas the inhibitor suppressed, A375 and M14 cell proliferation. Similarly, miR-21-5p acts as an oncogene by promoting cell growth in colon cancer [10], ovarian cancer [11] and esophageal squamous cell carcinoma cells [20]. In addition, miR-21-5p increases the cisplatin resistance of non-small-cell lung cancer cells via modulating cell proliferation [21].
Related research suggests that cell proliferative capability is primarily controlled by the regulation of cell cycle to monitor DNA integrity [22]. In general, regulation of cell proliferation mainly occurs at the G0/G1 phase, during which cells integrate many signals, including growth factors and DNA damage, to determine whether cells enter into the S phase or exit to their resting stage [23,24]. Next, we found that miR-21-5p knockdown significantly induced cell-cycle G0/G1 arrest, which was significantly reversed by miR-21-5p overexpression. In agreement with our data, miR-21-5p inhibitor induced cell-cycle G0/G1 phase arrest and inhibited the proliferation of colon adenocarcinoma cells [10].
Thus, we speculated that attenuation of cell proliferation by miR-21-5p knockdown might be ascribed to miR-21-5p inhibitor-induced cell-cycle G0/G1 phase arrest in melanoma cells.
Accumulating evidence has shown that miRNAs can target cyclin-dependent kinase/cyclin complexes to affect the cell-cycle G1/S transition in tumor cells. For instance, overexpression of miR-95-3p promotes cell-cycle G1/S transition by directly targeting p21, a CDKI encoded by CDKN1A, in hepatocellular carcinoma [25]. Ablation of endogenous miR-25 restored expression of CDKN1C and significantly inhibited proliferation of glioma cells by promoting normal cell-cycle progression [26]. Enhanced miR-188 expression suppresses human nasopharyngeal carcinoma cell proliferation and cell-cycle G1/S transition [27]. Here, we found that the CDKN2C 3′UTR has a binding site for miR-21-5p, and confirmed that CDKN2C is a direct target of miR-21-5p in melanoma cells. CDKN2C is an important cell-cycle regulator highly related to the G1/S checkpoint [17]. Previous studies verified that CDKN2C, as a CDKI, can inhibit cell proliferation and cell-cycle G1/S transition [28,29]. In our experiments, knockdown of CDKN2C imitated, whereas its overexpression attenuated, the effects of miR-21-5p on melanoma cell growth and cell-cycle G1/S transition. Similarly, the cell-cycle inhibitor CDKN2C has been reported to decrease cell proliferation in medullary thyroid carcinoma [30] and nasopharyngeal cancer [31]. Notably, CDKN2C plays a suppressive role in human melanoma, as reported by Jalili et al. [32]. These facts indicate that miR-21-5p may promote melanoma cell proliferation and cell-cycle progression via down-regulating CDKN2C. Of course, our study has some limitations, including the lack of in vivo results, of the miR-21-5p/CDKN2C-mediated downstream signaling pathway and of the clinical significance of miR-21-5p/CDKN2C in patients with melanoma, which will be further explored in future work.
In summary, this study demonstrated that miR-21-5p is up-regulated in melanoma and accelerates the malignant progression of melanoma by promoting cell proliferation and cell-cycle G1/S transition through targeting CDKN2C. Our preliminary results suggest that the miR-21-5p/CDKN2C axis might be a potential therapeutic target in future melanoma treatment. | 2020-02-25T14:05:12.108Z | 2020-02-23T00:00:00.000 | {
"year": 2020,
"sha1": "ac450d0cbb0844aed4df4fc5013c542f46185e0a",
"oa_license": "CCBY",
"oa_url": "https://febs.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/2211-5463.12819",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfa939d0a5cbc83bd60ea6dfe8193bdf28c4d302",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
210784741 | pes2o/s2orc | v3-fos-license | Hydrocarbon Degradation Potential of Heterotrophic Bacteria Isolated from Oil Polluted Sites in Sakpenwa Community in Rivers State
In this study, hydrocarbon degradation potentials of heterotrophic bacteria isolated from oil-polluted soil were examined. Samples were collected from Sakpenwa, an oil producing community in Tai LGA of Rivers State, Nigeria and analyzed for physicochemical and microbiological properties using standard techniques. Hydrocarbon utilizing bacteria (HUB) were isolated by vapour phase transfer method using mineral salt medium. The biodegradation study was carried out on a standard laboratory shaker for 30 days in Bushnell -Haas agar supplemented with 5% of crude oil. Fifteen (15) bacterial isolates were screened for hydrocarbon degradation potentials of which five isolates exhibited high hydrocarbon degradability. The following parameters were monitored using each of the five isolates and a consortium during the biodegradation study: Colour change, Optical density (OD), pH, Total Petroleum Hydrocarbon (TPH), Total Hydrocarbon
INTRODUCTION
The world depends on petroleum, and its use as fuel has led to intensive economic development worldwide. The great need for this energy source has led to the gradual exhaustion of natural oil reserves and alteration in the balance of the ecosystem. The threat caused by environmental pollution due to the continuous release of petroleum and petrochemical products has been recognized as a significant and serious problem to man and his environment [1,2]. In Nigeria, 80% of the crude oil used is supplied from the South-South region of the country. Therefore, as a result of high oil exploration activities going on in this part of the country over years [3], substances like gaseous emissions, oil spills, effluents and solid waste are discharged into the environment, thus, polluting the environment [4].
The biotic component of the soil, which occupies no more than 5% of the soil space, includes living microbes (bacteria, archaea and fungi) that are responsible for 80-90% of soil processes and soil formation, such as recycling of nutrients, transformation of organic matter and maintenance of soil structure by microbial decomposers [5,6]. Since microorganisms in the soil are involved in various biogeochemical processes, hydrocarbon pollutants entering the environment are susceptible to their activities, and bioremediation therefore largely depends on them. The degradation of hydrocarbons is influenced by many factors (temperature, relative humidity, soil structure, soil moisture, soil pH, soil biota, and the pollutant's structure, dose, toxicity and bioavailability) [7].
The aim of this study was to ascertain the hydrocarbon degradation potential of heterotrophic bacteria isolated from oil polluted sites in Sakpenwa community in Rivers State, Nigeria.
Study Area and Sample Collection
The study site was located at the oil-polluted sites in Sakpenwa community in Ogoni land, Tai Local Government Area, Rivers State, Nigeria. Soil samples were collected 500 m and 1000 m away from the major spill sites. Fifty grams (50 g) of oil-polluted soil was collected from each sampling point using a spade-like soil sampler to excavate soil from a depth of 7-10 cm below the surface. The collected soil samples were transported in plastic nylon bags from the polluted sites to the Department of Microbiology, University of Port Harcourt laboratory for analysis within 24 hours.
Samples Preparation
The soil samples collected were passed through a mesh sieve (2 mm pore size) to remove large particles and were thoroughly mixed. Thereafter, 5 g of each soil sample was suspended in 45 ml of distilled water. The suspended samples were mixed properly in a rotary shaker at 100 rpm at room temperature (28± 2ºC) for 90 minutes to liberate the organisms into the liquid medium [8].
Isolation and Enumeration of Total Heterotrophic Bacteria
The total culturable heterotrophic bacterial count for each degradation set-up was enumerated using the streak plate method [9]. Serial dilutions of the samples were made and 0.1 ml aliquots of the 10^-1 to 10^-4 dilutions of each sample were transferred onto well-dried, sterile nutrient agar plates composed of peptone 5.0 g; beef extract 3.0 g; agar 15.0 g; NaCl 5.0 g; in 100 ml of distilled water, and incubated at 37ºC for 24 hours (in triplicate). After incubation, the bacterial colonies that grew on the plates were counted and subcultured onto fresh nutrient agar plates using the streak-plate method in order to obtain pure cultures of each colony. Discrete colonies on the plates were then transferred onto nutrient agar slants, properly labelled and stored at 4ºC as stock cultures for preservation and identification [10].
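Plate counts obtained in this way are conventionally converted to colony-forming units per gram by correcting for the dilution factor and the plated volume. A small Python sketch of that back-calculation; the counts and dilution used are invented for illustration, and the initial soil-to-diluent ratio may need to be included depending on how the dilution series is defined:

```python
def cfu_per_gram(mean_colony_count, dilution_exponent, volume_plated_ml=0.1):
    """Convert a plate count into CFU per gram (or per ml of the diluted suspension).
    dilution_exponent: 4 for the 10^-4 dilution plate, and so on."""
    dilution_factor = 10 ** dilution_exponent
    return mean_colony_count * dilution_factor / volume_plated_ml

# Invented triplicate counts of 152, 160 and 148 colonies on the 10^-4 plates
mean_count = (152 + 160 + 148) / 3
# Note: the initial 5 g soil in 45 ml diluent adds a further ten-fold step if counted separately
print(f"{cfu_per_gram(mean_count, dilution_exponent=4):.2e} CFU/g")
```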
Enumeration of Total Culturable
Hydrocarbon Utilizing Bacteria (TCHUB) The enumeration of Total Culturable Hydrocarbon Utilizing Bacteria (TCHUB) was done by applying the vapour phase method described by Atuanya and Ibeh [11]. Appropriate diluent of 0.5 ml of the samples collected from the two different set up (1000 m and 500 m away from the polluted site) labelled A and B respectively were inoculated into modified Mineral Salt Agar medium (MSA). The medium was made of 0.42 g MgS0 4 .7H 2 O; 0.297 g KCl; 0.85 g KH 2 PO 4 ; 0.424 g NaNO 3 ; 1.27 g K 2 HPO 4 ; 20.12 g NaCl; 250 mg Amphotericin B (sold as Fungizone) and 20 g agar powder. These were weighed out and hydrated in 1000 mL of sterile distilled water in a conical flask. The media was sterilized by autoclaving at 121ºC, 15Psi for 15 min, before dispensing into sterile Petri dishes. The gelled MSA was inoculated with 0.5 ml of serial dilutions of the polluted soil sites A and B sample respectively. Filter paper (Whatman No 1) was saturated with Bonny light crude oil, and the crude oil impregnated papers were aseptically placed onto the covers of Petri dishes and inverted. The hydrocarbon saturated filter papers supply hydrocarbon by vapour-phase transfer to the inoculums. The plates were incubated at 28ºC±2ºC for seven (7) days. Colonies were counted from triplicates and mean values were record.
Gram Staining
Isolates were subjected to Gram's reaction to check whether they were Gram-positive or Gram-negative. A thin smear was made on a clean grease-free glass slide. The smear was air dried and heat-fixed by rapidly passing it over the flame of a Bunsen burner three times. The fixed smear was covered with crystal violet stain for 30 to 60 seconds and rapidly washed off with clean water, and then flooded with Lugol's iodine for one minute. The iodine was washed off with clean water and the smear was decolorized with alcohol and washed immediately with clean water before counterstaining with safranin for one minute. The stain was washed off with clean water and the smear air-dried and observed under the microscope using the oil immersion objective lens. Gram-positive bacteria stained purple while Gram-negative bacteria stained red or pink [12].
Motility Test
A twenty-four hour (24 h) culture of each isolate in peptone broth was used for the motility test. Five drops of each isolate were placed on a cover slip; a concave slide, smeared with Vaseline at the edge of the concavity, was placed gently on the cover slip. The slide was then carefully inverted and the drop on the cover slip was observed under the high-power objective lens (40×) [12].
Biochemical Tests
The bacterial isolates obtained were characterized and identified based on their cultural, morphological and biochemical characteristics using the scheme of Bergey's Manual of Determinative Bacteriology [13,14].
Catalase test
Three millilitres (3 ml) of 3% hydrogen peroxide solution was poured into different test tubes and sterile glass rod was used to introduce some colonies of the isolates into the test tubes containing the hydrogen peroxide. The contents of the test tubes were observed for gas bubbling. Catalase positive bacteria showed active bubbles while catalase negative did not [12].
Urease test
Christensen's solid medium was prepared by dissolving 1 g peptone; 5 g NaCl; 2 g K2HPO4 and 10 g agar in 1000 ml of distilled water. Phenol red (6 ml) was also added and the pH adjusted to 7.0. This was sterilized at 121ºC for 20 minutes at 15 psi and allowed to cool to 50ºC. Ten millilitres of a 10% glucose solution and 100 ml of a 20% urea solution were sterilized by filtration. The glucose and urea were mixed aseptically with the agar medium, dispensed in 5 ml amounts into bijou bottles and allowed to solidify in the slope position. Urease-positive bacteria produced a pink colour while urease-negative bacteria did not [12].
Sugar fermentation test
The test was carried out to determine the ability of the isolates to ferment various sugars, as indicated by the production of acid and/or gas. The following sugars were used: maltose, glucose and lactose. For each sugar, 0.5 g was dissolved in 50 ml of peptone water and sterilized by membrane filtration. A pinch of phenol red was added as indicator and 5 ml aliquots were aseptically dispensed into sterile test tubes containing sterile Durham tubes, which were inverted in the sterile broth. The broth was inoculated with the isolates using a sterile wire loop and incubated at 30ºC for 48 hours. The contents were observed for a change in colour and/or the production of gas [12].
Citrate utilization test
This test was used to study the ability of the organisms to utilize the citrate present in Simmons' medium as a sole source of carbon for growth. Simmons' citrate agar was prepared, dispensed into test tubes, autoclaved and allowed to solidify in a slanting position. The isolates were streaked from freshly prepared cultures and incubated at 37ºC for 48 hours. The contents of the test tubes were observed for the development of growth with a blue colour, as opposed to the original green colour of the medium, which signifies citrate utilization [12].
Voges-Proskauer test
The medium used for this test was glucose phosphate broth. After sterilization, the medium was allowed to cool, and the test organism was inoculated into the broth and incubated for five days at 37ºC. After incubation, 1.5 ml of 5% alcoholic alpha-naphthol and 0.5 ml of 40% aqueous KOH were added. The test tubes were shaken vigorously and allowed to stand for 5 minutes. The contents were observed for the development of a pink or red colour [12].
Methyl red test
About 5 drops of methyl red solution were added to 2 ml of a five-day-old culture of each isolate grown in glucose-phosphate broth. Red colouration indicated a positive test while a yellow colour indicated a negative test [12].
Biodegradation Studies
The method proposed by Ekpo and Ekpo [15] was used. The biodegradation study of hydrocarbons in the polluted soil was carried out using Bushnell-Haas broth. This medium, consisting of MgSO4 0.02 g; CaCl2 0.2 g; K2HPO4 100 g; KH2PO4 1 g; NH4NO3 1 g; FeCl3 0.05 g, was autoclaved in a 2-litre conical flask, and 99 ml of the medium was dispensed into five (5) conical flasks, into each of which 1 ml of sterile crude oil was added. Precisely 5 ml of each of the bacterial isolates (in liquid broth) was inoculated into the five (5) different conical flasks containing the liquid medium. The day-zero concentration was used as the control for the subsequent days. The bacterial cultures were incubated at ambient temperature (4ºC) on an electric shaker at 100 strokes per minute for 30 minutes each day. Sampling was carried out every 5 days for 30 days. Bacterial utilization of hydrocarbon was monitored by measuring the optical density at a wavelength of 600 nm.
Determination of Total Hydrocarbon Content (THC)
The total hydrocarbon content in the soil was determined by the solvent extraction method described by Chithra and Hema [16]. Five grams (5 g) of soil sample was mixed with 100 ml of n-hexane in a flask and corked. The mixture was shaken using a mechanical shaker for 1 h and then allowed to settle. Using a sterile syringe, an aliquot of the oil extract in the solvent solution (20 ml) was withdrawn and placed in a previously weighed evaporating dish. The dish and its contents were evaporated to dryness in a rotary evaporator and the dish was reweighed to obtain the difference.
The percentage (%) degradation was obtained as follows: weight of residual crude oil = weight of beaker containing extracted crude - weight of empty beaker.
Amount of crude oil degraded = weight of crude oil added to the medium - weight of residual crude oil.
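The two weight relations above can be combined into a percentage figure; a minimal Python sketch is given below. The beaker and crude-oil weights are hypothetical placeholders, and expressing the result as (amount degraded / amount added) x 100 is an assumption on our part, since the text does not state the percentage formula explicitly.

# Minimal sketch of the gravimetric degradation calculation; all weights are hypothetical.
weight_empty_beaker = 52.10          # g
weight_beaker_plus_extract = 52.46   # g, beaker containing the extracted crude
crude_oil_added = 1.00               # g of crude oil added to the medium

residual_crude = weight_beaker_plus_extract - weight_empty_beaker
amount_degraded = crude_oil_added - residual_crude
percent_degradation = amount_degraded / crude_oil_added * 100   # assumed formula
print(f"Degraded {amount_degraded:.2f} g of {crude_oil_added:.2f} g ({percent_degradation:.1f}%)")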
pH, Optical Density and Colour Change Determination
A HANNA HI983903 digital pH meter was used to determine the pH at a sample-to-water ratio of 1:10; that is, 20 g of each sample was weighed into a beaker and 200 ml of distilled water was added to it. The pH electrode was then dipped into the suspension.
A Spectrumlab 7553 spectrophotometer was used to measure the optical density: 20 g of each sample was weighed into a beaker, 200 ml of distilled water was added, and a portion was transferred to a test tube and read against a blank at 600 nm. Furthermore, 4 ml of the sample from the biodegradation set-up flask and the diluted samples were mixed in a conical flask, and the colour change was monitored with a colorimeter. Sampling was carried out every 5 days for 30 days. All analyses were carried out in triplicate, and the mean values obtained were recorded.
Determination of Total Petroleum Hydrocarbon (TPH)
One gram (1 g) of the dried sample was weighed into a digestion flask and 20 ml of the acid mixture (650 ml of conc. HNO3, 80 ml of perchloric acid and 20 ml of conc. H2SO4) was added, and the flask was heated until a clear digest was obtained. The digest was diluted with distilled water to the 25 ml mark of a conical flask, and appropriate dilutions were then made for each experimental set-up.
Statistical Analysis
Analysis of variance (ANOVA) was carried out to determine the significance of differences between the various treatments at the 95% confidence level, using the Statistical Package for the Social Sciences (SPSS, version 20.0).
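As an illustration of the analysis described above, the sketch below runs a one-way ANOVA with SciPy rather than SPSS; the replicate values for the three hypothetical treatments are placeholders, not data from this study.

# Minimal sketch of a one-way ANOVA at the 95% confidence level using SciPy.
from scipy import stats

treatment_a = [68.2, 70.1, 66.9]   # hypothetical % degradation, isolate A
treatment_b = [55.4, 57.8, 54.1]   # hypothetical % degradation, isolate B
treatment_c = [40.3, 42.6, 39.8]   # hypothetical % degradation, isolate C

f_stat, p_value = stats.f_oneway(treatment_a, treatment_b, treatment_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:   # significance threshold corresponding to the 95% confidence level
    print("At least one treatment mean differs significantly from the others.")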
RESULTS
The bacteria present in the various study sites, including the control, as ascertained using standard microscopic, cultural and biochemical methods, are shown in Table 1. The bacterial diversity present in the control soil, Site A (500 m) and Site B (1000 m) of this study is represented in Table 2. The hydrocarbon-utilizing bacterial isolates from the study sites are presented in Table 3.
The information from the degradation studies shown in Table 4 was analysed and represented in bar charts: changes in colour (Pcu) against time (days) (Fig. 1); changes in total hydrocarbon content (%) against time (days) (Fig. 2); changes in optical density (OD) against time (days) (Fig. 3); changes in total petroleum hydrocarbon (mg/ml) against time (days) (Fig. 4); changes in pH against time (days) (Fig. 5); and changes in total culturable heterotrophic bacteria counts (cfu/ml) against time (days) (Fig. 6). Similar findings on petroleum effluents have also been reported. It has also been observed that some microorganisms are more abundant in areas with high concentrations of hydrocarbons. These microflora actively oxidize the hydrocarbons, which are considered another source of carbon for use in the ecosystem. Individual microorganisms can metabolize only a limited range of hydrocarbon substrates; hence assemblages of mixed populations with overall broad enzymatic capacities would be required to achieve considerable biodegradation of petroleum hydrocarbons, as is obtainable in a natural environment.
These results reflect the need for further research and intensive prospecting for novel organisms and genes with high crude oil (hydrocarbon)-degrading potential in the southern region of Nigeria.
CONCLUSION
The elimination of petroleum hydrocarbon pollutants in the environment can be achieved by microbial degradation. Indigenous bacterial species were able to degrade considerable amounts of petroleum hydrocarbons in polluted soil.
Further scale-up studies need to be carried out to ascertain the suitability of the crude oil-degrading isolates for field application. | 2019-10-17T08:58:24.909Z | 2019-07-30T00:00:00.000 | {
"year": 2019,
"sha1": "70c055820901e4d0170bc4f65ec9f9802aa49a50",
"oa_license": null,
"oa_url": "https://www.journalsajrm.com/index.php/SAJRM/article/download/30102/56487",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "91b67935c3752c07003c9f1165b54e33de017e7f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
232016735 | pes2o/s2orc | v3-fos-license | Case report of pulmonary metastasis in a male Wistar rat glioblastoma model
Glioblastoma (GBM) is a highly aggressive central nervous system cancer. Its extracranial metastases have rarely been reported in the past few decades. Moreover, the pathogenesis of extracranial GBM metastases remains unclear. Here, we report a case of pulmonary metastasis in a male Wistar rat C6 GBM model. The male Wistar rat reported here was part of the experimental control group and received no intervention other than orthotopic implantation of C6 GBM cells. On postoperative day 15, the animal showed a highly cellular, pleomorphic tumor with nuclear atypia in the brain (Ki67, approximately 65.7%) and lungs (Ki67, 49.5%). Tumor cells in the lung showed immunoreactivity for glial fibrillary acidic protein. Inflammatory CD68+ cell infiltration, weakly positive E-cadherin, and strongly positive staining for vimentin were observed in the tumors in both the brain and lungs. Based on further morphological analysis, we speculate that the potential metastatic route to the lung might be hematogenous.
Background
Glioblastoma (GBM) is a highly aggressive central nervous system (CNS) tumor, accounting for 14.9% of primary tumors and 46.6% of primary malignant tumors of the CNS 1, 2 . Although chemotherapy moderately increases survival and thus offers hope to GBM patients, the limited bioavailability of drugs to intracranial tumor cells, owing to the impermeability of the blood-brain barrier (BBB), restricts the application of some chemotherapies 3 . Hence, pre-clinical tests in orthotopic animal models that mimic the influence of the BBB seem relatively more suitable than subcutaneous transplantation tumor models for developing better therapeutic strategies for intracranial GBM. Clinically, extracranial metastases of GBM have rarely been reported in the past few decades 4,5 , let alone in orthotopic GBM rats. More disturbingly, the pathogenesis of extracranial GBM metastases is still unclear. In the present study, we report a case of pulmonary metastasis in a male Wistar C6 GBM model rat. This rat belonged to the experimental control group and did not receive any treatment other than orthotopic implantation of C6 GBM cells. Several large, round, and unevenly sized masses in the lung were discovered. Pathologic specimens of the intracranial neoplasm and pulmonary nodules were consistent with GBM.
Case Presentation
The adult male Wistar rat was part of an experimental group for GBM induction. Animals (220-250 g) were obtained from SPF (Beijing) Biotechnology Co., Ltd. (Beijing, China). They were housed in an animal room maintained at 22-25°C, with 12-h light/dark cycles and 50 ± 10% humidity. The rats were fed a normal standard chow diet (Beijing Huafukang Bioscience Co., Inc., Beijing, China) and tap water ad libitum. The experimental protocol was established and performed in accordance with the guidelines, and was approved by the Institutional Animal Care and Use Committee of the Shanxi Province Academy of Traditional Chinese Medicine. Animals were allowed to adapt to the housing environment before the experiments. They were then anesthetized with intraperitoneally administered pentobarbital sodium (50 mg/kg). Once the rats were confirmed to be unconscious by toe-pinch, the cranial region was shaved and the animals were positioned in a stereotaxic frame (Stoelting Co., Chicago, IL, USA). After a hole was made in the cranial bone, 5 × 10⁶ C6 cells were implanted into the brain. The coordinates of the injection were 1 mm anterior and 3 mm lateral to the bregma, and 5 mm ventral to the cortical surface. The injection lasted over 10 min, with the needle kept in position for an additional 10 min. All surgical procedures were conducted using sterile instruments in a clean environment. Bone wax was applied to seal the cranial bone after surgery. After recovering from anesthesia, the rats were returned to the animal care facility and had free access to pelleted food and water.
On postoperative day 15, paraformaldehyde perfusion was performed for further morphological analysis. Surprisingly, during the process of opening the chest cavity, the animal described in this report showed several large, round masses of uneven size in the lungs. To identify the pathological features of the pulmonary nodules, hematoxylin and eosin (H&E) staining and immunohistochemistry for glial fibrillary acidic protein (GFAP) were performed. Light microscopy showed a highly cellular, pleomorphic tumor with nuclear atypia in the brain (Fig. 1A). Immunohistochemical staining against GFAP confirmed the astrocytic nature of the orthotopic tumor (Fig. 1B). Figure 2A shows the histological examination of the pulmonary lobe, in which metastatic lesions of uneven size were present. Similarly, highly cellular, pleomorphic tumor cells with nuclear atypia were observed in the lung tissue (Fig. 2B). Moreover, tumor cells in the lung (Fig. 2C and 2D) showed immunoreactivity for GFAP, demonstrating the same histological characteristics as the intracranial GBM. Immunohistochemical analysis showed that the Ki67-positive index was about 65.7% in the brain and 49.5% in the lung (Fig. 3). Moreover, infiltration of inflammatory CD68+ cells, weakly positive E-cadherin immunostaining, and strongly positive staining for vimentin were observed in the tumors in both the brain and lung (Fig. 3). Further morphological analysis revealed cells with large nuclei, hyperchromatism and heteromorphism, whose morphology is consistent with C6 cancer cells (Fig. 4A and 4C, H&E staining), within the CD31+ microvessels (Fig. 4B and 4D).
Discussion
GBM is the most common and malignant adult brain tumor. The 5-year survival rate is still very low. Current therapeutic strategies for GBM, such as surgery followed by radiation or chemotherapy, often prove ineffective owing to low response rates or resistance 6 . Within a short period of time, GBM cells are able to migrate and invade the surrounding normal brain tissue.
Extracranial metastases of GBM have rarely been reported in past decades. In 1928, the first case of extracranial GBM metastasis was reported 7 . It is worth noting that there has been an increase in the number of extracranial dissemination cases reported in recent years. The development of diagnostic methods perhaps partly contributed to this increase. Moreover, it has been shown that patients with extracranial GBM metastases tend to be younger and healthier, which may be attributed to their longer overall survival. The relatively longer overall survival in these patients makes them more likely to develop extracranial metastases 8 . Notably, even multiple metastatic cases have been reported. For instance, in 2018, Rosen et al. reported a case of metastases in the bones, lung, pleura, liver, mesentery, and subcutaneous soft tissue in a patient with GBM 9 .
Because of the low probability of occurrence, relatively little is known about the underlying pathogenesis of extracranial GBM metastases. Although the reported patients with extracranial metastasis tended to be younger and healthier, the incidence is still low in these two groups of patients 8 . This reality reminds us that factors other than time are more closely related to the rare incidence of extracranial metastases of GBM. Previously, it was presumed that the anatomical hurdles inherent to the cerebral environment help confine systemic dissemination of GBM 10 . The components of the BBB form a highly selective microfilter. However, a previous study revealed that vessels in high-grade glioma show morphological alterations compared with normal vessels 11 . In addition, circulating tumor cells have been identified in the peripheral blood of patients with GBM 12 . Hence, hematogenous spread has been proposed as a route for extracranial GBM metastases. In the pulmonary metastasis case reported in the present study, we found large nuclei, hyperchromatism, and heteromorphism in cells that were morphologically similar to C6 cancer cells in the microvessels of the brain, suggesting the possibility of hematogenous metastasis. However, the specific molecular mechanisms underlying the passage through these anatomical hurdles are unclear. Conventional knowledge assumes that the absence of a true lymphatic system may also prevent tumor cells from forming extracranial metastases. However, accumulating evidence has recently suggested the existence of intradural lymphatic vessels in the brain 13 . Hence, extracranial metastasis of GBM through the lymphatic system needs further study.
In experimental studies, C6 cells transplanted in rats generate putative models of human GBM. In our study, C6 GBM cells were orthotopically implanted in Wistar rats. In our animal experiment, thoracotomy was performed for perfusion. One rat among hundreds showed several large, round masses of uneven size in the lung, which were proven to be pulmonary metastatic foci. The pathological specimens obtained from this animal will be beneficial for exploring the mechanisms of GBM incidence and metastasis in future studies.
Fig. 4.
Hematoxylin and eosin (H&E) staining demonstrated cells with large nuclei, hyperchromatism and heteromorphism, whose morphology is consistent with C6 cancer cells, within vascular structures in both the brain (A, ×400, bar = 50 μm) and lungs (C, ×400, bar = 50 μm). Immunohistochemical analysis further demonstrated that cancer cells were present in the CD31+ microvessels (B, brain; D, lung; ×400, bar = 50 μm). | 2020-12-24T09:09:40.573Z | 2020-12-19T00:00:00.000 | {
"year": 2020,
"sha1": "e19f943dd6f64b7bd7bec3b88c852bc8e4eb9eed",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f72e44f890529ec318b760b87feafb1552362d7e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261566745 | pes2o/s2orc | v3-fos-license | The Ideal Mechanism of Clerical Errors
The problem of clerical errors in articles creates multiple interpretations and ambiguity
INTRODUCTION
A clerical error in itself has no legal consequences (zero legal consequences), but the extent of the legal consequences of a clerical error must also be considered. 1 This is worth noting before elaborating on the clerical error mechanism in the Job Creation Bill, whose draft previously stipulated that clerical errors could be changed by Presidential Regulation. The Constitutional Court has previously interpreted articles that were clerical errors, for example, Article 127 paragraph (1) of Law No. 37 of 2004 concerning Bankruptcy and Suspension of Debt Payment Obligations (PKPU).
Laws and regulations are part of the national legal system, which plays a very important role in the development of national law to realize a legal system based on Pancasila and the 1945 Constitution. 2 Basically, laws and regulations are meant to provide legal certainty and fulfill the community's sense of justice, not the other way around. 3 Maria Farida Indrati Soeprapto stated that laws that have been passed or promulgated can only apply to the general public once they are promulgated in the State Gazette or announced in the State Gazette. 4 As a literature review, there is an earlier study, Ghansham Anand's journal article entitled "Exit Plan Against Clerical Errors in Permanent Legal Decisions: A Preventive Effort for the Realization of Non-Executable Judgments", and Achmad Imam Lahaya's thesis entitled "Analysis of the Public Prosecutor's Error in Amending the Charge Letter". However, that research only discusses the issue of clerical errors in court decisions, not in the context of law formation as studied here, which constitutes the novelty of this research.
The novelty of this paper is that clerical errors in laws and regulations that have been passed must be corrected through judicial mechanisms, namely through the Constitutional Court, so that the judge can interpret whether a phrase in the laws and regulations constitutes a clerical error or not. The urgency of this research lies in legal reform in Indonesia, so that there is an ideal mechanism for overcoming the problem of clerical errors in a legal product, which should remain the authority of the Constitutional Court.
Crucially, the problem of clerical errors in legislation, as part of the process of law formation, has received attention from legal circles, notably the clerical error in the Job Creation Law that purported to grant authority to change laws by Government Regulation. In addition, other laws potentially contain clerical errors, so comparisons are made with the United Kingdom and the United States in solving clerical error problems.
The problem lies in the regulated mechanism for clerical errors: under Article 72 paragraph (1a) of Law of the Republic of Indonesia Number 13 of 2022 on the Second Amendment to Law Number 12 of 2011 concerning the Establishment of Laws and Regulations, a draft law that has been jointly approved by the House of Representatives and the President but still contains technical writing errors is corrected by the head of the House of Representatives discussing the bill and the Government represented by the ministry discussing the bill. This mechanism needs to be analyzed for its coherence, as it should be related to the authority of the judicial institution, namely the Constitutional Court.
The problem to be analyzed concerns the ideal mechanism for resolving clerical errors in Indonesia, given that the process of correcting technical errors in legislation is returned to the head of the House of Representatives discussing the bill and the Government represented by the ministry discussing the bill. A comparison was therefore made between Indonesia and the United States, the United Kingdom, and Singapore.
From the problems described above, the research questions of this paper are: what is the authority of the Constitutional Court in interpreting laws in Indonesia, what is the problem of clerical errors in laws and regulations in Indonesia, and how do Indonesia, the United States, the United Kingdom, and Singapore compare? The paper then examines how to strengthen the role of the Constitutional Court against clerical errors as a preventive measure against multiple interpretations of the substance of articles in Indonesian laws and regulations.
METHOD
The research method used in this study is normative juridical research with a statutory approach. In addition, this research is exploratory, searching for new ideas that can answer existing problems. 5 Thus, the relationship between the authority of the Constitutional Court as a judicial institution and the problem of clerical errors in laws and regulations in Indonesia can be studied through normative juridical research with an exploratory approach to laws and regulations, so that relevant principles are found to answer the problems, together with standards or benchmarks for reflection on ideal laws and regulations that do not contradict the principle of prudence whose neglect causes clerical errors.
Clerical Error Authority
According to Article 24C paragraph (1) of the Constitution of 1945, the Constitutional Court has the power to render judgments in the first and final instance, with binding decisions, in cases involving the validity of laws in comparison to the Basic Law, disagreements regarding the legitimacy of state institutions whose authority is granted by the Basic Law, the dissolution of political parties, and cases involving the outcome of general elections.
The issue with the judiciary's limited role in addressing typographical errors, in this case the Constitutional Court, is that the Law on the Establishment of Laws and Regulations, notably Article 72, governs such errors. Paragraph (1): after bills have been jointly approved by the President and the House of Representatives, the leadership of the House of Representatives submits them to the President for ratification into law. Paragraph (1a): if a draft law that has been approved by the House of Representatives and the President in accordance with paragraph (1) still contains technical writing errors, corrections must be made by the head of the House of Representatives discussing the draft law and the Government, represented by the minister discussing the draft law. Paragraph (1b): the corrections referred to in paragraph (1a) must be approved by both the head of the House of Representatives discussing the draft law and the representative of the Government discussing the draft law. Paragraph (1c): the correction and submission of the draft law referred to in paragraphs (1) to (1b) must be completed within a maximum of 7 (seven) days from the date of mutual agreement.
Additionally, the process outlined in Article 73 paragraph (1) of the Law on the Establishment of Laws and Regulations applies to clerical errors in draft laws that have already been submitted to the President under Article 72. The ministry that coordinates government affairs in the area of the state secretariat, along with the ministry that discussed the draft law, meets to discuss the issue, and corrections are made by the House of Representatives leadership by including them in the discussion of the bill. (2) Within 30 (thirty) days of the day it was jointly adopted, the draft law becomes enacted and is binding. (3) If the draft is thus found valid, the ratification clause reads as follows: this Law is declared valid under the provisions of Article 20 paragraph (5) of the Constitution of the Republic of Indonesia of 1945. (5) Before the text of the Law is published in the State Gazette of the Republic of Indonesia upon being ratified, the ratification clause mentioned in paragraph (4) shall be attached to the last page of the Law.
Article 72, paragraph 1a, of the Law on the Establishment of law and regulations explanation: Technical writing errors include illegible handwriting, erroneous article or verse references, clerical errors, and/or improper chapter, section, paragraph, article, verse, or item titles or insignificant sequence numbers.
According to Law Number 30 of 2014 concerning Government Administration, the principle of accuracy requires that a decision or action be based on complete information and documents to support the legality of the decision's determination and/or implementation. As a result, the decision or action in question must be carefully prepared before it is decided upon and/or carried out.
The framer of a draft law must pay attention to the principle of accuracy in legislation so that there are no clerical errors; this, of course, refers to the Principle of Careful Action or the Principle of Accuracy. This principle requires that the government or administration act carefully in carrying out various government tasks so as not to cause harm to citizens. When it comes to government action in issuing decisions, the government should carefully consider all factors and circumstances relating to the material of the decision, hear and consider the reasons put forward by interested parties, and should also consider the legal consequences arising from the administrative decision of the state. The principle of prudence requires that the governing body, before making a decision, examine all relevant facts and include all relevant interests in its consideration. When important facts are poorly scrutinized, it means they are not careful. The principle of prudence also implies that a government body should not easily deviate from the advice given, especially if experts in a particular field sit on the advisory committee. 6 Technically, the characteristics of the language of laws and regulations under Law of the Republic of Indonesia Number 12 of 2011 are:
a. straightforward and precise, to avoid similarity of meaning or confusion;
b. sparing, so that only the necessary words are used;
c. objective, suppressing subjective sense (no emotion in expressing goals or intentions);
d. standardized, with the meaning of words, expressions, or terms used consistently;
e. providing careful definitions or limits of understanding; and
f. words meaning singular or plural are always formulated in the singular.
Positive law solutions do not show legal certainty for the clerical error mechanism. Clerical errors should be returned to the judicial mechanism, in this case the relevant authority, namely the Constitutional Court, which can provide legal interpretation of incorrect laws and regulations, because the Constitutional Court does not yet have the authority to provide legal certainty over clerical errors in the omnibus law method and/or in the formation of other laws. For the mechanism not to consist of corrections by the head of the House of Representatives discussing the draft law and the Government represented by the ministry discussing the bill, but rather to rest on the authority of the Constitutional Court exercised through judicial review against the Constitution, legislators must pay attention to the principle of prudence in making laws and regulations. This is because laws that have already been passed cannot be changed by the House of Representatives and must instead be reviewed by a judge, namely at the Constitutional Court.
Strengthening the Constitutional Court's Authority Against Clerical Errors: A Step Back to the Judicial Mechanism
The essence of the legislative process is not just producing laws but shaping the legal order needed by society. Ineffective legislative processes can eliminate enforceability. Laws that rely only on formal validity stagger along, because they are not supported by the legal needs of society. 7 If this contradicts the nature of the legislative process, problems can arise, as the following examples show. The controversy that arose over these clauses took the government by surprise and led to claims that Article 170 was a "typo." Achmad Baidowi, a House of Representatives member and secretary of the Partai Persatuan Pembangunan group, criticized the administration for acting "amateurishly" and "unprofessionally." However, Article 170 was a "misunderstood command," according to Dini Purwono, a member of the Presidential Special Staff, and the public could still provide feedback on the law via the website of the coordinating minister for the economy. But that is insufficient: this massive and complex measure needs to be postponed and, in its stead, a legitimate, careful, and transparent public process should be carried out. 8 Mr. Staunton made a clear declaration. Mr. Staunton predicts that there will not be many cases, because the issue mostly affects oral fluid tests, which have essentially replaced impairment testing in recent years; however, it still arises on occasion. Six seasoned lawyers and traffic law specialists claimed they were not aware of the issue being raised in court. Given the way Garda arrest data are maintained, it is challenging to estimate how many cases the issue may have affected since 2016. 9 The reason for choosing the USA and the UK is to illustrate that the interpretation of an article that is a clerical error in a legal product should be the authority of the judiciary. Although they do not have constitutional courts, the resolution of clerical errors lies with the judiciary. The difference in legal systems is also a reason for the comparison.
Comparison of Clerical Error Systems
United States of America Clerical Error System. For Democratic politicians who are becoming more and more desperate to allow more illegitimate votes to influence forthcoming elections, it would have been quite an opportune "accident." It also offers further evidence to those who have long suspected extensive election-related corruption in the United States, which appears to virtually always benefit the far-left Democrat Party cause. According to The New York Post, a mistake in the state's automatic voter registration statute would have made it necessary for non-citizens to register even though they were ineligible, but politicians promised to correct it. The bill gives instructions to specific state agencies, such as the Department of Motor Vehicles. By comparison, in the United Kingdom, in the Marley case Lord Neuberger examined section 20(1) of the act, which, among other things, specifies that for a will to be rectified, it must fail to carry out the testator's wishes due to either a clerical error (paragraph a) or the drafter's failure to understand the testator's instructions (paragraph b). After considering the idea of a "clerical error," Lord Neuberger concluded that the phrase's ordinary connotation did apply in situations where people had signed each other's wills. 10 He thought that, since it "arose in conjunction with office activity of a normal kind," it might have been of a clerical nature because it was a mistake in the traditional sense. He concluded that the situation might be corrected by adding the pertinent language from Mrs. Rawlings' will to Mr. Rawlings' will. Therefore, Lord Neuberger may have broadened the concept of what a "clerical error" is. Even though the act does not help with the definition of what constitutes a "clerical error," after the act was passed practitioners were granted a legal right to rectification, and case law attempted to define the term "clerical error" after the act. In the Court of Appeal, because the error was held not to be a "clerical error," the court upheld the first-instance determination that the will could not be amended. "This is a result I have reached with much sorrow," Lady Justice Black said, "but in 1982, parliament made very limited amendments to the law, and it would not be right for a court to go beyond what parliament then decided." If the Supreme Court had not reversed this decision, it would have resulted in an unfair and unjust consequence, thereby returning the law to its position before the 1982 legislation, when wills could only be construed rather than rectified. 11 A clerical error is a typographical error or the accidental addition or removal of a word, phrase, or figure that alters the meaning of a letter, paper, or document. This kind of error is the outcome of oversight. Such an error is written accidentally, not with intent, and ought to be easily corrected without resistance. The plaintiff is not bound by a court reporter's error that incorrectly records the amount of money owed to them by the defendant as $50 rather than $500, because it is merely a clerical error. Once the court learns of such an error, it may act spontaneously, on its own, or in response to a motion from either side. 12 The slip rule is a procedure through which a decision or order may be corrected by the court if it contains an unintentional error or omission. The court may "at any moment remedy an inadvertent slip or omission in a judgment or order," according to CPR 40.12. The phrase "any time" simply signifies that the power is not limited to extant orders (IC v.
RC; note that although this is a family case, the court was evaluating a provision that is equivalent to CPR 40.12 in the Family Procedure Rules). 13 United States' Clerical Error System. In comparison, in the United States, the United States requested an order from the court to fix two clerical errors in the judgment and committal orders of sentencing under Fed. R. Crim. P. 36. According to Rule 36, the court may, after giving any appropriate notice, correct clerical errors in judgments, orders, or other parts of the record, as well as errors arising from oversight or omission. The Judgment and Committal Orders sentencing the defendants contained two clerical errors. Before bringing this issue to the Court's attention, the United States waited until it could examine the transcript of the sentencing, which had only just been obtained. 14 The Appeals Court determined that the modification ruling did not accurately reflect the judge's intentions since, as stated in her reasons, she only meant to follow the guidelines established by the divorce decree. The error was also referred to as a "calculation error" in the order revising the modification decision, and the Appeals Court determined that the judge's interpretation of her original purpose deserves some consideration. Additionally, Rule 60(a) permits judges to remedy mistakes resulting from oversights and clerical mistakes. As a result, the Appeals Court determined that the error was not material and upheld the amendment of the modification judgment. 15 Thus, the tabulation comparing the resolution of clerical error problems in legislation in Indonesia, the United States, the United Kingdom, and Singapore is as follows. Even in the US, there is a motion form to correct a clerical error, which is filed with the court; an example of a clerical error revision form is shown below: 16
Figure 1. Form of Clerical Error Revision
Source: Stan Burman (Sample Motion To Correct Clerical Error in US District Court). In order to request the rectification of a clerical error in a United States court, one must demonstrate that the error or errors are clerical in nature, or that they are the consequence of oversight or omission, and that they are causing the court's records to inaccurately reflect the orders or judgments made. A Rule 60(a) request for rectification of the error must be submitted each time a clerical or mechanical error is found in the Court's records. Federal Rule of Civil Procedure 60(a) is applicable due to Federal Rule of Bankruptcy Procedure 9024. The requirement for submitting a request for the correction of a clerical error in a United States court is therefore that the error(s) in question are either clerical errors or errors brought about by oversight or omission. 17 However, according to Dini Purwono, a member of the Presidential Special Staff, Article 170 of the Job Creation Draft Law has a typo; it was a "misunderstood instruction," and the public may still comment on the bill via the website of the coordinating minister for the economy. But that is not enough. It is necessary to put this substantial and worrying bill on hold and replace it with a proper, sensible, and open method. 18
Reflection on Laws and Regulations Settings
Towards an ideal setting for laws and regulations, to avoid clerical errors and not conflict with the principle of accuracy, several things are needed, as follows:
1. The law on laws and regulations must be responsive-based.
2. Following Von Savigny, there is a volksgeist, which in the Indonesian context is Pancasila. Under the concept of the Pancasila Law State, the formation of laws and regulations must refer to Pancasila and the Constitution of 1945.
3. The law on laws and regulations must be a guideline for the formal and material aspects of the formation of good laws and regulations.
4. The principles of the formation of laws and regulations must continue to be fulfilled, with attention paid to caution.
5. The role of the Constitutional Court in conducting legal interpretation of clerical errors in draft laws approved by the House of Representatives and the Government must be optimized.
CONCLUSION
The mechanism for clerical errors in laws should be resolved in the Constitutional Court. Ideally, the issue of a clerical error is interpreted by the judiciary, in this case the Constitutional Court, since a clerical error is contrary to the principle of carefulness. Under the current arrangement, when a bill that has been mutually approved by the House of Representatives and the President still contains a technical writing error, improvements are made by the head of the House of Representatives discussing the bill and the Government represented by the ministry discussing it. What is regulated in Law of the Republic of Indonesia Number 13 of 2022 therefore needs to provide parameters for technical writing errors, so that the parties are not biased in interpreting, or even changing, the meaning of articles that have been approved by the House of Representatives and the President, because bills that have been amended after approval are not necessarily validated again by the parties who approved them once improvements have been made by the head of the House of Representatives discussing the draft law.
For draft laws that have been jointly approved by the House of Representatives and the President but still contain technical writing errors, because there are no parameters or benchmarks for whether an article is merely a technical clerical error or requires legal interpretation, correction should be done through a judicial mechanism, namely in the Constitutional Court, rather than through improvements made by the head of the House of Representatives discussing the bill and the Government represented by the ministry discussing the draft law. This follows the practice in the United Kingdom, the United States, and Singapore, which return technical typos to judicial mechanisms. In the US, forms can even be submitted to the court against clerical errors, making it easier for the system to deal with bills that have been approved by the House of Representatives and the President but still contain clerical errors. | 2023-09-07T15:04:52.619Z | 2023-09-04T00:00:00.000 | {
"year": 2023,
"sha1": "da816ac0b00cdcde30dafea3efceb8d473b6ac78",
"oa_license": "CCBYNC",
"oa_url": "https://ejournal.balitbangham.go.id/index.php/dejure/article/download/3650/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "81cce8c77869d55164c4065edb3e8b5edf7aadba",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
} |
268056565 | pes2o/s2orc | v3-fos-license | Efficient photocatalytic bactericidal performance of green-synthesised TiO2/reduced graphene oxide using banana peel extracts
In this study, the fabrication of titanium dioxide/reduced graphene oxide (TiO2/rGO) utilising banana peel extracts (Musa paradisiaca L.) as a reducing agent for the photoinactivation of Escherichia coli (E. coli) and Staphylococcus aureus (S. aureus) was explored. The GO synthesis was conducted using a modified Tour method, whereas the production of rGO involved banana peel extracts through a reflux method. The integration of TiO2 into rGO was achieved via a hydrothermal process. The successful synthesis of TiO2/rGO was verified through various analytical techniques, including X-ray diffraction (XRD), gas sorption analysis (GSA), Fourier-transform infrared (FT-IR) spectroscopy, ultraviolet–visible diffuse reflectance spectroscopy (UV–Vis DRS), scanning electron microscope-energy dispersive X-ray (SEM-EDX) and transmission electron microscopy (TEM) analyses. The results indicated that the hydrothermal-assisted green synthesis effectively produced TiO2/rGO with a particle size of 60.5 nm. Compared with pure TiO2, TiO2/rGO demonstrated a reduced crystallite size (88.505 nm) and an enhanced surface area (22.664 m2/g). Moreover, TiO2/rGO featured a low direct bandgap energy (3.052 eV), leading to elevated electrical conductivity and superior photoconductivity. To evaluate the biological efficacy of TiO2/rGO, photoinactivation experiments targeting E. coli and S. aureus were conducted using the disc method. Sunlight irradiation emerged as the most effective catalyst, achieving optimal inactivation results within 6 and 4 h.
Introduction
Bacterial contamination of water by E. coli and S. aureus acts as a conduit for these bacteria to infect organisms. A method presently considered safe for degrading microorganisms involves photocatalysis, aided by reducing agents derived from biodegradable materials. This approach is deemed safer owing to its minimal generation of residual substances and its environmentally benign nature, employing substances that are non-harmful to the ecosystem [1].
Semiconductor titanium dioxide (TiO2) is widely utilised in photocatalysis, favoured for its low toxicity, ready availability, cost-effectiveness, superior performance, high thermal stability and easy preparation [2]. Moreover, TiO2 is recognised for its remarkable antibacterial properties [3]. Nevertheless, its application under natural sunlight is limited due to its considerable band gap (3.0 eV for the TiO2 rutile phase and 3.2 eV for the TiO2 anatase phase), restricting its activity primarily to the UV spectrum [4,5].
To augment the efficacy of TiO2, various strategies involving metals, non-metals and carbon materials have been investigated. Among these, carbon allotropes have garnered substantial attention due to their exceptional electrical conductivity, extensive surface area, high thermal conductivity and remarkable mechanical strength. Incorporating carbon-based materials, particularly graphene, into TiO2 modifications has resulted in composites exhibiting enhanced activity compared with pure TiO2 [6,7]. This enhancement is attributed to the role of graphene as an electron acceptor, which helps mitigate electron-hole recombination during TiO2 excitation, thereby improving the performance of TiO2 under visible light [8]. However, the hydrophobicity of graphene poses challenges in coupling metal oxides to its surface. A graphene derivative, reduced graphene oxide (rGO), retains similar characteristics to pure graphene but exhibits hydrophilicity, fewer oxygen groups and structural defects in its carbon framework [9]. Furthermore, rGO is known for its high antibacterial activity, which is ascribed to the generation of reactive oxygen species (ROS), such as hydroxyl radicals (OH•) and superoxide radicals (•O2−), facilitating the degradation of organic pollutants and the inactivation of bacteria [10,11].
The synthesis of rGO involves a series of processes, including the oxidation of graphite, exfoliation and the subsequent reduction of graphene oxide. Akbar et al. emphasised that chemical reduction is recognised for its simplicity, cost-effectiveness and suitability for large-scale production [12]. Numerous reducing agents have been used for reducing GO, including hydrazine hydrate [13], NaBH4 [14], SOCl2 [15] and carbon monoxide [16]. Nevertheless, some of these chemicals are known for their toxicity and potential environmental hazards. Consequently, there is a pressing need to develop synthesis methods for rGO that are environmentally benign and utilise non-harmful materials.
Numerous studies have explored using eco-friendly reducing agents to convert GO to rGO, highlighting the shift towards environmentally sustainable methodologies. Yang et al. [17] utilised Salvia spinosa, a plant-based reducing agent, to examine its photothermal effects on pancreatic cancer cells. Bhattacharya et al. [18] employed aloe vera extract in the reduction process, aiming for dye removal applications. Mahjuddin and Ochiai [19] synthesised rGO using lemon juice as a reducing agent and applied it for the adsorption of methylene blue. Rani et al. [20] successfully produced green rGO nanosheets using lemon peel extracts (Citrus limon). Verastegui-Dominguez et al. [21] developed an environmentally friendly rGO using natural extracts from Capsicum chinense (Habanero) and Larrea tridentata (Gobernadora) for the degradation of methylene blue. Similarly, Buasuwan et al. [22] investigated the effectiveness of banana peel and juice extracts in reducing GO. Khojah et al. [23] explored the eco-friendly reduction of GO using mint extract, noting a more pronounced reduction effect than that achieved with Tribulus terrestris extract. Rai et al. [24] employed Citrus maxima (Pomelo) juice as a reducing agent for rGO synthesis, highlighting its potential use in supercapacitors. Finally, Mahmoud et al. [25] embarked on rGO synthesis using Ziziphus spina-christi extracts (Christ's thorn jujube) for catalysis, antimicrobial activity and antioxidant applications, demonstrating the versatility of plant-based reducing agents.
Bananas, tropical fruits with a substantial global production of approximately 48.9 MT [26], often leave discarded peels post-consumption, representing an important source of unused waste. Banana peels comprise 1.92%-3.25% pectin, a biocompatible and biodegradable heteropolysaccharide that serves as an effective reducing agent, facilitating oxidation-reduction reactions [27,28]. Phytochemical analyses, such as the one conducted by Velumani [29], reveal that banana peels (Musa paradisiaca L.) are rich in alkaloid and phenolic compounds, including tannins, saponins and flavonoids, endowing them with antioxidant and antibacterial properties. Buasuwan et al. [22] successfully harnessed a green reducing agent from banana peel and juice extract to reduce GO, efficiently eliminating oxygen-containing groups. Similarly, Olana et al. [30] synthesised a TiO2/rGO nanocomposite utilising waste extracts from Citrus sinensis and Musa acuminata peels, demonstrating its effectiveness in the photocatalytic degradation of methylene blue.
In this study, GO was synthesised through a modified Tour method employing KMnO4 with a mixture of H2SO4 and H3PO4. This method offers an environmental advantage as it avoids the emission of toxic gases. The rGO was produced using a green reducing agent selected for its effective reduction of GO. Employing banana peel extracts as reducing agents to produce rGO epitomises an innovative approach to waste utilisation. This method presents a practical, cost-effective, non-toxic, biocompatible and environmentally benign pathway, leveraging the abundant availability of banana peels. To the best of our knowledge, research specifically dedicated to the synthesis of TiO2/rGO nanocomposites using banana peel extracts as a reducing agent for the photoinactivation of E. coli and S. aureus remains unexplored. In this research endeavour, our study focuses on the potential activity of the TiO2/rGO nanocomposite in effectively deactivating bacteria. This process harnesses sunlight energy for photocatalysis, offering a promising avenue for sustainable and efficient bacterial inactivation.
Material synthesis
Banana peel extraction
Here, 20 g of Kepok banana peels was mixed with 100 mL of distilled water and heated to 95 °C for 2 h. Next, the mixture was filtered using Whatman filter paper, and the resulting filtrate was collected in a glass bottle. Subsequently, the extract was stored in a refrigerator overnight [31].
Synthesis of GO
Graphite (0.5 g) was oxidised with 90 mL of H2SO4 and 10 mL of H3PO4. After adding 4.5 g of KMnO4, the mixture was stirred for 8 h at 50 °C. Post-stirring, the mixture was allowed to settle and cool to room temperature. Subsequently, 250 mL of distilled water and 10 mL of 30% H2O2 were incrementally added to the mixture. The resultant solution was then filtered, and the collected precipitate was thoroughly washed with 100 mL of 5% HCl. This step was succeeded by another washing with 100 mL of deionised water. The mixture was then stirred at 60 °C for an additional 8 h. The end product, a paste of GO, was subsequently dried in an oven at 60 °C for 2 h, yielding GO powder [32].
Synthesis of rGO
The synthesis of rGO began with the dispersion of 0.1 g of GO in 30 mL of distilled water. This mixture was subjected to sonication for 30 min. Subsequently, 2 mL of banana peel extract was added to the mixture, then placed under reflux at a temperature of 95 °C for 5 h. Post-reflux, the solution was filtered to obtain a precipitate. This precipitate was thoroughly washed with 100 mL of deionised water to remove residual impurities. Finally, the washed precipitate was dried in an oven at 60 °C for 6 h, forming the rGO powder [33].
Synthesis of TiO2/rGO
The TiO2/rGO composite was synthesised utilising the hydrothermal method. Initially, 0.04 g of rGO was dissolved in a solvent mixture consisting of 80 mL of distilled water and 40 mL of ethanol. This mixture was then sonicated for 30 min. Next, 0.4 g of TiO2 was added to the solution, followed by continuous stirring for 2 h to achieve a homogenous mixture. This mixture was subsequently transferred into an autoclave and subjected to a hydrothermal reaction by placing it in an oven maintained at 120 °C for 3 h. Next, the mixture was further stirred for an additional 2 h under UV and then filtered to collect the precipitate, which was thoroughly washed with distilled water. Finally, the washed precipitate was dried in an oven at 70 °C for 12 h, forming the TiO2/rGO composite powder [34].
Study of photocatalytic activity
The efficacy of the photoinactivation test on bacteria was assessed using the disc diffusion method. Optimal concentrations of the TiO2/rGO composite were determined by experimenting with different ratios of TiO2/rGO to the medium, specifically NB. The concentration ratio used for E. coli was 1:1, converting the materials from powder form to solution form. For S. aureus, a concentration ratio of 1:9 was utilised. The effectiveness of photocatalysis was evaluated under three different types of light, namely UV light, sunlight and the absence of irradiation, each at three different concentrations. After identifying the optimal type of light and concentration, exposure times of 2 h, 4 h and 6 h were tested. The results obtained were the formation of clear zones on the media consisting of NA and MHB.
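Because the disc method yields clear-zone readings in replicate, the short sketch below shows one way such readings could be summarised per light source and exposure time. The diameters listed are hypothetical placeholders and are not measurements from this study.

# Minimal sketch: average hypothetical triplicate inhibition-zone diameters per condition.
from statistics import mean, stdev

zones_mm = {
    ("sunlight", "6 h"): [14.2, 13.8, 14.5],        # hypothetical values
    ("UV light", "6 h"): [11.0, 10.6, 11.3],
    ("no irradiation", "6 h"): [6.1, 6.4, 5.9],
}

for (light, time), values in zones_mm.items():
    print(f"{light:15s} {time}: {mean(values):.1f} +/- {stdev(values):.1f} mm")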
X-ray diffraction (XRD) characterisation
Diffractogram analysis was performed utilising a Bruker D2 Phaser (Germany) within a diffraction angle range of 2θ = 2°-80° to generate distinctive peaks characteristic of each material. For the determination of crystal size, the Debye-Scherrer equation was employed, expressed as D = Kλ / (β cos θ), where D is the crystal size expressed in nanometers (nm), K is the crystal shape factor constant (generally a value of 0.94 is used), λ is the wavelength of the X-ray radiation source (1.54056 Å), β is the value of the full width at half maximum (FWHM) (rad) and θ is the diffraction angle (degree). The Debye-Scherrer equation elucidates that the calculated crystal size is inversely proportional to the FWHM value. Furthermore, the intensity of each crystal plane substantially impacts the FWHM value, with a high intensity correlating to a reduced FWHM value. By incorporating the values of wavelength, intensity, 2θ and FWHM derived from the X-ray diffraction (XRD) test results, the Debye-Scherrer equation can be modified to facilitate the computation of crystal size [35].
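A minimal sketch of the Scherrer calculation just described is given below; the FWHM and peak position used are illustrative assumptions rather than the fitted values behind Table 1.

# Minimal sketch of the Debye-Scherrer relation D = K*lambda / (beta * cos(theta)).
import math

K = 0.94                   # shape factor constant used in the text
wavelength_nm = 0.154056   # Cu K-alpha wavelength (1.54056 Angstrom) expressed in nm

def scherrer_size(fwhm_deg, two_theta_deg):
    beta = math.radians(fwhm_deg)             # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)   # Bragg angle is half of 2-theta
    return K * wavelength_nm / (beta * math.cos(theta))

print(f"D = {scherrer_size(0.10, 27.48):.1f} nm")   # hypothetical FWHM of 0.10 degrees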
The diffractogram peaks of the materials are depicted in Fig. 1. Both GO and rGO exhibit a low-intensity peak at 2θ = 10.03° (111), corresponding to a layer spacing of 0.879 nm. For GO, a broader peak is observed at 2θ = 42.53° (622), indicating a layer spacing of 0.2121 nm, while for rGO, a broadened peak appears at 2θ = 10.768° (111) with a layer spacing of 0.2104 nm, as denoted by JCPDS No. 01-082-2261. Turbostratic carbon, a unique class of carbon, exhibits structural ordering intermediate between amorphous carbon and crystalline graphite phases. This intermediate phase contributes to the indeterminate phase of GO, manifesting as a broadened band at the weak peak of 42.53° (622), as indicated by JCPDS No. 01-082-2261. Furthermore, the transformation from GO to rGO is temperature-dependent, broadening these bands [36]. The rGO peak is broader than that of GO, which retains sharper peaks. This broadening in rGO is attributed to the sonication process during rGO formation, wherein GO undergoes exfoliation to yield rGO. The reduction in layer spacing from 0.879 nm to 0.2104 nm confirms the successful reduction of oxygen-containing groups and the synthesis of rGO.
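The layer spacings quoted above follow from Bragg's law, d = lambda / (2 sin(theta)); the sketch below reproduces the conversion for two of the peak positions mentioned in the text (the peak positions and wavelength are taken from the text, while the rounding is ours).

# Minimal sketch of the Bragg d-spacing calculation for the quoted peak positions.
import math

wavelength_nm = 0.154056   # Cu K-alpha wavelength in nm

def d_spacing(two_theta_deg):
    theta = math.radians(two_theta_deg / 2)
    return wavelength_nm / (2 * math.sin(theta))

for label, two_theta in [("GO/rGO (111)", 10.03), ("TiO2 (110)", 27.481)]:
    print(f"{label}: 2-theta = {two_theta} deg, d = {d_spacing(two_theta):.4f} nm")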
TiO2 and TiO2/rGO exhibit distinct and pronounced diffractogram peaks. In TiO2, the most substantial intensity is observed at 2θ = 27.481° (110), corresponding to an interlayer spacing of 0.3243 nm. Similarly, the peak with the highest intensity for TiO2/rGO is noted at 2θ = 27.469° (110), reflecting an interlayer spacing of 0.3244 nm. The TiO2 and TiO2/rGO peaks show remarkable similarity, with only minor shifts. This resemblance is predominantly due to the substantial presence of TiO2 within the TiO2/rGO nanocomposite, sustaining a ratio of 1:10 (TiO2:TiO2/rGO). Analysing the diffractogram peaks, it is evident that GO and rGO present an amorphous phase, whereas TiO2 and TiO2/rGO are in a crystalline phase. The successful synthesis of the TiO2/rGO nanocomposite is evidenced by the presence of TiO2 in the rutile phase, as confirmed by JCPDS No. 00-021-1276.
The Debye-Scherrer equation was applied for material crystal size determination with the mean crystal sizes for each sample are listed in Table 1.The average crystal size for GO is determined to be 7.983 nm, whereas for rGO, it is 5.707 nm.These results indicate that the crystal size of rGO is smaller than that of GO, attributed to the sonication process employed in synthesising rGO, where intensive ultrasonic activity contributes to particle size reduction.In the case of TiO 2 , the average crystal size is measured at 83.375 nm.Conversely, the average crystal size for the TiO 2 /rGO composite is slightly larger, at 88.505 nm.Typically, TiO 2 exhibits a smaller crystal size compared with the TiO 2 /rGO composite.This disparity is primarily due to the synthesis process of TiO 2 /rGO, wherein the combination of TiO 2 and rGO inhibits crystal growth.
Gas sorption analysis (GSA) characterisation
The surface area of the materials was evaluated at 77 K through N2 adsorption/desorption measurements using a Quantachrome Novatouch LX-4 (Austria). Fig. 2 illustrates the pore structure of the materials, classified as type IV (mesoporous) according to the IUPAC classification, as evidenced by hysteresis loops at high relative pressures and a pore size distribution ranging from 2 to 50 nm [37,38]. Calculations based on the Barrett-Joyner-Halenda method revealed that the pore size distribution peaks at a radius of about 2 nm, consistent with the observed 2-50 nm mesopore range. The surface area of the materials was determined using the Brunauer-Emmett-Teller method, giving 39.735 m²/g for GO and 47.354 m²/g for rGO, as detailed in Table 2. The data indicate that the surface area of rGO is larger than that of GO, attributable to the reduction process involved in rGO synthesis, which enhances the material's surface area. Youn et al. [39] investigated the oxidation state of GO in relation to the surface area of rGO sheets; their study revealed that GO with a higher oxidation state, indicating a substantial concentration of oxygen functional groups intercalated between the graphene layers, facilitated the formation of rGO with an increased specific surface area. The expanded surface area of rGO underlines its improved electrical conductivity, a consequence of the reduction process employing banana peel extracts. This enhanced electrical conductivity facilitates efficient electron transport during the photocatalytic process [40].
The surface area of TiO2 is 14.167 m²/g, whereas that of TiO2/rGO is 22.664 m²/g. These results suggest that integrating rGO into TiO2 impedes its agglomeration, thereby giving the TiO2/rGO composite a considerably larger surface area than TiO2 [41]. Embedding TiO2 onto rGO is therefore a successful strategy, yielding a more efficient photocatalytic material, largely owing to the increased surface area provided by the rGO incorporation.
Fourier-transform infrared (FT-IR) characterisation
Functional groups in the GO, rGO, TiO2 and TiO2/rGO materials were characterised by FT-IR analysis using a UATR Spectrum Two spectrometer (PerkinElmer, United States). The FT-IR spectra were produced by plotting transmittance against wavenumber. Fig. 3 depicts the FT-IR spectrum of each material, highlighting the wavenumbers corresponding to the various functional groups, and Table 3 details the functional groups assigned to these wavenumbers. Analysis of the functional groups in the GO material reveals the presence of oxygen-containing groups. Notably, a peak at 586.97 cm⁻¹ corresponds to the bending vibration of C-O-C, representing the epoxy group, and a peak at 1224.23 cm⁻¹ is attributed to the bending vibration of the C-OH hydroxyl group [42].
The peak at 1627.08 cm⁻¹ signifies the stretching vibration of the sp² C-C bonds, and the peak at 3413.42 cm⁻¹ is associated with the absorption of O-H groups, indicating a vibration band related to oxygen-binding functional groups and hence the oxidation of graphite [43]. For the rGO material, the peak at 1073.51 cm⁻¹ denotes a stretching vibration of C-O bonds [44]. In addition, the peak at 1588.20 cm⁻¹ corresponds to the absorption of C=C, and the peak at 1623.59 cm⁻¹ indicates C-C absorption. Moreover, the peak at 1732.43 cm⁻¹ is characteristic of the stretching vibration of C=O in carbonyl/carboxyl functionalities, and the peak at 3391.87 cm⁻¹ is associated with the stretching vibration of O-H bonds in water [45].
Comparison of the functional groups in the GO and rGO materials highlighted a more pronounced contribution of carbon groups in rGO, attributable to the reducing agents used during rGO synthesis, which contribute considerably to the carbon content. In rGO, the alkene functional groups exhibited slight shifts, partly due to the reduction inherent in rGO synthesis. The hydrophilic nature of rGO is explained by the prevalence of polar groups, predominantly hydroxyl groups, on the rGO surface. The diminished absorption intensity in the rGO spectrum reflects the successful removal of oxygen-containing functional groups, confirming the effective reduction of GO [46]. Furthermore, this reduction process not only increases electrical conductivity but also improves material ordering by rectifying defects [40]. The TiO2 spectrum demonstrates a distinctive absorption peak at 648.96 cm⁻¹, indicative of the characteristic vibrations of Ti-O-Ti bonds. In the TiO2/rGO composite, an absorption shift is observed owing to the formation of Ti-O-C bonds, which causes the TiO2 absorbance edge to migrate towards longer wavelengths [47]. This shift indicates the predominance of successful bond formation, accompanied by a decrease in the absorption intensity of the Ti-O-Ti bonds.
UV-Vis diffuse reflectance spectroscopy (DRS) characterisation
The band gap energies of both TiO2 and TiO2/rGO were determined by analysing UV-Vis DRS data from a Shimadzu UV-2401 PC (Japan). As illustrated in Fig. 4, the band gap energy of TiO2/rGO, at 3.052 eV, is marginally narrower than that of TiO2, which is 3.15 eV. This reduction is attributed to the integration of rGO within the energy gap of TiO2, resulting in a redshift of the absorption edge from TiO2 to TiO2/rGO [48]. This observation aligns with the findings of Yu and Tang [49], who reported that incorporating rGO into TiO2 decreases the band gap value, measuring 3.14 eV for TiO2 and 3.07 eV for TiO2-rGO. A narrower band gap promotes electron transition from the valence band (lower energy level) to the conduction band (higher energy level), effectively minimising electron-hole recombination [50].
The intersection point on the graph depicting the relationship between (αhν)² and photon energy (eV) serves as the determinant of the band gap value, which corresponds to an absorption edge at a particular wavelength. The band gap energy of TiO2 is 3.15 eV, corresponding to a wavelength of 394 nm, whereas the TiO2/rGO composite exhibits a marginally lower band gap energy of 3.052 eV, equivalent to a wavelength of 408 nm. Notably, TiO2 demonstrates pronounced absorption within the ultraviolet region (200-400 nm). Liu reported that modification with rGO broadens the light absorption, effectively shifting the absorption spectrum of TiO2 towards the visible region (400-800 nm) [47]. These findings underscore that the modification of TiO2 was successfully executed, substantially enhancing the capacity of TiO2/rGO to absorb visible light compared with pristine TiO2.
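The correspondence between band gap energy and absorption-edge wavelength quoted above follows from E = hc/λ; the brief Python sketch below, using the common approximation λ[nm] ≈ 1240/E[eV], reproduces the magnitude of the reported values, although the exact figures depend on how the Tauc-plot intercept is read.

```python
PLANCK_EV_NM = 1239.84  # h*c expressed in eV·nm

def edge_wavelength_nm(band_gap_ev):
    """Absorption-edge wavelength (nm) for a given band gap energy (eV)."""
    return PLANCK_EV_NM / band_gap_ev

for label, eg in [("TiO2", 3.15), ("TiO2/rGO", 3.052)]:
    print(f"{label}: Eg = {eg} eV -> edge ≈ {edge_wavelength_nm(eg):.0f} nm")
# Gives ≈ 394 nm for 3.15 eV and ≈ 406 nm for 3.052 eV, close to the
# 394 nm and 408 nm wavelengths quoted from the Tauc-plot analysis.
```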
Scanning electron microscopy-energy dispersive X-ray (SEM-EDX) characterisation
The SEM analysis elucidates the surface characteristics of the GO, rGO, TiO2 and TiO2/rGO materials, while the EDX analysis reveals the elemental composition of each material; both were performed using a JEOL JSM-6510LA (Japan). Fig. 5 depicts the morphology of each material at a magnification of 10,000×, showcasing their distinctive features. The rGO exhibits a refined texture, resembling thinly coated flakes, and appears more diminutive than the GO material, which is characterised by rough and bulky sheets [51,52]. The relatively coarse texture of the GO layer, compared with rGO, originates from the presence of oxygen-containing functional groups, which lead to surface distortions [53]. This observation underscores the successful synthesis of rGO, marked by a finer material layer.
The integration of the carboxyl functional group (-COOH), originating from pectin compounds in the banana peel extracts, contributes to the successful synthesis of rGO by substituting the oxygen-containing groups with carbon. Morphologically, a marked distinction is evident between the TiO2 and TiO2/rGO materials, characterised by the appearance of distinct white lumps. For TiO2, the surface presents an evenly distributed white layer, whereas the TiO2/rGO exhibits a smoother texture, with the white lumps appearing agglomerated.
The elemental composition of each material was determined through EDX analysis, the results of which are detailed in Table 4. For the GO material, the composition was found to consist solely of carbon and oxygen, without any other elements; this absence of extraneous elements indicates the high purity of the synthesised GO [54]. The carbon in GO and rGO originates primarily from the graphite used as the precursor in GO synthesis, whereas the oxygen is introduced during the oxidation process. Although both GO and rGO consist predominantly of carbon, the carbon content of rGO is higher than that of GO. This increase is attributed to the reduction step of rGO synthesis, which effectively removes the oxygen-containing groups. Moreover, the pectin introduced as a reducing agent contributes additional carbon to the rGO structure, effectively replacing some oxygen bonds with carbon.
In the rGO material, the presence of potassium is attributed to the banana peel extracts used as the reducing agent for graphene oxide [55]. For TiO2, oxygen is the predominant element, followed by titanium and carbon, whereas the TiO2/rGO composite consists primarily of oxygen, carbon and titanium [56]. The carbon content originates from the rGO sheet, the titanium from the TiO2 crystal, and the oxygen from both the TiO2 crystal and the small quantity of oxygen-containing groups remaining on the rGO sheet [57]. A trace amount of aluminium is identified in both TiO2/rGO and TiO2, presumably stemming from the commercially sourced TiO2. Although the purity of the TiO2 cannot be fully guaranteed, the minor presence of aluminium does not impede its efficacy in the photocatalytic process. These observations confirm the successful integration of TiO2 onto the rGO surface, evidenced by a reduction in oxygen content coupled with an elevation in carbon content in the TiO2/rGO composite.
Transmission electron microscopy (TEM) characterisation
The TEM examination, performed with a JEOL JEM-1400 (Japan), yielded detailed insights into the particle morphology of GO, rGO, TiO2 and TiO2/rGO, as depicted in Fig. 6. The morphological features observed by TEM show greater clarity and contrast than the surface morphology obtained from the SEM analysis. As presented in Fig. 6, both GO and rGO display a crumpled and layered morphology characterised by wavy sheets interconnected to form a network, primarily due to the substantial π-π interactions among the surfaces of the graphene layers [35]. Notably, the rGO material exhibits a more pronounced wrinkled texture and a thinner structure, attributable to the removal of oxygen-containing groups [58]. The darker areas within the rGO morphology indicate an increased layer density, a feature attributed to the strong van der Waals forces interlinking the layers [36]. In contrast, the TiO2 particles exhibit an irregular spherical morphology with pronounced agglomeration. Within the TiO2/rGO composite, the TiO2 clusters appear uniformly distributed and deposited on the delicate sheets of the rGO surface [59,60].
The TEM analysis also underscored the heterogeneity of particle sizes in each material, as depicted in Fig. 7 and discernible from the non-uniform morphology. The mean particle size was 202.5 nm for TiO2 and 60.5 nm for TiO2/rGO. The predominance of TiO2, in terms of particle size and mass percentage relative to rGO, creates a propensity towards particle agglomeration. The size of the synthesised TiO2/rGO composite indicates that the material qualifies as a nanoparticle.
Photocatalytic activity
The photoinactivation of E. coli was performed at a concentration ratio of 1:1, while that of S. aureus was conducted at a ratio of 1:9. In the treatments without irradiation, relying solely on the TiO2/rGO material, E. coli did not exhibit clear zone formation on the test plate, whereas a clear zone measuring 5 mm, categorised as a weak inhibition zone, was evident on the S. aureus test plate. This indicates that, without the energy provided by sunlight, the TiO2/rGO material is insufficiently effective against E. coli and S. aureus. Although clear zone formation around the discs in the unirradiated samples was absent or weak, the material's potential as a degrading agent was still evident, indicated by the difference in colour and turbidity between the medium surrounding the material and the rest of the medium. This is attributed to the nature of TiO2/rGO as a photocatalyst, which requires light to optimise its activity.
Notably, in Fig. 8, each irradiation time interval is marked by a distinct clear zone. Irradiation for 2, 4 and 6 h resulted in clear zones with diameters of 22, 28 and 35 mm, respectively. The correlation between extended irradiation time and enhanced elimination of E. coli is apparent: the increasing width of the clear zone indicates that 6 h of sunlight exposure is highly effective in degrading E. coli. This heightened efficacy over 6 h can be attributed to the prolonged interaction with the E. coli cells, which are facultative anaerobes and responsive to factors such as ambient air and adequate sunlight exposure. As a result, the bacterial plasma membrane becomes increasingly susceptible to damage, leading to disruption of the DNA in the E. coli cells.
Variations in irradiation time also produce clear zones against S. aureus, as depicted in Fig. 9. After 2 h, 4 h and 6 h of irradiation, clear zones measuring 20, 22 and 5 mm in diameter, respectively, are observed. S. aureus exhibits a rapid tendency to develop resistance against multiple antimicrobial agents. The most effective timeframe for inactivation is within 4 h, which is attributable to its Gram-positive characteristics; these encompass a single cell plasma membrane surrounded by a dense cell wall comprising predominantly peptidoglycan. Initially, this structural feature makes S. aureus susceptible to specific antibiotic materials; over time, however, S. aureus can adapt and become resistant to the effects of these antibiotics.
Two potential mechanisms could be responsible for this phenomenon: mutations in the bacterial chromosomal DNA or the acquisition of specific genetic material that interferes with the antibiotic's mode of action. Dat et al. [61] studied the influence of contact time on the antibacterial efficiency of Ag/rGO and observed that the density of the bacterial colony remained relatively high from 0 to 240 min, with a marked decrease starting at 720 min at a concentration of 400 μg/mL for S. aureus. This finding highlights the superior temporal efficacy of the TiO2 matrix within the TiO2/rGO composite compared with Ag alone. Although S. aureus, a Gram-positive bacterium, is characterised by a thick cell wall, it required a shorter period for bacterial elimination than E. coli. This difference may be explained by the greater concentration of the composite in the S. aureus disc diffusion medium, facilitating more rapid and efficient penetration of TiO2 through the cell walls [62]. In Gram-negative bacteria, TiO2/rGO is assimilated by lipopolysaccharides, inflicting direct damage on the peptidoglycan layer and enhancing membrane permeability; this sequence of events culminates in the influx of TiO2 ions into the cytosol, ultimately resulting in bacterial cell death. In Gram-positive bacteria, by contrast, TiO2/rGO directly breaches the dense peptidoglycan layer, facilitating the infiltration of TiO2 ions into the cytosol, with similarly lethal outcomes [57,63].
Photocatalytic microbial inactivation under light exposure stimulates ROS production on the surface of the material. The hydrophilic properties of the TiO2/rGO surface foster interactions between the composite material and the bacteria, thereby triggering the photocatalytic effect. Akhavan and Ghaderi [64] demonstrated that E. coli was effectively inactivated by graphene/TiO2 thin-film nanocomposites in aqueous solution under sunlight irradiation. Light irradiation prompts electron excitation in TiO2, leading to the generation of ROS. In bacterial photocatalytic inactivation, rGO serves a role analogous to that in the photocatalytic degradation of organic compounds in water and wastewater, acting as an acceptor of photogenerated electrons.
The high conductivity of rGO expedites electron transport and mitigates the recombination of photoexcited electrons, consequently augmenting the generation of ROS. In addition, rGO extends the range of light absorption into the visible spectrum. The generated ROS disrupt bacterial cell structures, causing alterations in the organic components of the cell wall through redox reactions with released compounds. The incorporation of rGO considerably enhances the quantum efficiency of the microbial inactivation process [65,66]. The interaction between ROS (H2O2, O2•, OH• and H2O•) and the sulfhydryl (-SH) groups in proteins, as well as with nucleotide base pairs within DNA, leads to DNA degradation. This bacterial photoinactivation mechanism is illustrated in Fig. 10.
In addition to the bactericidal properties of TiO2 and rGO, the quantification of ROS is pivotal in elucidating the bacterial inactivation mechanism. ROS, which are generated both externally and internally, can damage bacterial cells: externally generated ROS attack the cell membrane, while internally generated ROS disrupt proteins, DNA and organelles within the cell. Thabet et al. [67] emphasised the antimicrobial effects of TiO2 on Saccharomyces cerevisiae, noting that TiO2 can penetrate cellular membranes through irregularities in the cell wall, thereby causing oxidative damage. Zhou et al. [68] examined the antibacterial efficacy of a hybrid catalyst comprising rGO and TiO2, underscoring that ROS, particularly OH•, can disintegrate cell membrane components and compromise membrane integrity. The formation of overlapping d-π electron orbitals, facilitated by chemical bonding interactions, allows repeated electron excitation; electrons are then transferred to the rGO surface, where they react with oxygen and water to generate radicals. These ROS are recognised for their crucial role in the observed oxidative activity. Free radicals, as reactive chemical intermediates with one or more unpaired electrons, can inflict cellular damage by donating these unpaired electrons to nearby cellular structures, leading to the oxidation of cell membrane lipids and amino acids. In photocatalytic antibacterial mechanisms, OH• is known to initiate lipid peroxidation in the outer cell wall, culminating in damage to cellular organelles and DNA [69,70].
Over the past decade, extensive research has focused on the antimicrobial efficacy of rGO-based materials, as summarised in Table 5. The photocatalytic system combining TiO2 and rGO, with banana peel extracts as the reducing agent, exhibited notable success in neutralising both E. coli and S. aureus, as evidenced by substantial inhibition zones of 35 and 22 mm for the two microorganisms, respectively. Achieving the largest inhibition zone within a relatively short period and under sunlight illumination highlights the potency and effectiveness of this photocatalyst. Employing banana peels as an eco-friendly reducing agent not only enhances the economic and ecological appeal of this photocatalytic system but also supports its sustainability and safety.
Conclusions
The TiO2/rGO composite was successfully biosynthesised using banana peel extracts (Musa paradisiaca L.). The procedure commenced with the synthesis of GO using a modified Tour method, proceeded with the synthesis of rGO by sonication in the presence of banana peel extracts, and culminated in the hydrothermal synthesis of TiO2/rGO. Characterisation of the TiO2/rGO composite by XRD revealed a crystallite size of 88.505 nm, with TiO2 in the rutile phase. GSA characterisation indicated type IV mesoporosity in TiO2/rGO, characterised by a pore size distribution ranging from 2 to 50 nm and a surface area of 22.664 m²/g. The FT-IR spectrum confirmed the presence of the characteristic functional group of TiO2, the Ti-O-Ti bond, at a wavenumber of 648.96 cm⁻¹. UV-Vis DRS analysis revealed that TiO2 exhibited a band gap energy of 3.15 eV, whereas TiO2/rGO demonstrated a marginally reduced band gap energy of 3.052 eV. SEM images revealed agglomerated white lumps of TiO2 dispersed uniformly across the rGO surface, and TEM analysis delineated the structural features of the TiO2/rGO composite, showing sheets of rGO and spherical aggregates of TiO2 with an average particle size of 60.5 nm. The optimal photocatalytic performance of TiO2/rGO in degrading E. coli was attained under sunlight exposure for 6 h, whereas for S. aureus the peak photocatalytic activity was observed under sunlight for 4 h.
Table 1
The average crystal size of materials.
Table 2
Textural properties of the materials.
Table 3
Functional groups of materials.
Table 4
Constituent elements present in materials. | 2024-03-01T05:16:14.908Z | 2024-02-17T00:00:00.000 | {
"year": 2024,
"sha1": "eed47949472acc6b7663549b8ad5af1a0a6c4434",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "eed47949472acc6b7663549b8ad5af1a0a6c4434",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6229657 | pes2o/s2orc | v3-fos-license | PMD2HD – A web tool aligning a PubMed search results page with the local German Cancer Research Centre library collection
Background Web-based searching is the accepted contemporary mode of retrieving relevant literature, and retrieving as many full text articles as possible is a typical prerequisite for research success. In most cases only a proportion of references will be directly accessible as digital reprints through displayed links. A large number of references, however, have to be verified in library catalogues and, depending on their availability, are accessible as print holdings or by interlibrary loan request. Methods The problem of verifying local print holdings from an initial retrieval set of citations can be solved using Z39.50, an ANSI protocol for interactively querying library information systems. Numerous systems include Z39.50 interfaces and therefore can process Z39.50 interactive requests. However, the programmed query interaction command structure is non-intuitive and inaccessible to the average biomedical researcher. For the typical user, it is necessary to implement the protocol within a tool that hides and handles Z39.50 syntax, presenting a comfortable user interface. Results PMD2HD is a web tool implementing Z39.50 to provide an appropriately functional and usable interface to integrate into the typical workflow that follows an initial PubMed literature search, providing users with an immediate asset to assist in the most tedious step in literature retrieval, checking for subscription holdings against a local online catalogue. Conclusion PMD2HD can facilitate literature access considerably with respect to the time and cost of manual comparisons of search results with local catalogue holdings. The example presented in this article is related to the library system and collections of the German Cancer Research Centre. However, the PMD2HD software architecture and use of common Z39.50 protocol commands allow for transfer to a broad range of scientific libraries using Z39.50-compatible library information systems.
Background
The mainstream realm of biosciences research is comprised, from the bibliographic point of view, within the accumulated indexing efforts of the United States National Library of Medicine (NLM). The PubMed [1] database currently maintained by NLM indexes 4571 journals with over 13 million articles [2] (verified on 11-Jan-2005). PubMed's web-based interface delivers comprehensive and usable access to nearly all relevant publications and even provides the option to find articles related to the initial search result. PubMed can be searched using author name(s), keywords, and other criteria as search topics. The results page displays brief information about the retrieved articles (authors, title, journal, PubMed ID) in list form. Individual resulting articles can be marked, and an individual results page can be displayed with selected articles. Opening an author link displays an abstract and typically indicates whether an electronic full text is available. Electronic article availability is dependent on the searcher's institutional affiliation as expressed in the computer's Internet protocol (IP) identity. Systems like LinkOut [3] and PubMed's Outside Tool [4] can provide electronic access to subscribed articles on opening the respective links article by article. Normally this selected display of relevant articles completes the task of searching PubMed.
A subsequent step in the research workflow is the acquisition of the full texts of relevant articles that lack direct links. The individual and institutional subscription situation offers several common alternatives, such as a local library print or e-journal holding confirmed through the local catalogue. A final resort may be a time-consuming and costly loan request.
Figure 1. PubMed results page, with selected articles marked in PubMed using control-a.
Methods
The web tool PMD2HD [5] has been developed to close a knowledge inequity: how can articles without full-text links embedded in PubMed be automatically interpreted for local holding status? The case for a local holding discovery tool was refined using the German Cancer Research Centre library collection. However, the tool can be adapted for use in different library systems or even for searching several collections within a network of federated libraries.
In the scenario envisioned for the use of PMD2HD, a researcher carries out a PubMed search as usual, finishing by marking (select all) the complete results web page with the shortcut command control-a (Windows™) and copying it with control-c into the clipboard (Figure 1). (This copies the complete list to the clipboard; if the user selects only a few articles by mouse click and copies them with control-c, it works in the same way.) In the next step the complete clipboard content is pasted (control-v) into the text box of the PMD2HD tool web page (Figure 2). Pressing the [Check!] button triggers further data processing (Figure 3). A fundamental premise during design and development was that the tool should be as easy to use for scientists as possible and integrated into their standard workflow.
PMD2HD is written in PHP [6], a very common scripting language for web-based applications. The program starts by extracting all PubMed IDs (of the form "PMID: 15584372") from the entered text and arranging them in a list. Using this list, the program queries the PubMed E-Utilities to retrieve a structured record in XML format [7] for each entry. Table 1 displays an example XML record.
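PMD2HD itself is implemented in PHP; purely as an illustration of these first two processing steps (extracting PMIDs from the pasted results page and fetching the corresponding XML records via the E-Utilities), a minimal Python sketch might look as follows. The regular expression and the efetch call mirror the description above, while details such as batching and error handling are simplified assumptions.

```python
import re
import urllib.request
from urllib.parse import urlencode

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def extract_pmids(pasted_text):
    """Pull all 'PMID: 12345678' identifiers out of a pasted results page."""
    return re.findall(r"PMID:\s*(\d+)", pasted_text)

def fetch_pubmed_xml(pmids):
    """Retrieve the structured PubMed records (XML) for a list of PMIDs."""
    params = urlencode({"db": "pubmed", "id": ",".join(pmids), "retmode": "xml"})
    with urllib.request.urlopen(f"{EFETCH}?{params}") as resp:
        return resp.read()

if __name__ == "__main__":
    page = "1: Example article. PMID: 15584372 [PubMed - indexed for MEDLINE]"
    pmids = extract_pmids(page)
    xml = fetch_pubmed_xml(pmids)   # the ISSN is then parsed from this XML
    print(pmids, len(xml), "bytes of XML")
```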
The key piece of information in this record is the ISSN. The ISSN serves as the search argument for querying the local library system via the Z39.50 interface (at the German Cancer Research Centre, the Horizon system from Ameritech Library Services is used).
Z39.50 [8,9] is an ANSI standard. Dating from 1996, this protocol has been specified for communication between library database systems and enables searching in heterogeneous databases. The Z39.50 protocol ensures independence from the queried catalogue database, query syntax and operating system. If the catalogue system finds an entry for a requested ISSN, it returns all information about this entry as a record that is structured according to USMARC. One of the record items is the holding record of the journal subscription. Next, the web tool checks if the publication year of the article is in the Library's journal holding range. If so, the catalogue database is queried again via Z39.50 to get all information for this journal.
One drawback for the universal applicability of this Z39.50 method is that some ejournals cannot be found by an ISSN search, so that occasionally a journal name lookup in the online catalogue is necessary. However, a Z39.50 journal name query is not performed as a string-matching search; instead, all names a journal has ever had, including name changes, are returned to the web tool and have to be parsed. Finally, a results page is generated showing all the data collected in the two preceding steps. As PMD2HD prepares a unified list of articles with respect to both the electronic and print holdings of the local library, it can be regarded as an efficient tool that reduces the number of manual user actions needed for list collection and holding verification.
Results
The PMD2HD results page (Figure 4) combines the main article data in a table, with a second column that describes holding and location information. Five possible actions are presented: 1. If the PubMed entry contains a DOI [10], a direct link to the full text of the article is provided. The full text can be accessed if the organization (or user) is a subscriber of this journal; this means that access is not always possible despite having found the URL of the full text.
2. The holding record of the respective print medium, retrieved from the local library system, is displayed.
3. A link to the related ejournal is provided.
4. An indication 'Online?' is shown when the web tool cannot fetch an ejournal address from the online catalogue. This advises the user to check the bibliographic data manually when the stored names for the print journal and the ejournal are not exactly the same. 5. A link to the loan request web form. If the holding record of a print journal or an ejournal link is found, a link to the loan request page is not necessary.
Figure 2. PMD2HD web tool. The user can paste the content from the clipboard directly into the form.
For the link to the loan request page, the article data are filled in automatically from the retrieved data fields, and the users only have to enter their names, email addresses and local account number (Figure 5). The ISSN automatically inserted in the comment field helps the library staff dealing with the loan request. (An in-house electronic document delivery service is planned for the future.)
Discussion
PMD2HD is an easy-to-use web-based tool offering a usable interface and direct integration into the workflow of a typical PubMed search task. PMD2HD also directs the Z39.50 search protocol, for citation verification purposes, straight at the library catalogue. PMD2HD unifies two different technical principles of data transmission: the connection-oriented Z39.50 and the connectionless client-server hypertext transfer (HTTP) data exchange.
Figure 3. Data processing. The figure shows the order of the different steps. First the user selects his articles and then the data processing is triggered. The web tool asks the PubMed utilities (Entrez Programming Utilities, http://eutils.ncbi.nlm.nih.gov/entrez/query/static/eutils_help.html) to retrieve the ISSN. With this item the Horizon system is queried. If the journal is available, the system is called again to find out if an ejournal entry is set. With all this information a results page is generated in a last step.
Figure 4. Results page of the web tool. The results page lists, next to the article data, the information on how to access it.
PMD2HD provides the data storage that connectionless protocols cannot, in terms of storing session parameters or preliminary results. Using the dedicated protocol Z39.50 overcomes the problem of inconsistencies in query languages, data formats and display styles that would ordinarily hinder a web tool's interactivity with a local library management system. PMD2HD, with small modifications, could be further adapted for library systems with support for the Z39.50 interface, particularly focused scientific libraries and collections. In fact, the PMD2HD application could expand beyond local application to perform a cascading sequence of subsequent transactions, scalable either to a formal network of libraries or to the individual needs of scientists who have access to several libraries. The common denominator requirement for alternative implementation is Z39.50 support. A preliminary investigation in Germany demonstrated that there are two large library networks and four important libraries, including 'Die Deutsche Bibliothek', offering access via Z39.50. Examples all over the world include BIBSYS, the Library of Congress and numerous university libraries, and CURL (Consortium of University Research Libraries) in the UK [11]. A test of some of the mentioned libraries showed that access was possible. An additional advantage of Z39.50 is that it does not require any commercial software investment other than the library system already in use. Development and testing of Z39.50-based routines can be performed using YAZ, a Z39.50 client that is freely available from Index Data, Copenhagen [12].
How does PMD2HD [5] compare with existing tools such as LinkOut [3] or Outside Tool [4]? As a free PubMed service, LinkOut requires that a library uploads a list of its subscribed journal titles. The user of a certain library selects that library in his myNCBI settings (by using a so-called filter); more than one library can be selected. The user can then perform a search, and on displaying the abstract page the icons of subscribed journals appear. PubMed's Outside Tool, in contrast to LinkOut, shows an icon that provides a link to an external program to which the PubMed ID is submitted; the program can fetch information about the article by using the PubMed ID. The service of keeping holdings up to date with Outside Tool is normally offered by a company, and the library has to maintain its subscription list with that company. The Outside Tool service of PubMed is free, but the service of the companies is not. Neither service interactively checks the local library system for holdings; both instead rely on an uploaded list of local holdings. The PMD2HD service is independent from PubMed, yet dependent on the immediacy that a Z39.50 interface to the library system provides. PMD2HD's greatest advantage is the one-page condensed listing and the inherent simplicity of that format. In contrast, neither LinkOut nor Outside Tool integrates the holding results of multiple records onto a single result page. The PMD2HD integrated holdings output list can be printed as a to-do list for when a library visit is necessary.
Figure 5. Web form for the loan request. The article data are filled automatically into the different form fields; the user only needs to enter his personal data such as name or email.
Figure 6. Icon gallery at the PubMed abstract site with an activated LinkOut and Outside Tool.
Conclusion
Optimal benefit from PubMed literature retrieval can only be achieved when as many relevant articles as possible can be acquired in full text form as quickly as possible. PMD2HD has been created as a web-accessible tool to perform this task on the library collection of the German Cancer Research Centre. PMD2HD helps scientists to obtain literature as quickly and effortlessly as possible, and in addition it saves the time and money of unnecessary interlibrary loan transactions. PMD2HD can be implemented with other integrated library systems or used for retrieval within a network of federated libraries, as its underlying protocol Z39.50 is very widespread among scientific libraries. Use of Z39.50 prevents retrieval failure due to incompatibilities of query syntax and result display forms. In order to use the Outside Tool, the library has to register the Outside Tool at PubMed.
Figure 7. PubMed LinkOut dataflow. The library uploads a list of subscribed journals to PubMed; after a PubMed search, the result shows whether a journal is subscribed, a full-text link (if available) for subscribed journals, and library information. In order to use the LinkOut function, data about subscribed journals have to be uploaded by the library; the user can select his appropriate library by setting a filter.
"year": 2005,
"sha1": "85e01ae9d29e6f036b39556af19ec3ea990ec90b",
"oa_license": "CCBY",
"oa_url": "https://bio-diglib.biomedcentral.com/track/pdf/10.1186/1742-5581-2-4",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "471361fbe63501e603e1f8ee6da43aa6ce6aa268",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
269360821 | pes2o/s2orc | v3-fos-license | Potential Use of L-Arginine Amino Acids towards Proliferation and Migratory Speed Rate of Human Dental Pulp Stem Cells
Objective L-arginine is a semi-essential amino acid produced by the body which has an important role in the process of stem cell regeneration. However, under inflammatory conditions, denaturation of pulp amino acids and proteins occurs, resulting in a decrease in the ability of stem cells to self-renew. Therefore, in this study, L-arginine was added in vitro to the culture medium Dulbecco's Modified Eagle Medium (DMEM) of human dental pulp stem cells (hDPSCs) to analyse the potential of L-arginine on migration and proliferation by comparing three concentrations, namely 300, 400 and 500 μmol/L, with a control group (DMEM), in order to obtain the optimal concentration for proliferation and migration. Methods Serum-starved hDPSCs were divided into four groups: control (hDPSCs in DMEM); hDPSCs in 300 μmol/L of the L-Arginine based culture media group; hDPSCs in 400 μmol/L of the L-Arginine based culture media group; and hDPSCs in 500 μmol/L of the L-Arginine based culture media group, which were added to two separate 24-well plates (5×10⁴ cells/well) for proliferation and migration evaluation. The proliferation of all groups was measured using a cell count test (haemocytometer and manual counting) after 24 h. The migratory speed rate of all groups was measured using a cell migration assay (scratch wound assay) after 24 h. Cell characteristics were evaluated under the microscope and then analysed using ImageJ®; this ImageJ analysis provided the migratory speed rate (nm/h) data. Statistical analysis was conducted using one-way ANOVA with post hoc Bonferroni (p<0.05) for proliferation and post hoc LSD (p<0.05) for migration. Results There was a statistically significant difference in hDPSCs proliferation among the various concentration groups of the L-Arginine based solution (300, 400 and 500 μmol/L) compared to the control group (p<0.05). There was a statistically significant difference in the migratory speed rate of hDPSCs at 500 μmol/L of the L-Arginine based solution compared to the lower concentrations and the control group (p<0.05). Conclusion All three concentrations of L-arginine can induce proliferation of hDPSCs. L-arginine at 500 μmol/L can induce higher hDPSCs proliferation and faster migration at 24 hours compared to lower concentrations and control.
INTRODUCTION
The pulp responds to stimuli by generating reactive oxygen species (ROS) and inducing an inflammatory response. Oxidative stress induced by various stimuli can increase the expression of pro-inflammatory factors such as tumor necrosis factor alpha (TNF-α) and interleukin-1 beta (IL-1β) (1). The presence of oxidative stress can cause tissue regeneration or necrosis, depending on its duration and intensity. ROS regulate cellular homeostasis and act as major modulators of the cellular dysfunction that contributes to the pathophysiology of disease (1). ROS play a role as a centre for the regulation of inflammatory signaling, especially nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κβ) (2). At lower concentrations, ROS have positive effects, namely tissue repair and cell differentiation. However, at higher concentrations, ROS induce uncontrolled cell activity, leading to mitochondrial dysfunction, mutagenesis and apoptosis (3).
A total of 123 proteins have been identified in the dental pulp: 66 in healthy pulp, 66 in inflamed pulp, and 91 in necrotic pulp (4). Amino acids are the building blocks of proteins and also play an important role in pulp biological processes, such as cell communication, growth and maintenance, immune responses, metabolism, and transportation (5). The number of proteins and amino acids decreases in inflamed pulp, associated with amino acid sequence alteration due to the activation of the mitogen-activated protein kinase (MAPK) pathway by ROS (6).
L-Arginine is a semi-essential amino acid, the production of which is insufficient under oxidative stress and inflammation. It is mainly found in human cells, organs and body fluids, including saliva (7). L-Arginine based medications have been developed as wound dressings and food supplements (8). L-Arginine can be obtained endogenously by synthesis from citrulline and exogenously from food, such as meat and beans (7). L-Arginine is cationic in nature; hence, it has antibacterial properties and supports wound-healing bioactivity by modulating inflammatory reactions, collagen formation, angiogenesis, and tissue remodeling (9). It is the sole substrate for nitric oxide synthase (NOS) to produce nitric oxide (NO) (10). NO is known to have an important role in the proliferation and migration of fibroblasts (11).
Wound healing is initiated by hemostasis and followed by cell migration to the wounded area to begin proliferation (12). The cell migration process can be quantitatively measured as the migration rate, by comparing the difference between the initial and final wound lengths with time (13). Rhoads et al. (14) showed that the addition of 400 μmol/L of extracellular L-Arginine in fetal bovine serum (FBS) increased the migration rate 1.7-fold. A significant increase in intestinal stem cell migration was observed at a minimum concentration of 200 μmol/L; the maximum effect was observed at 400 μmol/L (15).
Cell proliferation is a fundamental, complex and organised biological process in which two daughter cells result from one cell (15). It is an increase in the number of cells after cell division and can be measured by different methods, such as cell counts, BrdU and the MTT assay (15,16). Tan et al. (17) reported that the addition of 100 μmol/L and 350 μmol/L of L-Arginine increased intestinal stem cell proliferation after 48 and 96 hours. Kim et al. (18) showed that the most effective trophectoderm cell proliferation occurred with 200 μmol/L of L-Arginine after 4 days. Fujiwara et al. (19) studied L-Arginine supplementation in relation to skin fibroblast proliferation after 6, 12 and 24 hours; maximum proliferation was observed in the 600 μmol/L L-Arginine group after 12 and 24 hours (19).
The present study aimed to analyse the potency of different concentrations of L-Arginine added to culture media on the proliferation and migration of hDPSCs.
MATERIALS AND METHODS
This study was approved by the Ethics Committee on March 7th, 2022 (approval No. 3/Ethical Exempted/FKGUI/III/2022; protocol No. 050190222). The hDPSCs were cultured from previous research (approval No. 49/Ethical Approval/FKGUI/IX/2020 (amendment); protocol No. 070260820), and the study was conducted in accordance with the Declaration of Helsinki. The experiments were carried out at the Prodia Stem Cell (ProStem) Laboratory, Jakarta, Indonesia. This in vitro study consisted of several stages: hDPSCs culture and serum starvation induction, dilution of the L-arginine based solution, and division of the hDPSCs for the migration test and the proliferation test.
Human Dental Pulp Stem Cell Culture
The hDPSCs at passage 5 (cultured from biologically stored raw material from a previous study) were incubated in a humidified atmosphere of 5% CO2 at 37°C until reaching 80% confluence. Cells were then starved by replacing the cell culture supplement with DMEM (Gibco, Thermo Fisher Scientific, Massachusetts, USA) containing 1% FBS (Gibco, Thermo Fisher Scientific, Massachusetts, USA) for 24 hours.
Migration of hDPSCs
The migration test procedure was based on a previous study of human dental pulp stromal cell migration. hDPSCs were seeded into 24-well plates (NUNC, Thermo Fisher Scientific, Massachusetts, USA), each well containing 2×10⁵ cells. A single continuous scratch was made on the confluent cell surface using a P200 pipette tip (Dragon Lab, DLAB, Beijing, China) to create a 'wound'. Initial wound lengths were observed under microscope magnification and measured using ImageJ Software (ImageJ, U.S. National Institutes of Health, Maryland, USA). After that, 300, 400 or 500 μmol/L of L-Arginine solution was added to the respective groups, which were then incubated in a humidified atmosphere of 5% CO2 at 37°C. After 24 hours, final wound lengths were measured using the same software, and migration rates were calculated by dividing the difference between the initial and final wound lengths by time (24 hours).
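The migration speed rate used here and in the Results section is simply the wound closure per unit time; the small Python sketch below illustrates the calculation with hypothetical wound-length measurements (the numbers are placeholders, not data from this study).

```python
def migration_rate(initial_length_um, final_length_um, hours):
    """Wound closure per hour: (initial - final) wound length divided by time."""
    return (initial_length_um - final_length_um) / hours

# Placeholder example: a scratch narrowing from 800 µm to 200 µm over 24 h
rate = migration_rate(800.0, 200.0, 24.0)
print(f"Migration speed rate: {rate:.1f} µm/h")  # 25.0 µm/h
```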
Human Dental Pulp Stem Cell Proliferation
hDPSCs were seeded into 24-well plates, each well containing 2×10⁵ cells. L-Arginine solution at 300 μmol/L, 400 μmol/L or 500 μmol/L was added to the respective groups, which were incubated in a humidified atmosphere of 5% CO2 at 37°C. After 24 hours, quantitative analysis was performed using a cell count test with a hemocytometer (Blaubrand™ Neubauer Counting Chamber, Thermo Fisher Scientific, Massachusetts, USA) and manual counting under microscope magnification (Carl Zeiss, Zeiss, Oberkochen, Germany).
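For readers unfamiliar with hemocytometer counting, the standard Neubauer-chamber conversion from counted cells to a concentration is cells/mL = (mean count per large square) × dilution factor × 10⁴; the sketch below illustrates this calculation with hypothetical counts, which are placeholders rather than data from this study.

```python
def cells_per_ml(counts_per_square, dilution_factor=1.0):
    """Neubauer hemocytometer estimate: mean count per 1 mm x 1 mm square
    times the dilution factor times 1e4 (each large square holds 0.1 µL)."""
    mean_count = sum(counts_per_square) / len(counts_per_square)
    return mean_count * dilution_factor * 1e4

# Placeholder example: four corner squares counted, no dilution
print(cells_per_ml([52, 48, 50, 46]))  # -> 490000.0 cells/mL
```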
Statistical Analysis
Data analysis was performed using IBM SPSS Statistics Software (version 26.0, IBM Corp., New York, USA). Two different statistical analyses were used: hDPSCs migration data were analysed using one-way ANOVA followed by the LSD post hoc test to compare the study groups, and hDPSCs proliferation data were analysed using one-way ANOVA followed by the Bonferroni post hoc test to identify differences between groups. The level of significance was set at 0.05 (p=0.05).
Human Dental Pulp Stem Cells Migration
For the migration analysis, the measurements were performed by a single observer; the inter-rater reliability test gave an intraclass correlation coefficient (ICC) of 0.946 (IBM SPSS Statistics Software, version 26.0, IBM Corp., New York, USA). In this study, wound lengths were measured shortly after scratching (initial wound length) and after 24 hours (final wound length). Microscopic images (Fig. 1) were obtained during observation and then processed and measured using ImageJ Software (ImageJ, US National Institutes of Health, Maryland, USA). The narrowest final wound length was observed in the 500 μmol/L group, at 1.09×10⁻³ μm. One-way ANOVA indicated significant differences between the test groups (p<0.05), and LSD post hoc analysis showed a significant difference between the final and initial wound lengths in the 500 μmol/L group (p<0.05). The migration speed rate was calculated by dividing the difference between the initial and final wound lengths by time (24 hours). According to this formula, the rate of hDPSCs migration was 7.7 times faster in the 500 μmol/L group than in the control, whereas the 300 and 400 μmol/L groups were not significantly different from the control group (Table 1 and Fig. 2).
Human Dental Pulp Stem Cells Proliferation
For the proliferation analysis, the measurements were performed by a single observer; the inter-rater reliability test gave an intraclass correlation coefficient (ICC) of 0.996. There were significant differences (p<0.05) in the potency of the various concentrations of the amino acid L-Arginine on the proliferation of hDPSCs (Table 2). The highest average cell count was obtained with the 500 μmol/L L-Arginine culture medium.
DISCUSSION
This study examined the potency of the amino acid L-Arginine on hDPSCs. L-Arginine acts as a central metabolic module, and its metabolism participates directly and indirectly in many biological phenomena such as vasodilation, calcium release, adenosine triphosphate (ATP) regeneration, neurotransmission, cell proliferation and the immune system (9). The purpose of this study was to determine the differences in potential of various concentrations of L-Arginine amino acid solution on the proliferation and migration of hDPSCs. hDPSCs were used in this study because of their important role in tissue regeneration and their immunomodulatory functions, which control inflammation to achieve tissue homeostasis (20). hDPSCs have a rapid proliferation rate, are non-specialised cells, and can differentiate into other cell types such as osteoblasts, odontoblasts, chondrocytes, neurons and adipocytes (21). hDPSCs have shown greater proliferation than other stem cells such as bone marrow stem cells (BMSCs) and adipose-derived stem cells (ASCs) (22). In this study, serum starvation was also performed on the hDPSCs to mimic the biological niche of pulpal inflammation (reversible pulpitis), so that the proliferation potential of hDPSCs could be determined after treatment with L-Arginine.
The migration of hDPSCs towards a wound area involves a complex process that is not fully understood. Two of the many signalling pathways expected to play a role in hDPSCs migration are the stromal cell-derived factor-1/CXC chemokine receptor-4 (SDF-1/CXCR4) axis and focal adhesion kinase (FAK) (23). L-Arginine is the sole substrate of nitric oxide synthase (NOS) for the secretion of nitric oxide (NO) (9). NO is a reactive free radical; as reported previously (24), the addition of 50-500 μmol/L of extracellular L-Arginine can increase NO synthesis in various cells. NO can activate the focal adhesion kinase (FAK) signalling pathway, which internalises integrin and regulates focal adhesion assembly and disassembly (23). For a cell to migrate, there should be an interaction between focal adhesion (FA) molecules and actin molecules of the cytoskeleton; cell migration is disrupted if the FAK pathway is inhibited (8).
Polyamine, one of the L-Arginine metabolites, is needed during the migration process owing to its ability to create lamellipodia, stress fibres and integrin subunits (11).
In this study, the migration rate of hDPSCs treated with 500 μmol/L of L-Arginine was significantly faster than that of the control, whereas the 300 and 400 μmol/L groups showed no statistical differences from the control. According to Rhoads (2004), at a minimum concentration of 200 μmol/L the migration of intestinal stem cells rose significantly, and then declined at a concentration of 2000 μmol/L (14). This might be influenced by the absorption of L-Arginine from the various concentrations into the cells. The intracellular L-Arginine concentration is known to be higher than that of the extracellular microenvironment, ranging from 100-1000 μmol/L (9), whereas the concentration of L-Arginine in plasma varies from 72.4 to 113.7 μmol/L (25). Cellular absorption of L-Arginine cannot occur passively, since the lipid membrane is not permeable to the cationic L-Arginine molecule; hence, an active cationic amino acid transport system is needed, such as the cationic amino acid transporters-1 and -2 (CAT-1 and -2) (9). No previous studies have reported the minimum concentration of L-Arginine needed to stimulate hDPSCs migration; therefore, further studies should be done to find the optimum dose.
From this study, the highest proliferation of hDPSCs average yield was obtained at the concentration of L-Arginine amino acid culture medium 500 mol/L.This shows that the amino acid L-Arginine supports the potential for cell proliferation.In this case, hDPSCs were observed at 24 hours which is the time for cells to cycle through the G1-S-G2-M phase (27).The cell cycle is the process by which all cells reproduce.The speed of the cell cycle depends on the stage of development and the type of cell.The cell cycle is divided into 2 basic phases, namely Interphase and M phase (mitosis), with the full cell cycle in about 24 hours.Duration of G1 lasts for 8-10 hours, S phase is 6-8 hours, G2 usually lasts for 4-6 hours and M phase less than an hour (28).
In proliferation and migration evaluation, the control group (DMEM) and the L-Arginine group of 300, 400 and 500 mol/L showed significant differences.This can occur because L-Arginine functions as a precursor to various biologically active compounds such as NO, ornithine, proline and polyamine, cre-atine to phosphocreatine, and agmatine.Ornithine is a stimulator of cell growth and differentiation.L-Arginine is the only substrate used for NO synthesis and NO is synthesized during inflammatory conditions (10).L-Arginine is converted to NO through the action of NOS, while polyamines are produced through the conversion of L-Arginine to ornithine via arginase (12).These polyamines and NO have important roles in cellular processes and cell signaling.NO and polyamines stimulated cell proliferation and had a positive effect on progression through the cell cycle (26).Thus, the proliferation potential of hDPSCs in the L-Arginine group was significantly different from that of the control group.
L-arginine supplementation is important in wound healing, facilitated by NO synthesized from L-arginine through the action of NOS during the wound healing process (29).L-arginine stimulates fibroblast proliferation through G protein-coupled receptor family C group 6 member A -extracellular signal-regulated kinase 1/2 (GPRC6A-ERK1/2) pathway and phosphoinositide 3 kinase -protein kinase B (PI3K/AKT) pathway, regulates the cytokine environment in the wound area, produces an anti-apoptotic effect, and reduces interleukin 6 (IL-6) present in the wound area so that L-arginine reduces the harmful effects of increased inflammation in the wound.wound area (19).In addition, L-arginine supplementation increases protein synthesis, decreases protein degradation and has a protective effect against lipopolysaccharides (LPS) induced damage through the mechanic target of rapamycin (mTOR) and toll-like receptor 4 (TLR4) signaling pathways (17).
CONCLUSION
This study shows that 500 μmol/L of L-Arginine can induce higher hDPSCs proliferation and migration in 24 hours compared to lower concentrations and control. | 2024-04-26T06:18:23.369Z | 2024-04-25T00:00:00.000 | {
"year": 2024,
"sha1": "e57e001557f58f0e1fe01e95fd94485370cc4278",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.14744/eej.2023.54376",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3bda211325e0aa41b81074c4db2f4bddd68a1a6a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242844588 | pes2o/s2orc | v3-fos-license | Construction and Analysis for Dysregulated lncRNAs and mRNAs in Lipopolysaccharide-Induced Porcine Peripheral Blood Mononuclear Cells
Background: Lipopolysaccharide (LPS) in the outer membrane of Gram-negative bacteria induces an intense inflammatory response in pigs. Long noncoding RNAs (lncRNAs) are emerging as key regulators in inflammation and immunity. However, their functions and profiles in LPS-induced inflammation in pigs are largely unknown. In this study, we profiled global lncRNA and mRNA expression changes in porcine peripheral blood mononuclear cells (PBMCs) treated with lipopolysaccharide (LPS) using the lncRNA-seq technique. Result: In total, 43 differentially expressed (DE) lncRNAs and 1082 DE mRNAs were identified in the porcine PBMCs after LPS stimulation. Functional enrichment analysis of the DE mRNAs indicated that these genes were involved in inflammation-related signaling pathways, including cytokine-cytokine receptor interaction, TNF, NF-κB, Jak-STAT and TLR signaling pathways. Additionally, co-expression network and function analysis identified potential lncRNAs related to inflammatory response and immune response. The expression of eight lncRNAs (ENSSSCT00000045208, ENSSSCT00000051636, ENSSSCT00000049770, ENSSSCT00000050966, ENSSSCT00000047491, ENSSSCT00000049750, ENSSSCT00000054262 and ENSSSCT00000044651) was validated in the LPS-treated and non-treated PBMCs by quantitative real-time PCR (qRT-PCR). In LPS-challenged piglets, we identified that three lncRNAs (ENSSSCT00000051636, ENSSSCT00000049770, and ENSSSCT00000047491) were significantly up-regulated in liver, spleen and jejunum tissues after LPS challenge, which revealed that these lncRNAs might be important regulators of inflammation. Conclusion: This study provides the first lncRNA and mRNA transcriptomic landscape of LPS-mediated
LPS-stimulated cells, and aggravates LPS-induced injury possibly via sponging miR-34a [7]. LncRNA-MALAT1 is induced by IL-6 in LPS-treated cardiomyocytes and its overexpression can enhance TNF-α expression via activation of SAA3 [8]. The LPS-induced lncRNA Mirt2 functions as a repressor of inflammation through inhibition of TRAF6 oligomerization and auto-ubiquitination [9]. LncRNA GAS5 reverses LPS-induced inflammatory injury in ATDC5 chondrocytes by inhibiting the NF-κB and Notch signaling pathways [10]. However, the definition of functional lncRNAs in pigs is still limited, partly due to their low sequence conservation and lack of identified shared properties across species [11].
Considering the role of lncRNAs in the inflammatory response, the present study was designed to discover and explore the lncRNA expression profile of porcine PBMCs in response to LPS. To date, there has been no systematic attempt to identify the lncRNAs whose expression is changed after the induction of the innate immune response in pigs. This analysis of lncRNA expression changes in the porcine PBMCs after LPS stimulation would contribute to the current knowledge of lncRNA functions in Gram-negative bacterial infection disease pathogenesis in pigs.
Quantitative RT-PCR Total RNA was isolated from cell and tissue samples using Trizol (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. cDNA synthesis and quantitative real-time PCR (qRT-PCR) were carried out as previously described [13]. Expression of mRNA and lncRNA was analyzed using the Applied Biosystems 7500 Real-Time PCR system (Applied Biosystems, Foster City, CA, USA). Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used as an internal normalization control. All data were analyzed using the 2^-ΔΔCT method [14]. Sequences of specific primers are shown in Additional file 1.
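As a worked illustration of the relative quantification just described, the short Python sketch below applies the 2^-ΔΔCT formula with GAPDH as the normalizer; the Ct values are hypothetical and are not taken from this study.

```python
# Minimal sketch of the 2^(-ddCt) relative quantification method, with hypothetical Ct values.
def relative_expression(ct_target_treated, ct_gapdh_treated,
                        ct_target_control, ct_gapdh_control):
    """Return the fold change of the target transcript (treated vs. control), normalized to GAPDH."""
    d_ct_treated = ct_target_treated - ct_gapdh_treated   # normalize the treated sample
    d_ct_control = ct_target_control - ct_gapdh_control   # normalize the control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example with illustrative Ct values only (not measured data).
print(relative_expression(24.1, 18.0, 27.3, 18.2))  # ~8-fold up-regulation
```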
Cytokine TNF-α measurement
The concentration of TNF-α in the supernatants of PBMCs was measured using commercially available porcine ELISA kits (R&D Systems, Inc., Minneapolis, MN) according to the manufacturer's instructions.
High-throughput sequencing RNA quality was examined by gel electrophoresis and only paired RNA samples of high quality were used for lncRNA-seq. LncRNA-seq libraries were prepared according to the manufacturer's instructions and then sequenced on an Illumina HiSeq 3000 at Shanghai Genergy Co. Ltd (Shanghai, China). The original reads were harvested from the Illumina HiSeq sequencer. 3' adaptor trimming and low-quality read removal were performed with the Trim Galore software (http://www.bioinformatics.babraham.ac.uk/projects/trim_galore/), after which the resulting clean reads were used for lncRNA analysis. Clean reads were aligned to the pig reference genome (Sscrofa11.1) using the STAR software (https://github.com/alexdobin/STAR).
LncRNA and mRNA differential expression analysis
To determine lncRNA and mRNA differential expression between the LPS-stimulated and unstimulated groups, the expression of each transcript was normalized to the total number of reads in the sample as FPKM: FPKM = fragment count / [mapped reads (millions) × exon length (kb)]. The fold change in lncRNA or mRNA reads was presented as a log2 transformation: fold change = log2 (LPS/Control). An adjusted P-value of less than 0.05 was considered significant.
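A minimal Python sketch of this normalization and fold-change transformation, using invented counts; the small pseudocount is an addition of ours to avoid division by zero and is not part of the described pipeline.

```python
import math

def fpkm(fragments, total_mapped_reads, exon_length_bp):
    """FPKM = fragments / (mapped reads in millions * exon length in kb)."""
    return fragments / ((total_mapped_reads / 1e6) * (exon_length_bp / 1e3))

def log2_fold_change(fpkm_lps, fpkm_control, pseudocount=1e-6):
    """Fold change = log2(LPS / Control); the pseudocount is our assumption, not from the paper."""
    return math.log2((fpkm_lps + pseudocount) / (fpkm_control + pseudocount))

# Hypothetical transcript: 1,500 bp exon length, 40 million mapped reads per library.
lps = fpkm(800, 40_000_000, 1500)
ctrl = fpkm(200, 40_000_000, 1500)
print(round(lps, 2), round(ctrl, 2), round(log2_fold_change(lps, ctrl), 2))  # 13.33 3.33 2.0
```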
Functional enrichment analysis
GO enrichment and KEGG analysis of the DE mRNAs was performed with the DAVID 6.8 database (https://david.ncifcrf.gov/). Protein-protein interaction networks among the DE mRNAs were analyzed by Ingenuity Pathway Analysis (IPA).
Target gene prediction and co-expression network construction
Potential target genes whose loci were within a 10-kb window upstream or downstream of a lncRNA were considered cis-regulated genes. To determine the trans-regulated genes of the DE lncRNAs, lncRNA and mRNA co-expression analysis was performed using the Pearson correlation coefficient (PCC) method, requiring a Pearson correlation coefficient ≥ 0.9. The genes common to the potential targets of the DE lncRNAs and the DE mRNAs were identified using Venn analysis.
The network of co-expressed coding and noncoding genes was constructed together with their biological functions to recognize novel and significant lncRNAs. Correlations between lncRNAs and their corresponding mRNAs were calculated with Pearson's correlation, and pairs with |correlation| ≥ 0.9 were used to draw the co-expression network in Cytoscape v3.7.1.
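To make the two target-calling rules concrete, the sketch below flags a cis target when a gene lies within the 10-kb window around a lncRNA and builds co-expression edges from Pearson correlations with |r| ≥ 0.9, as described above. The coordinates and expression vectors are invented, and the resulting edge list is simply the kind of input that could be loaded into Cytoscape.

```python
import numpy as np

def is_cis_target(lnc_start, lnc_end, gene_start, gene_end, window=10_000):
    """Cis rule: gene locus within a 10-kb window up- or downstream of the lncRNA (same chromosome assumed)."""
    return gene_start <= lnc_end + window and gene_end >= lnc_start - window

def coexpression_edges(lnc_expr, mrna_expr, threshold=0.9):
    """Return (lncRNA, mRNA, r) pairs whose expression profiles have |Pearson r| >= threshold."""
    edges = []
    for lnc, x in lnc_expr.items():
        for gene, y in mrna_expr.items():
            r = np.corrcoef(x, y)[0, 1]
            if abs(r) >= threshold:
                edges.append((lnc, gene, round(float(r), 3)))
    return edges

# Hypothetical FPKM profiles across six libraries (3 LPS-treated, 3 control).
lnc_expr = {"lncRNA_A": [5, 6, 7, 1, 1, 2]}
mrna_expr = {"IL6": [50, 62, 70, 9, 11, 14], "GAPDH": [30, 29, 31, 30, 28, 32]}
print(is_cis_target(100_000, 101_500, 95_000, 96_000))  # True: within 10 kb
print(coexpression_edges(lnc_expr, mrna_expr))           # only the IL6 edge passes the 0.9 cut-off
```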
Statistical analyses
The data are shown as means ± S.D. Differences were tested using ANOVA and Student's paired t-test. The level of significance was set at P < 0.05 for all data analyses.
Results
The expression of TNF-α, IL-1β and IL-6 in LPS-induced PBMCs In order to identify principal LPS-responsive lncRNAs, PBMCs isolated from the whole blood of three healthy pigs were stimulated with LPS for 4, 8, 12 and 24 h to induce an inflammatory response. The mRNA of TNF-α increased significantly in porcine PBMCs within 12 h and peaked at 4 h after LPS stimulation (Fig. 1a). IL-6 and IL-1β were significantly increased at each time point and peaked at 8 h and 4 h, respectively (Fig. 1b-c). We also applied ELISA to determine the TNF-α protein level in the cell culture supernatant. The concentration of TNF-α was obviously increased by LPS stimulation compared with the control at each time point (Fig. 1d), and it was undetectable in controls at 12 and 24 h. These results suggested that acute inflammation was induced by LPS stimulation in PBMCs. The RNA samples isolated from PBMCs treated for 8 hours were then used for further high-throughput sequencing.
Basic characteristics of lncRNAs in the PBMCs
The basic characteristics of all DE lncRNAs and DE mRNAs in PBMCs, which are widely distributed on all chromosomes except the Y chromosome, are shown in the Circos plot (Fig. 3A). Next, we classified the PBMC lncRNAs into five categories according to the genomic loci of their neighbouring genes (Fig. 3B). Although 10% of lncRNAs were not successfully categorized, the well-annotated lncRNAs were classified into the following categories: intergenic (22%), antisense (9%), intronic (57%), bidirectional (2%) and sense (0%).
Function analysis of DE mRNAs
The functional enrichment analysis of the 636 significantly up-regulated genes was performed. Our GO analysis included three parts: biological process (BP), cellular component (CC) and molecular function (MF). The top 15 GO enrichments for biological process for the up-regulated mRNAs are illustrated in Fig. 4a, including inflammatory response, immune response, positive regulation of inflammatory response and chemokine-mediated signaling pathway. The significantly enriched GO terms for cellular component and molecular function were identified, such as extracellular space, external side of plasma membrane, integral component of plasma membrane, cytoplasm, I-κB/NF-κB complex, cytokine activity, chemokine activity and growth factor activity.
The 446 significantly down-regulated genes were also selected for functional enrichment analysis. Twenty-four GO terms, including protein localization to plasma membrane, regulation of G-protein coupled receptor protein signaling pathway, cell surface, cell-cell junction, and kinase activity, are shown in Fig. 4b.
Meanwhile, the KEGG results from the significantly up- and down-regulated genes indicated that the top 30 significantly enriched signaling pathways included cytokine-cytokine receptor interaction, TNF signaling pathway, NF-κB signaling pathway, Jak-STAT signaling pathway, chemokine signaling pathway and Toll-like receptor signaling pathway (Fig. 4c).
In addition, the interaction network between proteins was produced using IPA. The inflammatory immune network is shown in Fig. 4d. Seven candidate genes (NTRK1, S100A8, S100A9, TNIP1, TNFAIP3, TAX1BP1 and NOD2) were screened out as hub genes.
Function analysis of DE lncRNAs and the lncRNA-mRNA co-expression network A total of 1053 potential target genes of DE lncRNAs were selected for functional enrichment analysis. Our results showed that the top 20 GO enrichments for biological process focused on immune response, inflammatory response, neutrophil chemotaxis, positive regulation of inflammatory response, chemokine-mediated signaling pathway, positive regulation of ERK1 and ERK2 cascade, lymphocyte chemotaxis, positive regulation of NF-κB import into nucleus, positive regulation of IL-6 production, necroptotic signaling pathway, monocyte chemotaxis, cell chemotaxis, positive regulation of NF-kappaB transcription factor activity, negative regulation of IL-10 production and lipopolysaccharide-mediated signaling pathway, etc. (Fig. 5a). Meanwhile, the KEGG results showed that these target genes of DE lncRNAs were mainly involved in immune response (Fig. 5b).
To identify the key lncRNAs related to the regulation of inflammatory response and immune response, 54 DE mRNAs associated with these two biological processes and 31 DE lncRNAs targeting them were chosen to build the mRNA-lncRNA co-expression network. The co-expression network comprised 1241 connections, and each lncRNA might correlate with multiple mRNAs (Fig. 5c and Additional file 8). More importantly, a total of 29 lncRNAs were found to be co-expressed with chemokines (CCL2, CCL3L1, CCL11, CCL17, CCL20, CCL22, CXCL2, CXCL8, CXCL10 and CXCL13) and cytokines (IL-1A, IL-6, IL-7, IL-10, IL-12B, IL-13, IL-18, IL-19, IL-20, IL23A, IL-27), while a total of 30 lncRNAs were shown to be co-expressed with other inflammation-related genes such as Toll-like receptors (TLR2 and TLR3), the Rel/NF-κB family members (REL, RELB and NFKB2), and NFKBIZ, which encodes an NF-κB inhibitor. The expression of three lncRNAs (ENSSSCT00000047491, ENSSSCT00000049770 and ENSSSCT00000051636) was increased more than 2.5-fold in LPS-treated PBMCs at 4 h. Therefore, we further examined the expression changes of these three lncRNAs in various tissues of piglets challenged with LPS at different time points.
Expression of lncRNAs (ENSSSCT00000047491, ENSSSCT00000049770 and ENSSSCT00000051636) in various tissues of piglets challenged with LPS As shown in Fig. 7a, in the liver tissue, lncRNA ENSSSCT00000047491 was dramatically up-regulated at least 4-fold by LPS from 2 h to 24 h, while the up-regulation (> 9-fold) of lncRNA ENSSSCT00000051636 was observed after LPS challenge for 8 hours. LncRNA ENSSSCT00000049770 was dramatically (4- to 12-fold) up-regulated within 8 h but was down-regulated at 24 h.
As shown in Fig. 7b, in the jejunum tissue, lncRNA ENSSSCT00000049770 was increased significantly, by at least 1.9-fold, from 1 h until 24 h after LPS challenge, while the expression of lncRNA ENSSSCT00000051636 was significantly increased by more than 2-fold from 1 h to 4 h. The expression of lncRNA ENSSSCT00000047491 was significantly up-regulated by LPS challenge at 4, 12 and 24 h.
As shown in Fig. 7c, in the spleen tissue, the expression of the three lncRNAs was significantly increased within 8 h and reached a peak at 2 h, but their expression decreased at 24 h.
As shown in Fig. 7d, in the thymus tissue, the expression of the three lncRNAs showed no significant difference.
Tissue expression pattern analysis of lncRNAs ENSSSCT00000047491, ENSSSCT00000049770 and ENSSSCT00000051636 We further examined the differences in the expression levels of the three lncRNAs (ENSSSCT00000049770, ENSSSCT00000051636 and ENSSSCT00000047491) across piglet tissues by qRT-PCR. The results showed that the three lncRNAs were expressed in all ten tissues examined: skeletal muscle, heart, liver, spleen, lung, kidney, jejunum, stomach, brain and thymus. We also found that the expression levels of lncRNAs ENSSSCT00000047491 and ENSSSCT00000051636 were higher in liver and spleen than in other tissues, while the expression level of lncRNA ENSSSCT00000049770 was higher in spleen, lung and jejunum than in other tissues (Fig. 8).
Discussion
Accumulating evidence has indicated that lncRNAs play roles in immune/inflammatory processes [15,16]; in human PBMCs, for example, lncRNA and mRNA expression profiles have been characterized after LPS stimulation [17]. However, to date there has been no systematic attempt to identify LPS-associated lncRNAs in porcine PBMCs.
In this study, we provide the first lncRNA and mRNA transcriptomic landscape of LPS-mediated changes in porcine PBMCs. Using a sequencing approach, we obtained 1074 lncRNAs and 27430 mRNAs in porcine PBMCs. Of the 1074 lncRNAs, we identified 31 up-regulated and 12 down-regulated lncRNAs in LPS-treated PBMCs compared with the control. Of the 27430 mRNAs, 636 were up-regulated and 446 were down-regulated by LPS stimulation. Similarly to the results described by Zhang et al. [17] in human PBMCs, we found that more mRNAs than lncRNAs were dysregulated in response to LPS.
Moreover, more of the dysregulated mRNA and lncRNA transcripts were up-regulated than down-regulated after LPS stimulation in human and porcine PBMCs. In our recent study, we investigated the changes in miRNA expression in porcine PBMCs after in vitro stimulation with LPS and identified only 15 DE miRNAs in response to LPS using small RNA sequencing [12]. Therefore, we suggest that LPS stimulation might have more influence on protein-coding RNA than on non-coding RNA in PBMCs.
It is well recognized that a series of genes is involved in LPS-induced inflammation [18][19][20]. In this study, the functional enrichment results for the DE mRNAs showed that these genes were related to biological processes and pathways including inflammatory response, immune response, cytokine activity, chemokine activity, cytokine-cytokine receptor interaction, TNF signaling pathway, NF-κB signaling pathway, JAK-STAT signaling pathway, NOD-like receptor signaling pathway and TLR signaling pathway, which are closely associated with LPS-induced inflammation [21][22][23][24][25]. In addition, through the IPA network analysis, we identified some genes that might play key roles in LPS-induced inflammation, including NTRK1, S100A8, S100A9, TNFAIP3, TNIP1, TAX1BP1 and NOD2. As a high-affinity receptor for nerve growth factor (NGF), NTRK1 is expressed on various structural and hematopoietic cells including basophils and eosinophils [26,27]. Rochman et al. [28] demonstrated that IL-13 confers epithelial cell responsiveness to NGF by regulating NTRK1 levels through a transcriptional and epigenetic mechanism, and this process likely contributes to allergic inflammation. The calcium-binding proteins S100A8 and S100A9 have been identified as important DAMPs recognized by TLR4 on monocytes, which function as innate amplifiers of infection, autoimmunity, and cancer [29,30]. The TNFAIP3 gene encodes the ubiquitin-modifying enzyme A20, which restricts NF-κB-dependent signaling and prevents inflammation via its deubiquitinase activity [31]. TNIP1 is increasingly being recognized as a key anti-inflammatory protein that negatively regulates TANK-binding kinase 1 (TBK1), receptor-interacting serine/threonine kinase 1 (RIP1 or RIPK1), and interleukin-1 receptor-associated kinase 1 (IRAK1) [32][33][34][35]. TAX1BP1 is a negative regulator of NF-κB activation induced by TNF-α and IL-1β [36]; it inhibits RIP1 and TRAF6 polyubiquitination and recruits A20 to these molecules in order to influence NF-κB activation [37]. NOD2 is a macrophage-specific protein containing two CARD domains; it can directly bind bacterial lipopolysaccharide and subsequently act as an activator of NF-κB via the association of the CARD domains with Rip2/RICK/CARDIAK [38].
LncRNAs can be categorized into five broad subcategories: antisense, sense, intergenic, intronic, and bidirectional [39]. Approximately half of the lncRNAs in porcine PBMCs belonged to the intronic subcategory, which describes lncRNAs that are located within protein-coding genes and could regulate functional gene expression. Based on their mode of action on gene expression, lncRNAs can be classified as either cis- or trans-acting. Cis-acting lncRNAs affect the expression of genes located near their site of transcription on the same chromosome, whereas trans-acting lncRNAs can control gene expression at independent loci on other chromosomes [40]. We therefore predicted the cis and trans potential targets of the DE lncRNAs and compared these with our mRNA sequencing results. The DE lncRNA-associated DE mRNAs were further analyzed for GO category and KEGG pathway annotation to investigate the potential regulatory roles of LPS-mediated DE lncRNAs. Bioinformatics analysis of the DE lncRNA target genes showed that these genes play important roles in immune response, inflammation, positive regulation of the ERK1 and ERK2 cascade, positive regulation of NF-κB import into the nucleus, positive regulation of IL-6 production and immune cell chemotaxis. ERK1 and ERK2 are reported to be required for LPS-induced production of cytokines and chemokines by macrophages [41]. KEGG analysis also indicated that the DE lncRNAs were predominantly associated with the regulation of multiple inflammation-associated genes. In addition, the lncRNA-mRNA co-expression analysis revealed that 29 DE lncRNAs targeted CCL and CXCL chemokines, indicating that these lncRNAs might participate in immune cell chemotaxis. The co-expression network showed that multiple lncRNAs could interact with NFKBIA, which conjugates with NF-κB, resulting in its cytoplasmic sequestration and inhibition of transcriptional activation [42]. Consequently, our results provide new evidence that lncRNAs are involved in LPS-induced inflammation in porcine PBMCs.
Subsequently, 9 DE lncRNAs were selected for validation by qRT-PCR. Of these nine lncRNAs, 8 were validated as significantly up-regulated by LPS stimulation at different time points. The three lncRNAs (ENSSSCT00000047491, ENSSSCT00000049770 and ENSSSCT00000051636) were increased more than 2.5-fold after LPS stimulation in porcine PBMCs. We then further confirmed the expression changes of these three lncRNAs in the liver, spleen, jejunum and thymus of piglets challenged with LPS at different times (0, 1, 2, 4, 8, 12 and 24 h). Previous studies demonstrated that LPS challenge can induce severe inflammation in the piglet model, causing liver injury, intestinal damage and histological changes in the spleen [43][44][45][46][47]. Although the three lncRNAs displayed different tissue expression patterns in the piglets, they all showed abundant expression in liver, spleen, jejunum and thymus. In liver tissue, lncRNA ENSSSCT00000047491 expression increased approximately 4- to 33-fold, lncRNA ENSSSCT00000049770 increased 4- to 12-fold and lncRNA ENSSSCT00000051636 increased 9- to 64-fold in response to LPS challenge. In the spleen and jejunum, LPS challenge induced an increase in the expression of these lncRNAs with a maximum response of a 4-fold increase. However, in the thymus, LPS challenge had no effect on the expression of the three lncRNAs. These results indicated that the three lncRNAs might have different functions in different tissues. We predicted that only the FLYWCH2 gene was a cis-target of lncRNA ENSSSCT00000049770 and was also co-expressed with this lncRNA. However, there is very little specific information on the expression and function of the FLYWCH2 gene in inflammation.
Conclusions
In the current study, we have described the changes in lncRNA expression in porcine PBMCs after LPS stimulation, which provides a novel foundation for improving our understanding of the association between PBMC lncRNA homeostasis and the inflammatory response in pigs. Further investigations are still required to evaluate the biological functions of these identified lncRNAs and signaling pathways with regard to their roles in immunity and disease. Figure legend: Expression pattern of the three lncRNAs (ENSSSCT00000047491, ENSSSCT00000049770 and ENSSSCT00000051636) in various porcine tissues. The data represent the mean ± SD. N=2. | 2020-08-20T10:12:37.419Z | 2020-08-19T00:00:00.000 | {
"year": 2020,
"sha1": "7d462a99fea2aa4aa3325ee0369c5a70f8c5c6eb",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-58889/v1.pdf?c=1631867729000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6060c260a5213c14426770b2992d8f14cbebab41",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
18907430 | pes2o/s2orc | v3-fos-license | Development of Microsatellite Markers in Pungtungia herzi Using Next-Generation Sequencing and Cross-Species Amplification in the Genus Pseudopungtungia
Nuclear microsatellite markers for Pungtungia herzi were developed using a combination of next-generation sequencing and Sanger sequencing. One hundred primer sets in the flanking region of dinucleotide and trinucleotide repeat motifs were designed and tested for efficiency in polymerase chain reaction amplification. Of these primer sets, 16 new markers (16%) were successfully amplified with unambiguous polymorphic alleles in 16 individuals of Pungtungia herzi. Cross-species amplification with these markers was then examined in two related species, Pseudopungtungia nigra and Pseudopungtungia tenuicorpa. Fifteen and 11 primer pairs resulted in successful amplification in Pseudopungtungia nigra and Pseudopungtungia tenuicorpa, respectively, with various polymorphisms, ranging from one allele (monomorphic) to 11 alleles per marker. These results indicated that developing microsatellite markers for cross-amplification from a species that is abundant and phylogenetically close to the species of interest is a good alternative when tissue samples of an endangered species are insufficient to develop microsatellites.
A good understanding of the genetics of endangered species can assist in developing strategies for their conservation [6]; however, to date, no population genetic studies of these species have been performed and no high-resolution molecular markers have been developed for them. Insufficient genomic DNA and the lack of an appropriate number of individuals to screen polymorphic markers hinder further research progress. To overcome this difficulty, we used transferable microsatellite markers from a phylogenetically close species. Recent studies have demonstrated the value of the cross-species microsatellite amplification approach in population studies of species for which microsatellites have not yet been developed [7,8]. The limitations of cross-species microsatellite amplification are reduced levels of diversity in the target species [9], increased frequency of null alleles and homoplasy [10][11][12], and limited applicability to very closely related species belonging to the same genus or to recently separated genera [13,14].
Next-generation sequencing (NGS) methods are revolutionizing molecular ecology by simplifying the development of molecular genetic markers, including microsatellites [21]. In this study, we developed a set of microsatellite markers from Pungtungia herzi using a NGS protocol [22]. We then examined the polymorphic markers from Pungtungia herzi in the two related endemic species, Pseudopungtungia nigra and Pseudopungtungia tenuicorpa.
Results of DNA Sequencing with A Roche 454 GS-FLX Titanium Run
We obtained 243,749 total reads (91.87 Mb) for Pungtungia herzi. The total number of contigs was 9708 and the number of singletons was 146,093. A total of 1981 regions containing perfect dinucleotide/trinucleotide repeats were identified. The most frequent dinucleotide type in the Pungtungia herzi genome was CA repeats (51.66%), followed by AT (25.10%), CT (21.77%), and GC (1.47%). These results are consistent with previous findings that the CA repeat is the most frequent in the genomes of 353 vertebrate species and is generally 2.3-fold more frequent than the second most common microsatellite, AT [23]. Of the trinucleotide repeats, the AAT/ATA/TAA motif was the most frequent (59.87%), and the CGG/GCG/GGC repeat motif was the least common. Among the isolated dinucleotide microsatellites, the highest number of repeat-unit iterations was 27 (54 bp); among the trinucleotide microsatellites, the highest number of repeat-unit iterations was 19 (57 bp).
Microsatellite Marker Development
We chose 100 microsatellites with the largest number of repeat motifs to test amplification efficiency and assess polymorphism. Of these, 16 new markers (16%) produced strongly amplified polymerase chain reaction (PCR) products in all specimens with unambiguous alleles in Pungtungia herzi. The nucleotide sequences of the 16 new microsatellite markers were deposited in the GenBank database under accession numbers JQ889798-JX915813 (Table 1).
Among the three populations of Pungtungia herzi, these 16 markers were polymorphic (Table 2). The Gyeongho River population yielded 5-15 alleles, with observed (HO) and expected (HE) heterozygosities ranging from 0.469 to 0.969 and from 0.580 to 0.913, respectively. The Imjin River population produced 3-15 alleles, and the Geum River population yielded 1-10 alleles.
Hardy-Weinberg equilibrium (HWE) was tested in the Gyeongho River population. In this population, PH_CA36, PH_CA38, PH_ATC01 (p < 0.01), and PH_CT03, PH_CA37, PH_ACT01 (p < 0.05) departed from HWE (Table 2). Of the markers, PH_CA38 and PH_ATC01 departed significantly from HWE and showed homozygote excess by analysis with MICRO-CHECKER. In general, the presence of null alleles or large allele dropout can explain observed deviation from HWE [9]. Finally, 10 new markers satisfied all the criteria (i.e., moderate to high polymorphism, no evidence of null alleles) and are recommended for use in future population genetics studies of Pungtungia herzi (Table 3).
Cross-Species Amplification
Cross-species amplification was further examined in Pseudopungtungia nigra and Pseudopungtungia tenuicorpa. Of the 16 new markers, 10 were strongly amplified in both Pseudopungtungia nigra and Pseudopungtungia tenuicorpa, 5 were amplified only in Pseudopungtungia nigra, and 1 marker (PH_ATC01) was amplified only in Pseudopungtungia tenuicorpa. The number of alleles for the 15 markers amplified in Pseudopungtungia nigra ranged from 1 to 11, and the number of alleles for the 11 markers amplified in Pseudopungtungia tenuicorpa ranged from 1 to 6 (Table 3). PH_CA41 was monomorphic in Pseudopungtungia nigra, whereas PH_CA52 and PH_TCA01 were monomorphic in Pseudopungtungia tenuicorpa. Consequently, 14 markers for Pseudopungtungia nigra and 9 markers for Pseudopungtungia tenuicorpa were considered to be good choices for future research in this area. Table notes: the "expected size" includes 18 bp of the M13 tail; Pungtungia herzi data refer to the Gyeongho River population; HWE analysis of the Imjin and Geum River populations was not performed because the sample sizes were too small; the size ranges include 18 bp of the M13 tail. Table 3. Cross-species amplification of 16 microsatellite markers in Pseudopungtungia nigra (n = 6) and Pseudopungtungia tenuicorpa (n = 3).
Roche 454 GS-FLX Titanium Sequencing
Genomic DNA was extracted from muscle tissue using a DNeasy Blood and Tissue kit (QIAGEN, Hilden, Germany) following the manufacturer's protocol and stored at −20 °C. The quality of genomic DNA was checked on 1% agarose gels with a spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). The NGS libraries were generated with aliquots of approximately 10 μg of genomic DNA from a single Pungtungia herzi individual and then sequenced using a 1/4 plate on the Roche 454 GS-FLX titanium platform at the National Instrumentation Center for Environmental Management of Seoul National University.
Microsatellite Marker Development
The sequence reads from the 454 GS-FLX were assembled using Newbler 2.6 (Roche Diagnostics, 454 Life Science, Mannheim, Germany) with a 96% minimum overlap identity. The dinucleotide and trinucleotide repeats of more than four iterations were searched using the perl program "ssr_finder.pl" [22,24]. These repeats were sorted and a pair of primers flanking each repeat was designed using Primer3 [25]. The optimal primer size was set to a range of 18-26 bases and the optimal melting temperature was set to 55-59 °C. The optimal product size was set to 130-270 bp and the remaining parameters were left at the default settings. For each primer pair, a 5'-M13 tail (5'-TGTAAAACGACGGCCAGT-3') was added to the forward primer to allow fluorescent labeling during the amplification reactions [26].
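The sketch below illustrates, in Python rather than Perl, the kind of repeat search and M13 tailing described above: perfect di- and trinucleotide repeats of at least four iterations are located with a regular expression, and the 5'-M13 tail is prepended to a forward primer. The example sequence and primer are invented, and this is only a stand-in for the ssr_finder.pl and Primer3 steps actually used.

```python
import re

M13_TAIL = "TGTAAAACGACGGCCAGT"  # 5'-M13 tail added to forward primers for fluorescent labeling

def find_ssrs(seq, min_iterations=4):
    """Find perfect di-/trinucleotide repeats with at least min_iterations copies of the motif."""
    hits = []
    for unit in (2, 3):
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_iterations - 1))
        for m in pattern.finditer(seq):
            hits.append((m.group(1), m.start(), len(m.group(0)) // unit))
    return hits  # (motif, 0-based start, number of iterations)

def tail_forward_primer(primer):
    """Prepend the 5'-M13 tail to a forward primer sequence."""
    return M13_TAIL + primer

seq = "ATGC" + "CA" * 12 + "GGTTA" + "AAT" * 6 + "CCGT"   # invented test sequence
print(find_ssrs(seq))                                      # [('CA', 4, 12), ('AAT', 33, 6)]
print(tail_forward_primer("ACGTACGTACGTACGTAC"))           # invented primer, for illustration
```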
PCR and Genotyping
The PCR mix contained 10 ng of genomic DNA, 8 pmol each of the reverse primer and fluorescent labeled (6-FAM, NED, PET, and VIC) M13 primer, and 4 pmol of the forward primer, Premix Taq (TaKaRa Ex Taq ® version 2.0; TaKaRa Bio, Otsu, Japan) in a final reaction volume of 20 μL. The PCR amplification protocol was 94 °C for 5 min, followed by 30 cycles of 94 °C for 30 s, 56 °C for 45 s, and 72 °C for 45 s, then 12 cycles of 94 °C for 30 s, 53 °C for 45 s, and 72 °C for 45 s, and a final extension at 72 °C for 10 min. Subsequently, the PCR product was added to Hi-Di™ formamide with 500 LIZ ® size standard (Applied Biosystems, Foster City, CA, USA) and run on an ABI 3730xl automatic sequencer (Applied Biosystems, Foster City, CA, USA). The microsatellite profiles were examined using GeneMarker ver. 1.85 (Softgenetics, State College, PA, USA). To verify the flanking sequences of the microsatellite repeats and to confirm whether target fragments of interest were amplified, PCR was performed with 8 pmol of forward and reverse primers without fluorescent-labeled M13 primer. The amplified products were sequenced using an ABI 3730XL and the targeted sequences were checked for correct amplification.
Statistical Analysis
We estimated the proportion of polymorphic markers and the average number of alleles per marker, and calculated HO and HE using Arlequin ver. 3.11 [27]. Hardy-Weinberg equilibrium (HWE) in the Gyeongho River population of Pungtungia herzi was analyzed, and the program MICRO-CHECKER ver. 2.2.3 [28] was used to check for null alleles and scoring errors due to stuttering or large allele dropout.
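For readers unfamiliar with the reported diversity statistics, the sketch below computes observed heterozygosity and an unbiased expected heterozygosity (Nei's gene diversity) from diploid genotypes at one marker; the genotypes are invented, and Arlequin's estimators may differ in detail.

```python
from collections import Counter

def heterozygosities(genotypes):
    """genotypes: list of (allele1, allele2) tuples for one marker across diploid individuals."""
    n = len(genotypes)                                   # number of individuals
    h_obs = sum(a != b for a, b in genotypes) / n        # observed heterozygosity (HO)
    alleles = Counter(a for g in genotypes for a in g)   # allele counts over 2n gene copies
    total = 2 * n
    sum_p2 = sum((c / total) ** 2 for c in alleles.values())
    h_exp = (total / (total - 1)) * (1 - sum_p2)         # unbiased expected heterozygosity (HE)
    return h_obs, h_exp

# Hypothetical genotypes (allele sizes in bp) for 8 individuals at one microsatellite marker.
geno = [(150, 152), (150, 150), (152, 154), (150, 154),
        (152, 152), (150, 152), (154, 154), (150, 152)]
print(heterozygosities(geno))  # (0.625, 0.7)
```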
Conclusions
We developed 16 new microsatellite markers in Pungtungia herzi. Of these, 15 and 11 markers were applied successfully in the endangered species Pseudopungtungia nigra and Pseudopungtungia tenuicorpa, respectively. This achievement demonstrates that the microsatellite markers developed using the NGS method from a related species can be applied effectively to an endangered species. Developing these microsatellite primers will assist work to determine genetic relationships among the species, as well as improving our understanding of the population genetic structure within each species. | 2018-04-03T05:49:58.263Z | 2013-10-01T00:00:00.000 | {
"year": 2013,
"sha1": "b12b63b7bb96917373291776301001a49549c9f6",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/1422-0067/14/10/19923/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b12b63b7bb96917373291776301001a49549c9f6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
268441235 | pes2o/s2orc | v3-fos-license | Diagnosis of benign and malignant peripheral lung lesions based on a feature model constructed by the random forest algorithm for grayscale and contrast-enhanced ultrasound
Rationale and objectives To construct a predictive model for benign and malignant peripheral pulmonary lesions (PPLs) using a random forest algorithm based on grayscale ultrasound and ultrasound contrast, and to evaluate its diagnostic value. Materials and methods We selected 254 patients with PPLs detected using chest lung computed tomography between October 2021 and July 2023, including 161 malignant and 93 benign lesions. Relevant variables for judging benign and malignant PPLs were screened using logistic regression analysis. A model was constructed using the random forest algorithm, and the test set was verified. Correlations between these relevant variables and the diagnosis of benign and malignant PPLs were evaluated. Results Age, lesion shape, size, angle between the lesion border and chest wall, boundary clarity, edge regularity, air bronchogram, vascular signs, enhancement patterns, enhancement intensity, homogeneity of enhancement, number of non-enhancing regions, non-enhancing region type, arrival time (AT) of the lesion, lesion-lung AT difference, AT difference ratio, and time to peak were the relevant variables for judging benign and malignant PPLs. Consequently, a model and receiver operating characteristic curve were constructed with an AUC of 0.92 and an accuracy of 88.2%. The test set results showed that the model had good predictive ability. The index with the highest correlation for judging benign and malignant PPLs was the AT difference ratio. Other important factors were lesion size, patient age, and lesion morphology. Conclusion The random forest algorithm model constructed based on clinical data and ultrasound imaging features has clinical application value for predicting benign and malignant PPLs.
Introduction
Lung cancer conspicuously dominates as the principal male malignancy in terms of incidence and mortality rates, and its trajectory in female cancer incidence is on an unmistakable upward trend (1). Globally, approximately 2.2 million lung cancer cases emerged in 2020, culminating in roughly 1.8 million fatalities (1, 2). Thus, the early detection of malignant lung tumors and timely clinical intervention can effectively reduce mortality and improve patient prognosis. Low-dose computed tomography (CT) has garnered recognition as a potent tool for early lung cancer detection and screening, presenting a statistically significant reduction in lung cancer mortality of approximately 20% when juxtaposed with conventional chest X-ray examinations (3). Nonetheless, the propensity for false positives in CT-based lung cancer diagnosis remains significant (4), with histopathological examination retaining its gold standard status for the diagnosis of malignant lung lesions.
Considering that ultrasound can provide characteristic insights into lesions near the pleura, and given the emergence and maturity of contrast-enhanced ultrasound (CEUS) technology, researchers have recently gone beyond simple grayscale ultrasound to study in depth the microvascular supply distribution and dynamic perfusion process of peripheral pulmonary lesions (PPLs) to clarify their vascular characteristics (5)(6)(7). Although some studies have used quantitative and qualitative parameters to effectively diagnose benign and malignant PPLs (8)(9)(10), few have comprehensively analyzed and evaluated the weight of multiple features of PPLs on grayscale ultrasound and CEUS.
Random forest is a machine learning algorithm that is an ensemble learning model composed of multiple decision trees. Compared with logistic regression analysis, random forest has significant advantages in dealing with feature correlation, feature selection, and handling nonlinear data (11). Random forest models are more appropriate for datasets with many features.
In this study, the clinical data and features of patients with peripheral lung lesions on grayscale ultrasound and CEUS were prospectively collected. A model was established and validated using the random forest algorithm to evaluate the correlation between the relevant variables and the diagnosis of benign and malignant PPLs, thereby revealing the significant factors that distinguish them.
Patients
This study was approved by the Ethics Committee of the Second Affiliated Hospital of Harbin Medical University. We selected 272 lesions from 272 patients detected by CT between October 2021 and July 2023, all of which could be clearly identified using ultrasound. These patients underwent CEUS- and ultrasound-guided percutaneous biopsies in our department. The exclusion criteria were as follows: 1. patients who could not tolerate or cooperate with surgery; 2. severe cardiovascular diseases, such as myocardial infarction, advanced hypertension, and circulatory failure; 3. coagulation disorders or severe bleeding tendencies; and 4. allergies to contrast agents. Before undergoing CEUS and percutaneous biopsy, each patient or a family member signed an informed consent form. Five patients who did not undergo CEUS or biopsy were excluded. Ten patients were excluded because of poor ultrasound image quality and inability to undergo time-intensity curve (TIC) analysis. Three patients were excluded because of a lack of pathological results. During the study period, we closely followed up patients with negative biopsy results for 3 to 6 months. If the lesion decreased in size or disappeared on subsequent CT scans, it was considered benign; if the lesion remained unchanged or increased in size, we performed another biopsy for confirmation. We obtained the final pathological results of the lesions, with one patient excluded because the pathological result was a borderline hemangiopericytoma-like fibrohistiocytic tumor. In total, 254 patients (254 lesions) were included in this study (Figure 1). The sex, age, and smoking history of the patients were also recorded.
Image acquisition and analysis
Routine ultrasonography and CEUS were performed using the Aixplorer Color Doppler Ultrasonic Diagnostic Device (SuperSonic Imaging, Aix-en-Provence, France) and the Aplio i900 Color Doppler Ultrasonic Diagnostic Device (Canon Medical, Japan). The devices were equipped with XC6-1 and i8CX1 convex probes. Depending on the location of the lesion, an appropriate position was selected for scanning through the intercostal space. Initial grayscale ultrasonography was performed to observe the morphological characteristics of the lesion, such as size (the sum of the maximum diameter parallel to the chest wall and the maximum diameter perpendicular to the chest wall), shape (wedge/circular/fusiform/hemisphere/irregular), angle between the lesion border and the chest wall (classified as obtuse if at least one angle was obtuse), boundary clarity (clear or unclear), edge regularity (regular or irregular), and the presence of an air bronchogram sign. The ultrasound contrast agent (UCA) used was SonoVue (Bracco). After adjusting the optimal section, the mode was switched to contrast enhancement (mechanical index, <0.1). A rapid injection of 2 mL of UCA was administered into the elbow vein, followed by a flush with 5 mL of saline, at which point timing and dynamic image storage began.
The enhancement characteristics of the lesion included vascular signs (morphological characteristics of the earliest enhanced blood vessels in the lesion: tree-like/dot-like/curl-like/mixed), enhancement pattern (the way the UCA enters the lesion: unidirectional radiating/converging/centrifugal/diffuse/mixed), enhancement intensity (hyper-, iso-, or hypo-enhancement, where the degree of enhancement of air-filled lung tissue was defined as hyper-enhancement and that of thoracic wall muscle as hypo-enhancement), homogeneity of enhancement (homogeneous or inhomogeneous), number of non-enhancing regions (none/single/multiple), and non-enhancing region type (the morphological characteristics of the non-enhancing region: small piece/large piece/mesh). The definitions of the qualitative parameters for grayscale ultrasound and ultrasound contrast imaging are shown in Figure 2.
Using the ultrasound contrast quantitative analysis software SonoLiver, the stored contrast-enhanced cine clips were analyzed by placing regions of interest in the earliest enhanced inflated lung tissue, the lesion, and the chest wall, obtaining TIC curves, and finally deriving the arrival time (AT) of the lung tissue (the time from injection for the contrast agent to enter normally inflated lung tissue), the lesion AT (the time for the contrast agent to reach the lesion), the chest wall AT (the time for the contrast agent to reach the chest wall after injection), the lesion-lung AT difference (the AT difference between the lesion and normally inflated lung tissue), the AT difference ratio (the ratio of the AT difference between the lesion and normally inflated lung tissue to the AT difference between the thoracic wall and normally inflated lung tissue), the time to peak (TTP; the time from injection for the contrast agent to reach its maximum enhancement at the lesion site), and the rising time (RT; the difference between the time at which the contrast agent reaches its maximum enhancement at the lesion and the time at which the lesion starts to enhance) (Figures 3, 4).
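To make the quantitative definitions above concrete, the sketch below derives the lesion-lung AT difference, the AT difference ratio, and the rising time from the three arrival times and the time to peak; the numeric values are illustrative and are not patient data.

```python
def tic_parameters(lung_at, lesion_at, chest_wall_at, lesion_ttp):
    """All times are in seconds from contrast injection."""
    at_diff = lesion_at - lung_at                        # lesion-lung AT difference
    at_diff_ratio = at_diff / (chest_wall_at - lung_at)  # lesion delay relative to chest-wall delay
    rising_time = lesion_ttp - lesion_at                 # time from start of lesion enhancement to its peak
    return {"lesion-lung AT difference (s)": at_diff,
            "AT difference ratio": round(at_diff_ratio, 2),
            "rising time (s)": rising_time}

# Illustrative values: lung enhances at 4.0 s, lesion at 7.0 s, chest wall at 9.0 s, lesion peaks at 14.0 s.
print(tic_parameters(4.0, 7.0, 9.0, 14.0))
# The AT difference ratio here is 0.6, above the 0.45 cut-off reported later in this paper,
# which by that single criterion would favour malignancy.
```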
Reference standards
In cases where the pathological results for the lesion were unclear, a young physician with five years of diagnostic experience reached a consensus with two chief physicians, each with over ten years of diagnostic experience, on the placement of the region of interest and the assessment of the imaging characteristics of the lesion.
Ultrasound-guided percutaneous biopsy
Two physicians with ten years of interventional experience performed the lung lesion biopsies. By reviewing the conventional ultrasound and CEUS examinations, the physicians accurately determined the optimal location, area, needle entry point, and path for biopsy. The area was disinfected and sterilized with iodine, followed by local anesthesia with 2% lidocaine at the relevant site. Under ultrasound guidance, the physicians avoided the necrotic areas shown on CEUS, ensuring that the biopsy needle accurately reached the lesion. Multiple passes were made through the lesion, and up to 2-3 tissue samples were collected. All tissue samples were fixed in 4% formalin and sent to the pathology department for analysis.
Statistical analyses
This study utilized SPSS 27.0 and PyCharm software for statistical analysis. Quantitative data were first subjected to the Shapiro-Wilk test for normality: if the data followed a normal distribution, they were presented as mean ± standard deviation and compared with an independent-samples t-test; if the data did not follow a normal distribution, they were presented as the median and interquartile range [M (Q1, Q3)] and compared with the independent-samples Mann-Whitney test. Categorical variables are presented as frequencies or percentages and were compared using the chi-squared test. Statistical significance was set at P < 0.05.
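A minimal SciPy sketch of this testing scheme, with simulated values rather than study data: the Shapiro-Wilk result decides between the t-test and the Mann-Whitney test for a continuous variable, and a chi-squared test is applied to a 2x2 categorical table.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
benign = rng.normal(62, 8, 93)        # simulated ages, benign group (n = 93)
malignant = rng.normal(66, 8, 161)    # simulated ages, malignant group (n = 161)

# Shapiro-Wilk normality check decides which two-sample test to apply.
_, p_norm_b = stats.shapiro(benign)
_, p_norm_m = stats.shapiro(malignant)
if p_norm_b > 0.05 and p_norm_m > 0.05:
    _, p = stats.ttest_ind(benign, malignant)
else:
    _, p = stats.mannwhitneyu(benign, malignant)
print("continuous variable: p =", round(float(p), 4))

# Chi-squared test on a 2x2 contingency table (rows: male/female, columns: malignant/benign);
# the cell counts are a hypothetical split consistent only with the reported group totals.
table = np.array([[90, 58], [71, 35]])
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
print("categorical variable: p =", round(float(p_cat), 4))
```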
Next, the data for the benign and malignant lesions were analyzed using univariate logistic regression, and the ultrasound parameters showing significant differences between benign and malignant lesions were selected as candidate variables.
Finally, the random forest algorithm was used for multivariate analysis of the selected candidate variables.
Patient general information characteristics
Of the 254 participants in our study, 148 were male and 106 were female, with ages ranging from 16 to 89 years. The final diagnostic results are shown in Table 1.
When comparing the sex distribution (P=0.315) and smoking history (P=0.904) between the malignant and benign groups, we found no statistically significant differences. However, the median age in the malignant group was notably higher than in the benign group [66.0 (60.00-74.00) versus 62.00 (53.00-68.50), P < 0.001] (Table 2). Moreover, age as a diagnostic criterion yielded an area under the curve (AUC) of 0.64, pinpointing 62.5 years as the optimal age cutoff. Using this cutoff, the sensitivity was 0.69 and the specificity was 0.54.
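The paper does not state how the optimal cut-offs were chosen; one common approach is to take the ROC threshold that maximizes the Youden index (sensitivity + specificity - 1), sketched below with simulated ages rather than the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
age = np.concatenate([rng.normal(62, 9, 93), rng.normal(67, 9, 161)])  # simulated: benign then malignant
label = np.concatenate([np.zeros(93), np.ones(161)])                   # 1 = malignant

fpr, tpr, thresholds = roc_curve(label, age)
best = np.argmax(tpr - fpr)   # Youden index J = sensitivity + specificity - 1 = tpr - fpr
print("AUC =", round(roc_auc_score(label, age), 2),
      "| cut-off age =", round(float(thresholds[best]), 1),
      "| sensitivity =", round(float(tpr[best]), 2),
      "| specificity =", round(float(1 - fpr[best]), 2))
```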
Benign and malignant group gray-scale ultrasonic features
Univariate logistic regression analysis showed significant differences between the benign and malignant groups in lesion size, shape, angle between the lesion border and chest wall, boundary clarity, edge regularity, and air bronchogram sign (P < 0.05). The median lesion size was 11.04 cm in the malignant group and 8.45 cm in the benign group. The AUC for lesion size was 0.68, with an optimal cut-off value of 10.19 cm, a sensitivity of 0.58, and a specificity of 0.71. Malignant lesions were mostly irregular, circular, or fusiform, whereas benign lesions were mostly wedge-shaped. Benign lesions mostly formed acute angles with the chest wall, whereas malignant lesions formed acute and obtuse angles with similar frequency. The boundaries of malignant lesions were mostly clear, whereas clear and unclear boundaries occurred with similar frequency among benign lesions. The edges of benign lesions were mostly irregular, whereas regular and irregular edges occurred with similar frequency among malignant lesions. Malignant lesions mostly lacked air bronchograms, whereas benign lesions with and without air bronchograms occurred with similar frequency (Table 2).
Benign and malignant group ultrasound contrast-enhanced features
On CEUS, benign lesions mostly showed mottled and arborescent (tree-like) vascular patterns, while malignant lesions mostly showed curled and mottled patterns (P < 0.05). Malignant lesions often showed convergent enhancement, whereas benign lesions often showed unidirectional radiative or convergent enhancement (P < 0.05). Benign lesions usually showed iso-enhancement or hyper-enhancement, whereas malignant lesions usually showed iso-enhancement; however, the proportion of malignant lesions was significantly higher in the hypo-enhancement group (P < 0.05). Malignant lesions often exhibited inhomogeneous enhancement, whereas benign lesions often exhibited homogeneous enhancement. Benign lesions usually had a single or no non-enhancing area, whereas malignant lesions usually had multiple non-enhancing areas (P < 0.05). In addition, malignant lesions typically had a reticular distribution of non-enhancing areas (P < 0.05). Significant differences in the quantitative CEUS parameters, such as lesion AT, lesion-lung AT difference, AT difference ratio, and TTP, were also observed between the benign and malignant groups (P < 0.05) (Table 2).
The AUC of the lesion AT was 0.64, with an optimal cutoff value of 6.05 s, a sensitivity of 0.81, and a specificity of 0.48. The AUC of the lesion-lung AT difference was 0.78, with an optimal cutoff value of 2.45 s, a sensitivity of 0.75, and a specificity of 0.71. The AUC of the AT difference ratio was 0.77, with an optimal cut-off value of 0.45, a sensitivity of 0.75, and a specificity of 0.75. The AUC of TTP was 0.60, with an optimal cutoff value of 12.55 s, a sensitivity of 0.70, and a specificity of 0.53.
Random forest modeling results
The following parameters were set for the random forest model: the random seed was 1000, and a total of 49 models were constructed. The relationship between the error and the number of trees was then plotted, and the error in the test set was minimized with 15 trees (Figure 5). Therefore, this random forest model was selected. The area under the receiver operating characteristic (ROC) curve of this model in the training set was 1 and the prediction consistency rate was 0.985. In the test set, the area under the ROC curve was 0.921 and the accuracy rate was 0.882 (Figure 6). According to the feature importance coefficients, the importance score of each variable was measured, and the most important factor for discriminating between benign and malignant PPLs was the AT difference ratio, followed by size, age, and shape (Figure 7).
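A minimal scikit-learn sketch of this modeling step: fit a random forest, evaluate ROC AUC and accuracy on a held-out test set, and rank features by importance. Only the hyper-parameters (15 trees, a fixed seed) and the feature names echo the paper; the data below are simulated placeholders, not the study dataset, and only 4 of the 17 screened variables are named for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

features = ["AT difference ratio", "size", "age", "shape"]   # subset of the 17 screened variables
rng = np.random.default_rng(1000)
X = rng.normal(size=(254, len(features)))                     # placeholder predictor matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=254) > 0).astype(int)  # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1000, stratify=y)
model = RandomForestClassifier(n_estimators=15, random_state=1000).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("test AUC =", round(roc_auc_score(y_te, proba), 3),
      "| accuracy =", round(accuracy_score(y_te, model.predict(X_te)), 3))
for name, imp in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")   # feature importance ranking
```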
Discussion
In comparison with previous research methods, we collected a relatively large amount of general clinical data and ultrasound manifestations covering 22 features of PPLs, and extracted 17 features with discriminatory significance between benign and malignant PPLs using logistic regression (9). This increases the likelihood of selecting the optimal features. Compared with a conventional multivariate logistic regression model, the advantages of random forests are that they are less susceptible to multicollinearity between features and can reduce the model's dependence on a specific training set. Most importantly, random forests are highly useful when there are many features, helping us to identify the most important ones (11). Based on these features, we can comprehensively judge the nature of a lesion, providing a reference for differential diagnosis. In this study, we constructed a random forest model based on these 17 indicators to predict the nature of PPLs, with an AUC of 0.92 and an accuracy rate of 88.2% in the test set.
Owing to the dual blood supply of the lungs, the contrast agent sequentially passes through the right heart, pulmonary circulation, left heart, and systemic circulation after injection into the elbow vein. Consequently, pulmonary artery enhancement occurs earlier than bronchial artery enhancement, so judging the blood supply source of a lesion from the time at which enhancement begins can differentiate benign PPLs from malignancies (12). The 2017 ultrasound contrast imaging guidelines released by the European Society of Ultrasound in Medicine and Biology suggest that an initial enhancement time of less than 10 s indicates that the lesion is mainly supplied by the pulmonary artery, which is more common in patients with pneumonia, whereas an enhancement time greater than 7.5 s indicates the possible presence of lung cancer (13). Although benign and malignant PPLs can exhibit different initial enhancement times, the two ranges may overlap, so this parameter alone cannot determine the nature of a PPL. In our study, when the lesion AT was greater than 6.05 s, the likelihood of a malignant lesion increased. However, because of differing physiological or pathological conditions among patients (such as examination location, chronic heart failure, hyperthyroidism or hypothyroidism, and chronic lung diseases), inconsistent contrast injection speeds, or even uncoordinated timing by operators, the times at which pulmonary artery and bronchial artery enhancement begin can vary between patients (14). Relying solely on the lesion AT therefore carries risk, which is reflected in its relatively low importance ranking among the random forest variables. We therefore introduced the lesion-lung AT difference: when it was greater than 2.45 s, a malignant lesion could be diagnosed. However, in some patients the lesion-lung AT difference was lower than this critical value while the lesion AT and chest wall AT were very close; such lesions were classified as benign in our study, although their actual blood supply was closer to that of the chest wall. We found that the variable ranked most important in the random forest was the AT difference ratio, which is consistent with the findings of Bi et al. (10, 15). The AT difference ratio was more effective than the lesion-lung AT difference in reducing individual variability; when the AT difference ratio was > 0.45, the likelihood of malignancy increased. In addition, a statistically significant difference in TTP was observed between the benign and malignant groups, whereas no statistically significant difference existed in RT, consistent with the studies by Bi et al. (15) and Li et al. (9). However, TTP may also be affected by the lesion AT. Distinguishing benign from malignant lesions should not rely on quantitative indicators alone; the enhancement patterns of the lesions also need to be analyzed. In benign lesions, the vascular structure is usually tree-like, the perfusion is unidirectional and radiating, and the enhancement is uniform and marked. In malignant lesions, the vascular appearance is mostly dotted and curly, and the enhancement is uneven and low, which is similar to the findings of Li et al. (9) and Shen et al.
(16). In benign lesions, the supplying pulmonary artery usually maintains its normal structure; therefore, on CEUS it presents a tree-like structure radiating from the hilum toward the pleura. Because of the inflammatory reaction, vascular dilation, and accelerated blood flow, local blood flow increases, so the lesion mostly shows high enhancement. Malignant tumor cells destroy the normal structure of the lung tissue, and small bronchial arteries gradually replace the hypoxic pulmonary arteries as the tumor grows. The disordered new vessels produce a large number of anastomotic branches and collaterals and are tortuously and unevenly distributed, so the enhancing vessels in the lesion appear dotted or curled with low enhancement (17). However, some studies have shown that the degree of enhancement of malignant lesions is related to the pathological subtype. According to the findings of Findeisen et al. (18) and Wang et al. (19), adenocarcinoma has higher expression of vascular endothelial growth factor than squamous cell carcinoma, resulting in a higher microvascular density and a higher degree of enhancement in adenocarcinoma. In our study, adenocarcinoma accounted for a higher proportion of tumors in the malignant group, which may explain why malignant lesions showed iso-enhancement rather than low enhancement. In addition, we found that the convergent perfusion pattern was more common in malignant lesions, which is consistent with the research of Bi et al. (15) and Fu et al. (20), whereas a local-to-global perfusion pattern was more common in the study by Shen et al. (16). These perfusion patterns are related to tumor damage to blood vessels and the formation of new vessels. Moreover, the number and morphology of necrotic areas differed significantly between the benign and malignant groups. However, these qualitative imaging indicators were not highly ranked in the random forest model used in the present study.
The size and shape of the lesion ranked second and fourth, respectively, in the variable importance of the random forest. Unlike previous studies, our clinical experience led us to describe the shape in more detail, similar to the approach reported by Li et al. In the present study, wedge-shaped lesions were more common among benign lesions, whereas round, spindle-shaped, and irregular shapes were more common among malignant lesions. Pneumonia can cause inflammatory cell infiltration and tissue necrosis in local tissues, resulting in vascular occlusion and ischemia of the lung tissue in the affected area, thus forming wedge-shaped lesions with blurred boundaries. Malignant cells grow rapidly and spread through blood vessels and bronchi, resulting in an increased longitudinal diameter. During growth, they exert a certain tension on the surrounding tissues, so larger malignant tumors take on a spherical, spindle-shaped, or irregular form with clear lesion boundaries (20, 21). The research by Gould et al. (22) shows that lesion size is a key factor and that the larger the diameter of the lesion, the higher the possibility of malignancy. However, this is inconsistent with the studies of Bai et al. (23) and Bi et al. (10), which may be related to small sample sizes or to the different indicators used to describe lesion size. In this study, we used the sum of the horizontal and vertical diameters to reflect lesion size; considering the size of the lesion in two directions can assess the overall size of the lesion more accurately than describing it by the horizontal, vertical, or maximum diameter alone.
In our study, age ranked third in terms of variable importance in the random forest model, which is in line with the increasing trend in cancer incidence with age. At the same time, smoking is an important risk factor for cancer (1), but no significant correlation was found between smoking and benign and malignant tumors in this study. This may be due to the small sample size or the different proportions of pathological results in the collected cases. According to the study by Loeb et al. (24), smoking is closely related to lung cancer (especially squamous cell carcinoma), but not adenocarcinoma. Importantly, adenocarcinoma accounted for approximately half of the malignant tumors in the present study, which could explain the lack of a significant difference between smoking and benign and malignant tumors.
The limitations of this study are as follows: (1) the number of cases included in this study was small, and the ability of grayscale ultrasound and CEUS to differentiate benign and malignant PPLs still needs further validation with multicenter trials with large sample sizes; (2) the placement of the CEUS region of interest is subjective; (3) this study used ultrasound-guided biopsy, which may lead to sampling errors or insufficiencies in the lesion; and (4) this study did not further analyze the lesion subtypes.
Conclusions
The random forest algorithm model, based on clinical data and ultrasound imaging features, has clinical application value in predicting benign and malignant PPLs. Combining indicators such as the AT difference ratio, size, age, and shape of the lesion can provide a diagnostic basis for differentiating benign and malignant PPLs.
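As a rough sketch of how such a model could be assembled, the following Python/scikit-learn code trains a random forest on the variables highlighted above; the file name, column names, and preprocessing are illustrative assumptions, not the pipeline actually used in this study.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical per-lesion table with the top-ranked predictors and the biopsy label.
df = pd.read_csv("ppl_features.csv")                      # placeholder file name
X = pd.get_dummies(df[["at_difference_ratio", "size", "age", "shape"]])
y = df["malignant"]                                       # 1 = malignant, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# AUC on the held-out set, and impurity-based variable importances for ranking predictors.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
ranking = sorted(zip(X.columns, model.feature_importances_),
                 key=lambda t: t[1], reverse=True)
print(auc, ranking)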
FIGURE 3
FIGURE 3 Ultrasound images of an 81-year-old female patient with pneumonia. (A) A hypoechoic solid lesion with an irregular shape and blurred edges, measuring 5.4 × 1.9 cm, can be seen in the right lung. (B) At 4.1 s, the inflated lung tissue begins to enhance. (C) At 5.0 s, the lesion begins to enhance unidirectionally in a branching radicular pattern. (D) At 12.6 s, the chest wall begins to enhance. (E) The lesion shows uniform isointense enhancement, with no necrosis inside. (F) A 16G needle is used for biopsy of the lesion.
FIGURE 4
FIGURE 4 Ultrasound manifestations of a 64-year-old male patient with squamous cell carcinoma. (A) A low-echo solid mass of 10.1 × 8.3 cm can be seen in the right lung, with a circular shape and clear boundaries. (B) At 2.8 s, the lung tissue begins to enhance with inflation. (C) At 7.6 s, the lesion begins to centrifugally enhance in a patchy manner. (D) At 8.8 s, the chest wall begins to enhance. (E) The lesion shows heterogeneous low enhancement, with multiple necrotic areas visible inside, and the enhanced morphology is sieve-like. (F) A 16G puncture needle is used to puncture and sample the central non-necrotic area of the lesion.
FIGURE 6 ROC
FIGURE 6 ROC curves of the random forest model in the training and testing sets. (A) ROC curves of the random forest model in the training set. (B) ROC curves of the random forest model in the testing set. ROC, receiver operating characteristic.
FIGURE 7
FIGURE 7 Importance score of predictive variables in the random forest model. AT, arrival time; TTP, time to peak.
TABLE 1
Final diagnosis results.
TABLE 2
Univariate regression analysis of each parameter between the benign and malignant groups. | 2024-03-17T17:28:54.713Z | 2024-03-11T00:00:00.000 | {
"year": 2024,
"sha1": "91dd90e7b7ce4c4a4204214ed975f977495f4858",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2024.1352028/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5df48c505918b35d18e31b8f2619242ddf1cd31",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
246507837 | pes2o/s2orc | v3-fos-license | Environmental risk factors of systemic lupus erythematosus: a case–control study
Systemic lupus erythematosus (SLE) is a complicated chronic autoimmune disorder. Several genetic and environmental factors were suggested to be implicated in its pathogenesis. The main objective of this study was to examine how exposure to selected environmental factors was associated with SLE risk to support the development of disease preventive strategies. A case–control study was conducted at the Rheumatology outpatient clinic of Alexandria Main University Hospital, in Alexandria, Egypt. The study sample consisted of 29 female SLE patients, and 27 healthy female controls, who matched the cases on age and parity. Data were collected by a structured interviewing questionnaire. Blood levels of lead, cadmium, and zinc of all participants were assessed by flame atomic absorption spectrometry. The multivariate stepwise logistic regression model revealed that five factors showed significant association with SLE, namely living near agricultural areas, passive smoking, blood lead levels ≥ 0.075 mg/L, and exposure to sunlight (odds ratio (OR) 58.556, 95% confidence interval (CI) 1.897–1807.759, OR 24.116, 95% CI 1.763–329.799, OR 18.981, 95% CI 1.228–293.364, OR 9.549, 95% CI 1.299–70.224, respectively). Whereas walking or doing exercise were significantly protective factors (P = 0.006). The findings of this study add to the evidence that SLE can be environmentally induced. Preventive measures should be taken to address the environmental risk factors of SLE.
Systemic lupus erythematosus (SLE) is a chronic rheumatic autoimmune disorder that could be manifested by many symptoms. It is a multi-system disease that may involve nearly any organ resulting in serious organ complications and even death 1,2 . It predominantly affects women in the child-bearing ages 3 . SLE was suggested to be a condition of multifactorial etiology; including genetics, hormones, and environmental exposures.
It is currently known that different environmental factors could trigger SLE onset and flares in genetically susceptible individuals 4,5 . Owing to the increasing prevalence and overall SLE burden, efforts have been made to recognize these genetic and non-genetic factors 6 .
The role of the environment was more prominent by the fact that SLE concordance among identical monozygotic twins is below 25% 7 . In addition, the contribution of environmental factors to SLE risk has been evaluated to constitute 56% 8 .
Smoking, silica dust, UV radiation, infections, stress, air pollution, pesticides, and heavy metals are the major environmental risk factors having some evidence for association with SLE 9 .
In recent years, heavy metal pollution has become a significant health issue; continuous exposure to low levels of these toxic trace elements may result in bioaccumulation and produce a wide variety of biological effects on human beings 10 .
Lead and cadmium are known to pose serious risks to human health. Toxicity of these agents is evidenced by being identified in the top 10 environmental hazards by the Agency for Toxic Substances and Disease Registry 11 .
Environmental sources of lead include inhalation of airborne dusts containing lead and ingestion through food or water contaminated by lead. Old deteriorating household paints, and lead use in some traditional medicines and cosmetics can also be a source of lead exposure 12 . In addition, active and passive smoking were found to be associated with increased blood lead levels 13 .
Cadmium is present in cigarette smoke, air, food, and water. It can enter human bodies through inhalation, ingestion and dermal contact 14 .
Experimental studies of lead and cadmium exposure in rodent models proposed that metals may play a causative role in SLE 15,16 . There is relatively little data pertaining to lead or cadmium exposure with the risk of SLE in humans; exposure to stained or leaded glass as a hobby was found to be more common among SLE cases than controls 17 .
Trace elements such as zinc play a crucial role in growth and development of all organisms. Zinc is the second most abundant trace metal in the human body after iron, but zinc cannot be stored and has to be taken up daily via food to guarantee sufficient supply. However, it was stated that zinc excess as well as zinc deficiency may result in severe disturbances in immune cell numbers and activities leading to immune dysfunction 18 .
A study by Sahebari et al. 19 found that serum Zn values were lower in SLE patients than healthy age and sex-matched controls. On the other hand, zinc-deficient diets retarded autoantibody production and enhanced survival in mice 20 . In their review, Constantin et al. 21 advised to restrict consumption of some minerals such as zinc and sodium.
As the data about the relation between environmental risk factors and SLE in Egypt are lacking and the incidence is increasing, the present study was proposed to determine the association between some environmental exposures with SLE risk; in order to assess the extent to which SLE is environmentally induced.
Results
Estimation of SLE risk in relation to some socio-demographic characteristics. Analysis of the socio-demographic data revealed that patients and controls were similar in terms of demographic background, with no statistically significant difference between cases and controls except for education level, occupation, and residence near agricultural areas (Tables 1, 2).
Regarding the educational level, there was a statistically significant difference between cases and controls (P < 0.05). The percentage of illiterate cases was (31%) and constituted more than double their percentage in the control group (14.8%). Also, the percentage of cases with higher education was only (3.4%), which is very low compared to controls (14.8%). The findings showed that females with SLE who were in the primary or preparatory educational level were significantly more likely to be at risk of SLE (OR 14.67, 95% CI 1.16-185.23) compared to controls.
With regard to occupation, there was a statistically significant difference between cases and controls (P < 0.05), the majority of cases were housewives (86.2%) compared to (33.3%) in the controls group.
Estimation of SLE risk in relation to some lifestyle factors. Lack of physical activity, exposure to domestic animals, or to sun light showed significant results ( Table 3). As regards walking and physical activity, a high proportion of SLE patients (89.7%) do not like to walk or perform any physical activity compared to (40.7%) in the control group.
Sedentary lifestyle increased SLE risk 12.61-fold (OR 12.61, 95% CI 3.05-52.17). Dealing with domestic animals was another risk factor tested in the current study; the results showed that 58.6% of cases were exposed to animals, compared with 29.6% of controls. This may be because 48.3% of cases live in rural areas compared to 7.4% of controls. There was a statistically significant increase in SLE risk when dealing with farm animals (sheep, chicken) (OR 3.36, 95% CI 1.11-10.19).
Results about UV radiation exposure and SLE risk showed a statistically significant increase of SLE risk (OR 13.714, 95% CI 3.768-49.920).
Estimation of SLE risk in relation to some indoor environmental risk factors.
The risk values for the association between some indoor environmental exposures and SLE were not statistically significant, except for passive smoking at home, which was assessed by the number of active smokers residing in the same house: 62.1% of cases were exposed to Environmental Tobacco Smoke (ETS) versus 29.6% of controls, and there was a statistically significant difference between cases and controls with regard to exposure to ETS (OR 3.886, 95% CI 1.273-11.861).
Moreover, SLE risk increased with the increase in the number of cigarettes smoked per day (the risk was 2.96 and 10.36 times for light and heavy passive smoking (OR 2.96, 95% CI 0.9-9.75; OR 10.36, 95% CI 1.1-97.69 respectively). A significant trend for risk was noticed with increased passive smoking (X 2 for trend = 5.87, P = 0.015) concluding that ETS may be an important risk factor for SLE.
Estimation of SLE risk in relation to a family history of any auto-immune disease. The study
found a statistically significant difference between cases and controls regarding the family history of any auto immune disease as illustrated in (Table 5), 33.3% of controls had surprisingly a family history of an auto immune disease including SLE compared to 3.4% of cases.
Estimation of SLE risk in relation to hormonal factors.
The findings of the present study showed the prevalence of menstrual disorders and hormonal disturbances among SLE patients as demonstrated in Table 5. Early menopause was present in (31%) of SLE patients compared to (0%) of controls, irregular menstrual cycle was prevalent in (40%) of SLE patients compared to (8%) of controls, (37.9%) of cases used hormonal therapy compared to (11.1%) of controls, and (31%) of cases had problems in uterus versus (7.4%) of controls.
There was a statistically significant increase in SLE risk with early menopause, menstrual irregularities, use of hormonal therapy, and presence of problems in the uterus (OR 31.26).
Estimation of SLE risk in relation to blood levels of lead, cadmium and zinc. As shown in Table 6, the range of blood lead (Pb) levels in the cases group was very wide, from a minimum of 0.0237 mg/L to a maximum of 0.6951 mg/L, and higher than the range of values in the controls group. The median concentration of blood lead in the cases group (0.115 ± 0.165) was higher than in the controls group (0.067 ± 0.077), and blood lead levels were significantly higher in the cases group than in the controls group (U = 210, P = 0.003).
After categorization of the levels of blood lead in the sample, the SLE risk associated with blood lead levels ≥ 0.09 mg/L was higher and statistically significant (OR 7.71, 95% CI 1.85-32.21) in comparison with subjects having blood lead levels < 0.05 mg/L. A statistically significant trend was computed (Chi-square trend for odds = 7.77, P value = 0.005) showing that the more the increase in blood lead level, the more the chance of SLE occurrence.
The median levels of blood cadmium (Cd) in cases group (0.059 ± 0.102 mg/L) were significantly higher than in the controls group (0.017 ± 0.042 mg/L) as U = 216, P = 0.004.
As presented in (Table 6), the level of blood cadmium in the sample was categorized into groups, females having blood cadmium levels from 0.03 to less than 0.07 and ≥ 0.07 mg/L blood had 6.68 and 4.45 times more risk to develop SLE in comparison with females having blood cadmium levels < 0.03 mg/L blood and the risks for these upper two categories were statistically significant (95% CI 1.58-28.29; 1.18-16.8 respectively). The observed increased trend was found to be statistically significant (X 2 trend for odds = 4.98, P = 0.026).
Regarding blood zinc levels (Zn), the median concentration of zinc in the controls group (2.275 ± 0.707 mg/L blood) was lower than the median concentration of zinc in the cases group (2.660 ± 0.970 mg/L blood), but the median blood zinc values in the cases and controls groups did not differ significantly (U = 290.5, P > 0.05).
Pearson's correlation coefficient was calculated to demonstrate the relationship between lead and cadmium blood levels with each other and yielded a result of r = 0.548, P = 0.002, portraying a positive, moderate, linear, and significant correlation between lead and cadmium blood levels for the cases group (Fig. 1), whereas a nonsignificant correlation was observed in the controls group (r = 0.123, P = 0.542).
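For illustration, the following sketch (Python with SciPy; the arrays are placeholders rather than the study measurements) shows how the Mann-Whitney U statistic and the Pearson correlation reported in this subsection can be computed.

import numpy as np
from scipy import stats

# Placeholder blood lead and cadmium levels (mg/L); replace with the measured values.
lead_cases = np.array([0.12, 0.09, 0.30, 0.11, 0.07])
lead_controls = np.array([0.06, 0.05, 0.08, 0.07, 0.04])
cadmium_cases = np.array([0.06, 0.04, 0.10, 0.05, 0.03])

# Mann-Whitney U test for non-normally distributed levels between the two groups.
u_stat, p_value = stats.mannwhitneyu(lead_cases, lead_controls, alternative="two-sided")

# Pearson correlation between lead and cadmium within the cases group.
r, p_corr = stats.pearsonr(lead_cases, cadmium_cases)
print(u_stat, p_value, r, p_corr)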
Analysis of risk factors affecting SLE risk by stepwise logistic regression.
A multivariate stepwise logistic regression model was built in order to determine which of these predictors really contribute to predicting SLE risk, and exclude those who do not.
Nagelkerke R-squared suggests that the model explains 78.4% of the variance in the outcome. The accuracy of the model was 87.5%, indicating that the model can correctly classify 87.5% of the cases. The sensitivity and specificity of the model were 86.2% and 88.9%, respectively.
As illustrated in Table 7, the final model revealed that only five factors showed a significant association with SLE. The risk was highest for subjects living near agricultural areas, with an OR of 58.556 (95% CI 1.897-1807.759), followed by subjects exposed to passive smoking, with an OR of 24.116 (95% CI 1.763-329.799); subjects with blood lead levels ≥ 0.075 mg/L, who were 18.981 times (95% CI 1.228-293.364) more likely to have SLE than those with blood lead levels < 0.075 mg/L; and subjects exposed to sunlight, whose SLE risk was increased by 9.549 (95% CI 1.299-70.224). In contrast, walking or doing exercise was significantly protective against SLE (P = 0.006); the negative B coefficient (−5.246) indicates that a decrease in walking and exercise is associated with a greater likelihood of SLE.
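The reported odds ratios and the logistic regression coefficients are linked by OR = exp(B); the short sketch below (Python; it uses only the values stated in Table 7, and the remaining arithmetic is illustrative) makes that relationship explicit.

import math

# Reported odds ratio (Table 7) and the one coefficient stated explicitly in the text.
or_agricultural = 58.556                      # living near agricultural areas
b_agricultural = math.log(or_agricultural)    # back-calculated coefficient, about 4.07

b_walking = -5.246                            # negative coefficient reported for walking/exercise
or_walking = math.exp(b_walking)              # about 0.005, i.e. strongly protective

print(f"B for agricultural residence ~ {b_agricultural:.2f}")
print(f"OR for walking/exercise ~ {or_walking:.4f}")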
Discussion
The current study purpose was to gain insights into the etiology of SLE and some possible risk factors. The statistically significant difference between cases and controls regarding the education level assumes that lower education level-which is an indicator of socioeconomic status-may constitute a risk for SLE. That is similar to other studies that showed the association between lower education level and SLE onset and flare 22,23 .
The distribution of the study sample according to the residence area revealed that 48.3% of cases reported living near agricultural areas compared to 7.4% of controls, with 11.67 times more SLE risk (OR 11.67, 95% CI 2.32-58.6) compared to those not living near these areas. This may be attributed to some environmental exposures such as exposure to sunlight and pesticides used in agriculture, in addition to lower socioeconomic status, lower educational level, and poverty. This finding is in accordance with Pons-Estel et al. (2012) who concluded that rural residence was associated with high levels of disease activity at diagnosis and with renal disease occurrence in a Latin American multi-ethnic cohort (OR 1.65, 95% CI 1.06-2.57; OR 1.77, 95% CI 1.00-3.11) 24 .
Our data showed an increase of SLE risk due to sedentary lifestyle which is in accordance with a cross sectional study that showed that a high proportion of SLE patients were physically inactive with a long daily sedentary time 25 .
The statistically significant increase in SLE risk when dealing with animals or birds was in the same line with a case control study in Southern Sweden that reported a statistically significant difference between cases and controls regarding close contact with sheep 26 . This was not in the same line with the results of a recent study which supported the idea that exposures related to childhood farm residence and livestock farming may decrease susceptibility to developing SLE and that contact with livestock may confer protection against SLE 27 .
Current study findings support the role of sun exposure as a trigger for SLE, adding evidence to experimental and human studies that have shown that it can trigger disease onset and induce disease flares in SLE patients 26,28,29 .
ETS as a risk factor for SLE has not been discussed extensively before, except in a cross-sectional study of Brazilian SLE patients that assessed the association between smoking and SLE and confirmed that never smokers had a 22% relative SLE risk reduction compared to ever smokers (including secondhand smokers) 30 .
In addition, data from a cohort of SLE patients and controls suggested that secondhand smoke during childhood may be a risk factor for SLE (OR 1.81, 95% CI 1.13-2.89) 31 . Whereas, a US prospective cohort study concluded that Early-life exposure to cigarette smoke due to mothers' or fathers' smoking did not increase the risk of adult-onset SLE (RR 0.9, 95% CI 0.6-1.4; RR 1.0, 95% CI 0.8-1.3 respectively) 32 .
Our data revealed that the highest proportion of the study patients (96.6%) had negative family history of SLE. This supports the great contribution of other risk factors rather than the genetic factors in the development of SLE; suggesting the influence of environmental triggers on disease expression 8,17 . This finding was in line with a study that reported that autoimmune diseases in family members have not been associated with SLE 33 . On the other hand, a previous case control study reported the prevalence of auto-immune disease in first degree relatives in cases more than controls (53% vs 39%) and stated that a family history of any auto-immune disease was associated with increased risk of SLE (OR 6.8, 95% CI 1.4-32) 26 . This finding is also in contrast with other previous studies that stated that family history of an auto immune disease in first degree relatives (parents or siblings) was associated with increased SLE risk 34,35 .
The results of the current study regarding SLE risk in relation to hormonal factors were in concordance with a cohort study that revealed that menstrual irregularity was associated with an increased SLE risk 36 . In the same line comes a cross-sectional study of 61 SLE patients, in which 49.2% of the patients had menstrual irregularities, of which 60% had sustained amenorrhoea (premature menopause) compared to the control group (16.7%) 37 . Moreover, a cross-sectional study (N = 87) showed menstrual alterations in 37.9% of SLE patients and amenorrhea in 11.5% of patients, which was higher than in the general population 38 . Additionally, an increased SLE risk was reported in the NHS (Nurses' Health Study) with the use of estrogen replacement therapy 39 . Moreover, a population-based nested case control study found that current use of COCs (Combined Oral Contraceptives) was associated with an increased SLE risk (RR 1.54, 95% CI 1.15-2.07) 40 . Other studies contradicted the findings of the current study and stated that the use of HRT (Hormone Replacement Therapy) in postmenopausal SLE women did not appear to increase the rate of lupus flares and appeared to be well tolerated and safe in postmenopausal SLE patients 41,42 .
Table 7. Results of the stepwise logistic regression analysis for risk factors affecting SLE risk (Rheumatology outpatient clinic, Alexandria University Hospital, Alexandria, 2020). Ref1 not living near agricultural areas, Ref2 not walking or doing exercise, Ref3 no sun exposure, Ref4 no passive smoking, Ref5 blood lead level < 0.075 mg/L. Model: variables initially included: living near agricultural areas, walking and exercise, level of education, early menopause, sun exposure, hormones, passive smoking, and lead, cadmium, and zinc. Pseudo-R² of the model = 78.4%, which means that the model could explain about 78% of the variance in the SLE outcome.
Data from experimental studies suggested that heavy metals may enhance systemic autoimmunity or accelerate disease progression in experimental models of lupus and that co-exposure to certain heavy metals may increase the risk associated with other exposures 5 .
Detailed studies on SLE onset and flares with reference to lead, cadmium, and zinc are scanty. Therefore, the current study was conducted to evaluate the role of lead, cadmium, and zinc in SLE. All blood samples were found to have lead, cadmium, and zinc concentrations.
It was observed that the median blood lead levels in the cases group as well as the controls group in this study were higher than the CDC permissible range of less than 5 µg/dL of lead in children and adults (0.05 mg/L) and this is indicative of the extent of environmental lead pollution 43 . It was also noticed that the median blood cadmium levels for both cases and controls in this study were higher than the WHO permissible range of 0.03-0.12 µg/dL of Cd 44 . This suggests more protection measures to be taken into consideration in order to avoid the toxic effects of lead and cadmium.
It was observed that the median blood zinc levels were not consistent with reference ranges of zinc in blood (70-120 µg/dL) 45 , requiring further consideration of zinc levels to avoid overdose and toxicity.
In contrast to the current study is a recent similar case control study that reported that SLE diagnosis was associated with lower serum Zn (P = 0.003), and Pb (P = 0.020) 46 . Other studies reported similar results 47,48 . However, some studies did not observe a significant difference in serum Zn concentrations between SLE patients and healthy controls which is similar to the finding of the present study 49,50 .
A case control study reported a positive correlation between lead and cadmium blood levels for the exposed group (ρ = 0.39, P = 0.023) which was consistent with the finding of the current study, whereas blood zinc levels correlated negatively with both lead (ρ = − 0.41, P = 0.015) and cadmium blood levels (ρ = − 0.44, P = 0.009). In addition, no correlations between the studied metals were found in the control group and the study suggested that zinc insufficiency is more likely to occur in cases of combined exposure to cadmium and lead, because of competition between similar ions for receptors involved in absorption, transport, storage or function, and this would explain the negative correlation in the controls group between zinc blood levels with either lead and cadmium levels 51 .
Limitations of the study.
• The potential of case control studies for recall bias and misclassification error; as most exposure information were based on self-reported history so some inaccuracies can be expected. The design of the study also limited the ability to ascribe causal relationships to the associations detected and to control for all potential confounders. • Absence of genetic information for participants, so the potential effects of genetic heterogeneity on the association between risk factors and SLE risk could not be determined. • Human population is rarely exposed to a single agent over time, and there may be a significant delay between exposure and the onset of the disease. • The small sample size may make it difficult to determine if a particular outcome is a true finding and in some cases no difference between the study groups is reported.
Conclusion and recommendations
To date, our knowledge about the etiology of SLE is still unclear and limited; genetic and environmental interactions were suggested. From the present case control study, it is concluded that the risk portion attributed to unsafe and unhealthy environment was found to be quite significant, showing how the environment can play an important role in SLE occurrence.
Exposure to potential environmental risk factors specifically heavy metals should not be under-estimated. Hence, there is an urgent need for interventions to reduce environmental risk factors exposure in order to achieve substantial public health gains.
Increasing the awareness of patients about the environmental pollutants and the ways to protect their health is necessary. As well as the awareness of health care providers about environmental risk factors, so they can advise the patients about the ways of reduction of exposure to environmental hazards. Educational awareness programs to patients and their family should be carried out through media and non-governmental organizations (NGOs) to raise their knowledge about environmental hazards and how to minimize the sources of their exposure and their consequent negative impacts.
Additional experimental and epidemiological studies are required to determine the causative role of several environmental exposures, to confirm data from case control studies.
Methods
This case control study was conducted at the Rheumatology outpatient clinic of Alexandria Main University Hospital. The Inclusion criteria were female patients diagnosed with idiopathic SLE according to SLICC criteria 52 . The controls included healthy females who accompanied the SLE patients who came from remote rural areas in their visit to the Rheumatology outpatient clinic. The cases and controls were matched for age and parity.
Exclusion criteria included (1) male SLE patients "they were excluded mainly to avoid gender bias that may affect the results due to the hormonal effect. Besides, the disease affects mainly females"; (2) drug-induced Lupus; Data collection methods and tools. 1. A pre-designed pre-coded structured interviewing questionnaire was used to collect data from all participants (cases and controls). It included personal and socio-demographic data, data about occupation, lifestyle factors, the medical history, the smoking history including exposure to passive smoking, and some possible indoor environmental risk factors aiming to ascertain exposure to some environmental risk factors suspected to affect SLE risk. Regarding the evaluation of poor water quality, the participants were asked about the availability and quality of drinking water, any problems concerning drinking water (clarity, taste, smell), the type of drinking water pipes (with or without lead) by asking them whether they are old or newly installed, and the drinking water source (filtered, bottled, or tap water). Whereas for sun exposure, they were asked about the daily exposure to the sun, duration of exposure, wearing of protective clothes, and use of sunscreen, as well as whether there is an occupational sun exposure. Concerning the evaluation of sedentary lifestyle, they were questioned whether they perform any physical activity and the duration per week, whether they prefer walking or taking the car, and the duration of watching TV.
Questions of the questionnaire were taken from similar previously validated Arabic and English research questionnaires 26,53 . In addition, it was assessed by an expert at the faculty of medicine, Alexandria University. The English version of the questionnaire underwent a forward and back translation by native speakers whom are experts in public health.
Laboratory investigation
A blood sample (5 mL) was drawn from each participant to measure blood levels of lead, cadmium, and zinc. The samples were transferred into heparinized collection tubes. Cadmium, lead, and zinc were extracted from the blood samples using the conventional wet acid digestion method using concentrated nitric acid (HNO 3 ). A blank using deionized water instead of blood was done for each batch of analysis for comparison 44,54 . The digested samples were filtered and were subjected to elemental analysis using flame atomic absorption spectrophotometer (Shimadzu model AA-6650) at the central laboratory of the High Institute of Public Health (HIPH), Alexandria University.
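Concentrations from flame atomic absorption spectrometry are typically read off a linear calibration curve built from standards; the sketch below (Python; the standard concentrations and absorbances are illustrative values, not the laboratory's actual calibration) shows the usual least-squares approach.

import numpy as np

# Hypothetical calibration standards: known concentrations (mg/L) and measured absorbances.
conc_std = np.array([0.0, 0.05, 0.10, 0.20, 0.40])
abs_std = np.array([0.002, 0.031, 0.060, 0.118, 0.239])

# Fit absorbance = slope * concentration + intercept (Beer-Lambert law, linear range).
slope, intercept = np.polyfit(conc_std, abs_std, 1)

# Convert a sample absorbance back to a concentration.
abs_sample = 0.072
conc_sample = (abs_sample - intercept) / slope
print(f"estimated concentration ~ {conc_sample:.3f} mg/L")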
Statistical design. SPSS version 21 was used for data entry and analysis. Qualitative variables were described through numbers and percentages of cases. For quantitative continuous variables, tests of normality were done. Mean and standard deviation (mean ± SD) were calculated if the variable followed a normal distribution, and median and interquartile range (median ± IQR) if it did not. "Pearson's Chi-square test (X 2 )" was used to calculate significant differences between cases and controls for the categorical data. If the assumptions were violated, "Fisher's exact test" (for 2 × 2 tables) or the "Monte Carlo test" (for m × n tables) was used.
Differences between the means of the two groups were examined using the independent t test for continuous, normally distributed variables, and the Mann-Whitney-Wilcoxon non-parametric test (U) for continuous, non-normally distributed variables. The odds ratio (OR) was calculated to measure SLE risk. Increasing trends in SLE risk concerning some risk factors were tested using the "Chi-square for trend". Pearson's correlation coefficient was used to test the association between quantitative variables.
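As a sketch of how the univariate comparisons described above can be reproduced outside SPSS (Python with SciPy; the 2 × 2 counts are placeholders rather than the study's actual tables), the chi-square or Fisher's exact test and the corresponding odds ratio can be obtained as follows.

import numpy as np
from scipy import stats

# Placeholder 2x2 table: rows = exposed / unexposed, columns = cases / controls.
table = np.array([[18, 8],
                  [11, 19]])

# Pearson's chi-square test of independence (expected counts returned for checking assumptions).
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)

# Fisher's exact test is preferred when expected cell counts are small; it also yields the OR.
odds_ratio, p_fisher = stats.fisher_exact(table)

print(chi2, p_chi2)
print(odds_ratio, p_fisher)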
A multivariate stepwise logistic regression analysis was used to test whether there were significant associations between specific exposures and SLE and to adjust for some potential confounders. Nagelkerke R² was calculated to quantify the amount of variation in SLE risk explained by the model.
Ethical approval. Approval of the Ethics Committee of the High Institute of Public Health for conducting the research was obtained. Approval for conducting the study at the Rheumatology Outpatient Clinic of Alexandria Main University Hospital was obtained from the hospital outpatient clinics director after a formal written request. The study was performed in accordance with the ethical standards in the Declaration of Helsinki. Written informed consent was taken from all study participants after explanation of the purpose and benefits of the research. Anonymity and confidentiality were ensured. All methods were performed in accordance with the relevant guidelines and regulations.
Data availability
The datasets used and/or analyzed during the current study are not publicly due to privacy and ethical concerns but are available from the corresponding author on reasonable request. | 2022-02-04T16:05:59.765Z | 2022-02-02T00:00:00.000 | {
"year": 2023,
"sha1": "cb3368bfb254954f8f1ab49fba9918a159d53592",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ac2008823989a2abd8634fe90586748a2947ebc0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255801809 | pes2o/s2orc | v3-fos-license | Emotional Freedom Technique (EFT): Tap to relieve stress and burnout
The effects of the COVID-19 pandemic have exponentially increased stress, anxiety and burnout levels for all healthcare professionals and students. The psychological effects of working with COVID-19 patients and the physical stress of working under distressing conditions exacerbate an already stressed workforce. Working long hours, shift work, short staffing, demanding workloads, dealing with death and dying, conflicts with management, other staff and disciplines, and poor communication between disciplines are among the issues that can lead to burnout, anxiety and depression. Emotional Freedom Technique (EFT) or tapping is a holistic practice that is easy to learn and apply to oneself and produces relief from stress, anxiety and the symptoms of burnout within minutes. There are many websites, videos, and tutorials which can teach and support the healthcare practitioner in the practice of EFT.
The challenges of COVID-19 to health care professionals and students
Nursing, by its very nature, is a high stress work environment. The work is often physically stressful and emotionally draining. Working long hours, shift work, short staffing, demanding workloads, dealing with death and dying, conflicts with management, other staff and disciplines, and poor communication between disciplines are among the issues that can lead to burnout, anxiety and depression. [1][2][3][4][5][6] According to Ref. 7, burnout is a significant health problem for US nurses who leave their positions. Nursing students are not immune from these stressors during their education, particularly during their clinical experiences. Some studies even report that student nurses experience higher levels of stress than students in other health related disciplines. [8][9][10][11] The effects of the COVID-19 pandemic have exponentially increased stress, anxiety and burnout levels for all healthcare professionals and students. The psychological effects of working with COVID-19 patients and the physical stress of working under distressing conditions exacerbate an already stressed workforce. The scoping review by 12 revealed that the 230 healthcare respondents had psychosocial problems ranging from anxiety of various levels to stress disorder, depression and insomnia. Nurses had higher rates of mental health effects than other healthcare providers. All these consequences were related to the risk factors of working long hours, having family issues, putting their health and their families' health in jeopardy, minimal or lack of PPE, and providing frontline care to COVID-19 patients. 12,13 A global study of 44 countries during the pandemic 14 found that among the 10,051 healthcare workers, physicians and "paramedical staff" (nurses) surveyed, the stress levels of the respondents were 25.8% higher than in the general population. This study noted that the increased stress levels among nurses lead to increased burnout and psychological distress, causing nurses to leave nursing. In another study on the impact of the pandemic on 1200 nurses in the US, it was reported that 60% of acute care nurses experienced burnout, and another 75% were "feeling stressed, frustrated, and exhausted". 1 Younger nurses have been especially hard hit by stress, anxiety and burnout. In this same US study, 69% of nurses under age 25 reported having burnout, which is twice the rate among those older than 25 (30%). These younger nurses (46%) also indicated that they had suffered a traumatic or distressing incident related to COVID-19. Of those nurses under age 35, 60% reported feeling anxious and 47% reported feeling depressed. 1 The most significant issues that need to be dealt with are the systemic ones. Nurses feel that their facility/institution does not support them. After three years of the pandemic, nurses are still reporting the same issues: experiencing burnout, PTSD, and mental health challenges due to staffing shortages, working extended hours, being required to work "mandatory overtime," and having unmanageable workloads. 1,3,7 When healthcare workers experience burnout and other mental health challenges, patient safety is put at risk, medical errors increase, hospital acquired infections increase, and the general health of the healthcare provider is jeopardized. The costs of burnout are staggering: for those leaving the field, they amount to $9 billion for nurses and $2.6 to $6.3 billion for physicians. Other essential workers are not even included in these numbers. 13
It is imperative that nurses and students find a quick, easy practice that produces relief from stress, anxiety and the symptoms of burnout. Emotional Freedom Technique (EFT) or tapping is a holistic practice that is easy to learn and apply to oneself and produces relief within minutes.
Emotional Freedom Technique and the evidence
Emotional Freedom Technique (EFT) is an evidenced-based practice which is grounded in the integrative field of energy psychology (EP) and integrates components of cognitive behavioral therapies, exposure therapy and body stimulation of acupressure points on the face and body. [15][16][17][18][19] Because of EFT's acupressure component, it is commonly known as "tapping." EFT begins with a simple statement of the issue of concern, called the "Set-up Statement." The format used is "Even though I have a problem (e.g., stress, anxiety, etc.), I deeply and completely accept myself." The first part of the statement is the exposure piece. The cognitive piece is the self-acceptance part of the statement. As this statement is being said, and repeated, tapping on acupressure points occurs. There are eight acupressure points on the face and upper body that have an effect that specifically affects stress reduction. 15,16,[18][19][20] There have been more than 100 peer-reviewed studies consisting of random controlled trials (RCT), systemic reviews, meta analyses, and outcome reviews, signifying the efficacy of EFT for both psychological and physiological indications. 15,17,18 EFT is easy to learn how to use and apply, non-pharmacological, no cost, safe and effective in reducing the stress, anxiety and symptoms of burnout. 15,17,21 An ever increasing number of studies on the benefits of EFT for stress, anxiety and burnout, as well as for other conditions are being documented. Many of these studies demonstrate that EFT is effective as a selfhelp tool, and for use in healthcare settings by healthcare professionals and students. These studies are evaluated according to the American Psychological Association's Division 12 Task Force on Empirically Validated Treatments. 22 Many studies have been conducted on the psychological effects on EFT. However, there is a dearth of studies on the physiological effects of EFT.
A study conducted by 23 examined the effects of EFT on the brain and pain (and other factors) using resting state functional magnetic resonance imaging (fMRI) pre and post intervention. This study not only reported decreased levels of pain, but also somatic symptoms, depression and anxiety. Also reported in this study is that EFT significantly increased happiness, quality of life and satisfaction with life. These results substantiate the effects of EFT as a valid intervention. 15 studied the effects of EFT on heart rate variability (HRV), heart coherence (HC), blood pressure (BP), and the endocrine system by assessing cortisol and the immune system by evaluating salivary immunoglobulin A (Sig A). These factors were studied in conjunction with psychological symptoms. The outcomes for this study demonstrated substantial decreases in the physiological markers (HRV, HC, BP, and the endocrine and immune systems) as well as the decrease in psychological indicators (anxiety, depression, PTSD, pain, and cravings). There was a positive increase in happiness, as well as in the immune system. The findings of this study support the use of EFT to have positive health and mental wellbeing effects.
Instructions and recommendations for use
Tapping may be done anywhere; however, it is best done in a place that is quiet and removed from busy areas. Tapping is done in a comfortable sitting position. No equipment or resources are needed. EFT can be used individually or in a group setting. A complete session of EFT takes only a few brief minutes (2-3 rounds/cycles), and the relief of stress and anxiety is almost immediate. 24 1. First, identify the issue that is causing difficulty. It can be anything: stress, anxiety, feelings of sadness or even physical complaints. 2. Then, frame the problem into a statement using this template: "Even
though I have this problem [insert what is bothering you], I accept myself and how I feel."
This statement is said while tapping the meaty side part of the hand alongside the pinky finger (Fig. 1). 3. Repeat the statement as each acupoint is tapped on the body. This can be said out loud or it can be thought. This is a way to acknowledge those negative feelings. Tapping occurs on the eight acupressure points, which correspond to the meridians of Chinese medicine. Tapping is thought to influence the flow of body energy (Fig. 2).
Strategies that promote regular practice
For a busy and overworked professional, finding time to do tapping is always difficult. However, EFT is a quick and easy practice to learn and can be done anywhere: in the car before going to or from work, during a meal break or even in the bathroom. The most difficult challenge is to commit to doing the practice. Other barriers to practicing tapping include "forgetting" to use EFT (set up reminders on all calendars) and not having enough time (EFT is a practice that takes only a few minutes). Feeling awkward performing tapping can be another barrier: do it anyway and tap about it! Developing regular use of tapping is the best way to reduce stress, anxiety and symptoms of burnout.
Resources for continued exploration and/or guided practice
EFT is a practice that can be used on oneself to immediately reduces or alleviate stress, anxiety, worries and other inhibiting feelings. There are several resources that can be used readily to begin an EFT practice. Some of these resources are websites, videos, blogs and YouTube. All of these resources are free and may be accessed on all devices.
ANA Healthy Nurse Healthy Nation:
On this webpage of the American Nurses Association is a brief explanation of tapping and how it relieves stress and how nurses can benefit from tapping. It offers an easy to follow 4 step instructional guide to get tapping right away. https://engage.healthynursehealthynation.org/blogs/8/2379?
The Tapping Solution
This website was founded by the Ortner siblings in 200. This webpage offers an extensive collection of information about tapping. The site offers hundreds of tapping meditations, videos, explanations and diagrams/pictures and free You Tube videos. All done in an easy to follow format. https://www.thetappingsolution.com.
The Tapping Solution App
In 2018, the Tapping App was released. The app includes hundreds of Tapping Meditations that can be listened to and downloaded right to your phone. A study done by 25 using a sample of 270,461 users assessed the effects of the Tapping Solution App. The findings indicated that using the Tapping Solution App had similar efficacy as conventional tapping formats for relief of anxiety and stress. https://www.thetappingsolutionapp.com.
EFTUniverse
This website was founded by Dawson Church, one of the pioneers in the use of EFT. The aim of Dr. Church is to provide the benefits of EFT tapping and meditation to a wide audience. The website offers a list of over 100 scientific papers about EFT research studies, as well as seminars, workshops, books, podcasts, free videos, free, live tapping circles, and certification. https://www.eftuniverse.com.
How to do the EFT Tapping Basics -The Basic Recipe
This website was created by Gary Craig, who claims to be the founder of EFT. He provides EFT training videos which record the development of EFT from 1995 through today. The site provides many free tutorials, a free newsletter, a free e-book, practitioner listings and certification.
Key takeaways
• Healthcare providers and students are undergoing exceptional amounts of stress and anxiety with little to no readily available applications of how to achieve relief. • Emotional Freedom Technique (EFT) or Tapping is an effective, efficacious, and evidenced-based practice that can relieve feelings of stress and anxiety quickly and then practiced by the individual on their own. • Stress and anxiety reduction lead to feelings of well-being which support psychological health. • EFT is quick, noninvasive, non-pharmaceutical, no cost, easy to learn and apply, self-practice that produces relief from that stated problem, in any setting. | 2023-01-15T14:04:36.947Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "2e82a11a6a54e4ae84282f62b89e24dd5f386d85",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9840127",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "cce3818b53710d86722a30f7e7cb6cf1d39f4f9c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271465951 | pes2o/s2orc | v3-fos-license | Unveiling Allosteric Regulation and Binding Mechanism of BRD9 through Molecular Dynamics Simulations and Markov Modeling
Bromodomain-containing protein 9 (BRD9) is a key player in chromatin remodeling and gene expression regulation, and it is closely associated with the development of various diseases, including cancers. Recent studies have indicated that inhibition of BRD9 may have potential value in the treatment of certain cancers. Molecular dynamics (MD) simulations, Markov modeling and principal component analysis were performed to investigate the binding mechanisms of allosteric inhibitor POJ and orthosteric inhibitor 82I to BRD9 and its allosteric regulation. Our results indicate that binding of these two types of inhibitors induces significant structural changes in the protein, particularly in the formation and dissolution of α-helical regions. Markov flux analysis reveals notable changes occurring in the α-helicity near the ZA loop during the inhibitor binding process. Calculations of binding free energies reveal that the cooperation of orthosteric and allosteric inhibitors affects binding ability of inhibitors to BRD9 and modifies the active sites of orthosteric and allosteric positions. This research is expected to provide new insights into the inhibitory mechanism of 82I and POJ on BRD9 and offers a theoretical foundation for development of cancer treatment strategies targeting BRD9.
Introduction
The complexity and therapeutic resistance of cancers have long posed significant challenges to medical research and human health. Among the various factors influencing cancer development, epigenetic changes play a crucial role, particularly acetylation modification of histones [1,2]. This modification not only regulates gene transcription activity but is also closely associated with the onset and progression of various cancers. The bromodomain (BRD) family proteins, acting as epigenetic "readers" capable of recognizing acetylated histones [3,4], have emerged as novel targets for cancer therapy. BRD family proteins are evolutionarily conserved epigenetic reader modules that recognize N-acetylated lysine (KAc) residues on histones and other proteins [5]. These histone tail modifications are involved in controlling chromatin accessibility. The motifs recognized by BRD family proteins work in concert with other chromatin factors to regulate gene transcription [6,7].
To date, the BRD family has been classified into eight subfamilies [8][9][10], comprising at least 56 nuclear or cytoplasmic proteins in humans with diverse structures and functions. Furthermore, there is growing evidence suggesting that BRD family members play a role in the pathogenesis of various diseases by modulating the transcription of multiple genes associated with cancer growth and inflammation [11][12][13][14]. Consequently, BRD family members have become potential targets for anti-cancer drug design, with several BRD inhibitors in development and some even in clinical trials for both oncological and non-oncological conditions.
Despite significant sequence variation among the various BRD family members, all BRD proteins share a common topology [15][16][17], including four left-handed antiparallel α-helices (αZ, αA, αB and αC) connected by functional loop regions, the ZA loop and BC loop, which are responsible for substrate specificity (Figure 1A). Binding pockets of BRD family proteins are depicted in Figure 1B. Among the BRD family members, BRD4 is one of the most extensively studied proteins and has been shown to play a crucial role in various cancers [18][19][20]. In recent years, the development of allosteric inhibitors targeting BRD4, such as ZL0590 (also known as POJ), has opened interesting new therapeutic approaches for cancers [21]. ZL0590, as a novel allosteric inhibitor, binds to a new site on BRD4 that is different from the binding pocket of traditional acetylated lysine and possesses a specific binding mode and therapeutic potential. Insights into binding of ZL0590 to BRD proteins are of importance for the design of allosteric inhibitors toward BRD family members.
Figure 1 caption (fragment): In this figure, the boxes illustrate the differences among molecular groups. Compound 9 was taken from the work of Clegg et al. [22].
To reach our aims, orthosteric inhibitor 82I and allosteric inhibitor POJ were selected for this work to study the orthosteric/allosteric effect, and two orthosteric inhibitors P8Z and LIG were used to test the reliability of our results.Molecular dynamics simulations [32][33][34][35] and free energy analyses [36][37][38][39][40][41] have been extensively adopted to decode the regulation mechanism of small molecules on the target activity.Multiple independent molecular dynamics (MIMD) simulations [42,43], followed by Markov modeling and associated analyses [44], were performed to explore the various macro-states transitions of BRD9 induced by the association of inhibitors.Time-lagged independent component analysis (TICA) [45][46][47] and principal component analysis (PCA) [48][49][50][51] were combined to explore changes in conformational space of BRD9 due to inhibitor binding.Flux analysis was carried out to reveal transition probabilities between different macro-states of BRD9 and clarify the corresponding structural information [52][53][54].In addition, molecular mechanicsgeneralized Born surface area (MM-GBSA) and solvated interaction energy (SIE) methods were applied to calculate the binding free energies of 82I, LIG, P8Z and POJ to BRD9 and understand their binding abilities.We also anticipate that this work can contribute useful information to the development of potent anti-cancer drugs for the BRD family.
Structural Stability and Flexibility
To evaluate the effect of orthosteric and allosteric regulation on the structure of BRD9, RMSDs of all six systems were calculated using the coordinates of the Cα atoms relative to the crystal structure. Their time courses and distributions are depicted in the supporting information, with Figure S1A-C showing the analyses of four systems, including BRD9 without inhibitor binding (APO-BRD9), 82I-bound BRD9 (82I-BRD9), POJ-bound BRD9 (POJ-BRD9) and BRD9 bound by both 82I and POJ (ALL-BRD9), and Figure S2A-C showing the 82I-, LIG-, P8Z- and POJ-BRD9 systems. Overall, the systems share similar structural fluctuations (Figures S1 and S2). According to the RMSD distributions (Figure S1B,C), binding of orthosteric and allosteric inhibitors does not produce an obvious influence on the structural stability of BRD9. In contrast, binding of LIG and P8Z decreases the RMSD of BRD9 (Figure S2B), indicating that the presence of these two inhibitors is favorable for the stabilization of the BRD9 structure. Additionally, the time evolution of the secondary structures of BRD9 in the systems was computed to evaluate the inhibitor-induced effect on secondary structures (Figures 2 and S3). Except for interconversion between turns and bends formed by residues 155 to 165, the secondary structures of BRD9 are stable throughout the entire MD simulations, which possibly facilitates the function of BRD9. It is worth noting that binding of only the allosteric inhibitor leads to the evolution of the α-helix into a 3-10 helix after ~950 ns, while in ALL-BRD9 the helix is well maintained, indicating that binding of an allosteric inhibitor can lead to a different effect compared with orthosteric inhibitors.
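A typical Cα-RMSD calculation of this kind can be scripted with the MDAnalysis package, as sketched below; the file names and the use of the crystal structure as the reference are illustrative assumptions rather than the exact setup used here.

import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Trajectory of the simulated system and the crystal structure used as reference (placeholder names).
mobile = mda.Universe("brd9_complex.prmtop", "brd9_complex.nc")
reference = mda.Universe("brd9_crystal.pdb")

# RMSD of Cα atoms relative to the crystal coordinates, after optimal superposition of each frame.
rmsd_calc = rms.RMSD(mobile, reference, select="name CA")
rmsd_calc.run()

# Columns of rmsd_calc.results.rmsd: frame index, time (ps), RMSD (Å).
for frame, time_ps, rmsd in rmsd_calc.results.rmsd[:5]:
    print(frame, time_ps, rmsd)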
The radii of gyration (Rg) for the APO-, 82I-, POJ- and ALL-BRD9 systems were determined to reveal changes in the structural compactness of BRD9 during the simulations, and their time courses and distributions are displayed in Figure S1D-F. Likewise, the Rg values for the 82I-, LIG-, P8Z- and POJ-BRD9 systems are depicted in Figure S2D-F. The Rg of BRD9 shows a similar fluctuation range in all systems (Figures S1D and S2D); moreover, the presence of orthosteric and allosteric inhibitors hardly impacts the structural compactness of BRD9 (Figures S1E,F and S2E,F). At the same time, the molecular surface area (MSA) of BRD9 in the six systems was computed to probe the effect of orthosteric and allosteric regulation on the solvent-accessible extent of BRD9. Figures S1G and S2G show the evolution of the MSA over the simulation time, and Figures S1H,I and S2H,I display the MSA distributions. The MSAs of 82I-BRD9, POJ-BRD9, ALL-BRD9, LIG-BRD9 and P8Z-BRD9 were reduced by 150, 250, 295, 150 and 250 Å² relative to APO-BRD9, respectively, indicating that the binding of orthosteric and allosteric inhibitors decreases the extent of solvent contact with BRD9, in particular for ALL-BRD9. This result implies that the presence of inhibitors at the orthosteric and allosteric positions of BRD9 affects its activity.
To understand the influence of orthosteric and allosteric regulation on the structural flexibility of BRD9, RMSFs were computed using the coordinates of the Cα atoms (Figures 3A and S4A), and the structural regions showing obvious alterations in RMSF are shown in Figures 3B and S4B. As observed in Figure 3A, the ZA-loop of BRD9 shows high flexibility, while the BC-loop has weaker flexibility than the ZA-loop (Figure 3B). Compared with APO-BRD9, the association of the orthosteric and allosteric inhibitors weakens the structural flexibility of the ZA-loop, particularly in the case of ALL-BRD9 (Figure 3A). On the contrary, binding of 82I and POJ slightly strengthens the structural flexibility of the BC-loop relative to APO-BRD9, especially when both inhibitors are bound (ALL-BRD9) (Figure 3A,B). Compared to POJ-BRD9, binding of the three orthosteric inhibitors (82I, LIG and P8Z) leads to a more rigid ZA-loop (Figure S4A). In addition, the B-factors of the Cα atoms were estimated and visualized using the PyMOL program to clarify the inhibitor-mediated changes in the flexibility of BRD9. The results are depicted in Figure 3C-F. The flexibility of the ZA-loop is decreased by inhibitor binding compared to APO-BRD9, which is in good agreement with the RMSF analysis. At the same time, the structural flexibility of the ZA-loop in BRD9 bound by 82I, LIG and P8Z is reduced relative to POJ-BRD9 (Figure S4C-F). In summary, binding of the orthosteric and allosteric inhibitors exerts certain impacts on the solvent-accessible extent and structural flexibility of BRD9. The structural flexibility of the ZA-loop is suppressed by the binding of orthosteric and allosteric inhibitors compared to APO-BRD9, while that of the BC-loop is enhanced by the association of the two types of inhibitors. Moreover, the presence of both inhibitors (ALL-BRD9) produces a more obvious effect on the flexibility of these two loops than the binding of a single inhibitor. These changes imply that orthosteric and allosteric inhibitors can regulate the activity of BRD9, and our current findings agree well with previous work [22,28].
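As a minimal illustration of how such stability and flexibility metrics can be extracted from a trajectory, the hedged Python sketch below uses MDAnalysis as an illustrative alternative to the CPPTRAJ workflow used in this work; the file names and atom selections are placeholders, not the actual inputs of this study.

```python
# Hedged sketch: Cα RMSD, RMSF and radius of gyration from an MD trajectory.
# MDAnalysis is used here for illustration only; the paper's analyses used CPPTRAJ.
# "sys.prmtop" and "traj.nc" are placeholder file names.
import MDAnalysis as mda
from MDAnalysis.analysis import rms, align

u = mda.Universe("sys.prmtop", "traj.nc")
ref = mda.Universe("sys.prmtop", "traj.nc")   # frame 0 serves as the reference

# RMSD of Cα atoms relative to the first frame
rmsd = rms.RMSD(u, ref, select="name CA", ref_frame=0).run()
print("RMSD per frame (Å):", rmsd.results.rmsd[:, 2])

# Align the trajectory on Cα atoms, then compute per-residue RMSF
align.AlignTraj(u, ref, select="name CA", in_memory=True).run()
ca = u.select_atoms("name CA")
rmsf = rms.RMSF(ca).run()
print("RMSF per Cα atom (Å):", rmsf.results.rmsf)

# Radius of gyration per frame (structural compactness)
rg = [u.select_atoms("protein").radius_of_gyration() for ts in u.trajectory]
print("Mean Rg (Å):", sum(rg) / len(rg))
```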
Internal Dynamics of BRD9 Affected by Binding of 82I and POJ
To clarify the inhibitor-mediated impacts on the internal dynamics of BRD9, DCCMs were computed using the coordinates of the Cα atoms with the CPPTRAJ program, and the results are presented in Figure 4, in which cyan and purple indicate strong positively correlated motions and anti-correlated movements, respectively. Binding of the orthosteric and allosteric inhibitors mainly affects the correlated movements of three regions: R1 (part of the ZA-loop: residues 178-193), R2 (part of the BC-loop and the C-terminus of αB: residues 204-221) and R3 (part of the BC-loop and the N-terminus of αC). For APO-BRD9, regions R1 and R2 show anti-correlated motions relative to residues 159-174, while region R3 shows weak anti-correlated movement relative to residues 123-154 (Figure 4A). Compared with APO-BRD9, inhibitor binding enhances the anti-correlated movements of R1 and R2 relative to residues 159-174, particularly upon binding of the allosteric inhibitor POJ (Figure 4B-D). Binding of the orthosteric inhibitor 82I increases the anti-correlated motion of R3 compared to APO-BRD9, while the cooperation of 82I and POJ leads to stronger anti-correlated motion in R3 relative to APO-BRD9 and 82I-BRD9 (Figure 4B,D). By comparison with BRD9 bound by the three orthosteric inhibitors 82I, LIG and P8Z, binding of POJ strengthens the anti-correlated motions of R2 and results in the disappearance of anti-correlated motions in the R3 region (Figure S5). Binding of POJ obviously enhances the anti-correlated motion of the R1 region relative to LIG- and P8Z-BRD9 but hardly changes the correlated movements of this region compared to 82I-BRD9 (Figure S5). According to the structural information, R1 and R2 are involved in the orthosteric regulation of BRD9, while region R3 is involved in the allosteric regulation of BRD9.
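For readers who wish to reproduce this kind of map, the sketch below computes a dynamic cross-correlation matrix directly from Cα coordinates with NumPy; it is a minimal, assumption-laden alternative to the CPPTRAJ matrix analysis used here, and the array shapes and random data are placeholders.

```python
# Hedged sketch: dynamic cross-correlation map (DCCM) from Cα coordinates.
# coords is assumed to be a (n_frames, n_atoms, 3) array from an aligned trajectory.
import numpy as np

def dccm(coords: np.ndarray) -> np.ndarray:
    """C_ij = <dr_i . dr_j> / sqrt(<dr_i^2><dr_j^2>), with values in [-1, 1]."""
    disp = coords - coords.mean(axis=0)                 # fluctuations about the mean structure
    # <dr_i . dr_j>: frame-averaged dot product of displacement vectors
    cov = np.einsum("tix,tjx->ij", disp, disp) / disp.shape[0]
    diag = np.sqrt(np.diag(cov))
    return cov / np.outer(diag, diag)

# Example with random data standing in for real coordinates
rng = np.random.default_rng(0)
fake_coords = rng.normal(size=(1000, 101, 3))
c = dccm(fake_coords)
print(c.shape, round(c[0, 0], 3))                       # (101, 101), diagonal elements equal 1.0
```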
To reveal the influence of orthosteric and allosteric inhibitors on the concerted motions of BRD9, the first eigenvector from the PCA was visualized using the PyMOL program with the initial structure (Figures 5 and S6). The ZA-loop of APO-BRD9 shows more disordered motions, represented by the length of the arrows, and the BC-loop has an outward motion tendency (Figure 5A). Compared to APO-BRD9, binding of the single orthosteric inhibitor 82I not only inhibits the structural fluctuation of the ZA-loop along the first eigenvector but also induces an inward movement tendency of the BC-loop (Figure 5B). Relative to APO-BRD9, binding of the single allosteric inhibitor POJ leads to highly concerted fluctuations along the first eigenvector and mediates an inward motion tendency of the BC-loop (Figure 5C). Compared with APO-BRD9, the cooperation of the orthosteric and allosteric inhibitors not only greatly inhibits the fluctuations of the ZA-loop along the first eigenvector but also induces a downward motion tendency of the BC-loop (Figure 5D). Relative to BRD9 complexed with the three orthosteric inhibitors, binding of POJ in the absence of 82I, LIG and P8Z in the orthosteric pocket not only strengthens the fluctuations of the ZA-loop and BC-loop along the PC1 direction but also changes the fluctuation tendencies of these two loops (Figure S6). By combining the above analyses, the
binding of orthosteric and allosteric inhibitors obviously affects the correlated motions and concerted movements of the ZA-loop and BC-loop. The ZA-loop and BC-loop are involved in the binding pocket at the orthosteric position, while the BC-loop also takes part in the formation of the allosteric binding pocket. Thus, the changes in the correlated motions and concerted movements of these two loops caused by 82I and POJ are likely to modulate the function of BRD9 and its activity. Previous works also indicated that inhibitor binding produces a significant effect on the function of BRD9 [13,22], which supports our current findings.
Analyses of Markov Model
To assess the macrostates and microstates of inhibitor-bound BRD9, a Markov model was employed to analyze the MD trajectories, incorporating TICA, the k-means clustering algorithm, lag time determination and flux analysis using the HTMD package [55]. For the systems under study (APO-, 82I-, POJ- and ALL-BRD9), the three independent molecular trajectories were transformed into two-dimensional arrays with dimensions of 600,000 × 101. The first dimension (600,000) corresponds to the number of conformations from the three independent trajectories, while the second dimension (101) represents the number of residues. TICA was then performed to capture low-frequency motions, and the projected data were clustered with the k-means algorithm, with a k-value of 100 selected through the clustering procedure, as depicted in Figure 6. The analysis revealed that APO-BRD9, 82I-BRD9, POJ-BRD9 and ALL-BRD9 have 4, 5, 6 and 4 macrostates, respectively. One main macrostate of APO-BRD9, two main macrostates of 82I-BRD9, one main macrostate of POJ-BRD9 and one main macrostate of ALL-BRD9 account for 77.4%, 81.4%, 67.6% and 98.1% of the total sampling time, respectively (Figure S7). These results imply that the cooperation of orthosteric and allosteric inhibitors stabilizes the conformations of BRD9.
To construct the Markov models more efficiently, the lag times of the systems were computed from the aforementioned TICA-based clustering results, and the results are displayed in Figure 7. The minimum lag time, the Markov time, is identified when the relaxation timescale reaches convergence. For the APO-BRD9 and POJ-BRD9, the relaxation timescales converge at a lag time of 1 ns (Figure 7A,C), whereas for the 82I-BRD9 and ALL-BRD9 they converge at a lag time of 4 ns (Figure 7B,D). Compared to the APO-BRD9 and POJ-BRD9, the presence of the orthosteric inhibitor 82I results in a longer convergence lag time, indicating that orthosteric inhibitors stabilize the BRD9 conformation better than allosteric inhibitors.
Based on the obtained macrostates, flux analysis was carried out to compute the transition probability matrix of the Markov model, which is visualized in Figure 8 together with the proportion of each path. The same information from the flux analysis is also listed in Tables 1 and S1-S3. For the APO-BRD9, SA → S1 → SB is the main conformational transition path (Figure 8A); this transition accounts for 98.9% of the total flux, while the other transitions account for only 1.1%. Compared to the APO-BRD9, binding of the orthosteric inhibitor 82I induces two primary transition pathways, SA → S2 → SB and SA → S1 → S2 → SB (Figure 8B), accounting for 58.4% and 40.5% of the total flux, respectively (Table S1). This result implies that the presence of an orthosteric inhibitor leads to conformational rearrangement of BRD9. Relative to the APO-BRD9, binding of the allosteric inhibitor POJ mediates four main transition pathways, SA → S2 → S1 → SB, SA → S2 → S4 → S1 → SB, SA → S1 → SB and SA → S4 → SB (Figure 8C), which account for 51.1%, 30.6%, 13.0% and 4.1% of the total flux, respectively (Table S2), showing that binding of the allosteric inhibitor induces more energy states than in the APO-BRD9 and that the activity of BRD9 can be regulated through the allosteric inhibitor. By comparison with the APO-BRD9, three key transition pathways are detected in the ALL-BRD9, consisting of SA → S1 → SB, SA → SB and SA → S1 → S2 → SB (Figure 8D), which separately account for 76.2%, 19.6% and 4.2% of the total flux. This result suggests that the cooperation of orthosteric and allosteric inhibitors can regulate the activity of BRD9 by altering the conformational transition pathways. Based on the current analyses of the Markov model, binding of a single orthosteric inhibitor, a single allosteric inhibitor, or the cooperation of the two types of inhibitors exerts different effects on the distributions of macrostates and microstates as well as on the conformational transition pathways, which regulate the activity of BRD9. The distances between the ZA-loop and BC-loop indicate that 82I, POJ and the cooperation of the two types of inhibitors yield different effects on the conformations of these two loops (Figure S8), essentially supporting the results of our Markov model analysis. In addition, the work on BRD4 by Yang et al. verified that the binding of orthosteric and allosteric inhibitors can tune the activity of BRD4 [13], which partially supports our work.
Binding Ability of Two Types of Inhibitors to BRD9
Binding free energies are regarded as an important indicator for measuring the binding ability of inhibitors to targets. Because experimental information on the binding of POJ in the allosteric site of BRD9 is lacking, we used three different methods, namely molecular docking, MM-GBSA and SIE, to calculate binding free energies so as to evaluate the reliability of our calculations. The MM-GBSA calculations were performed on 300 snapshots selected from the equilibrated parts of the MD trajectory under consideration. To reduce the high computational cost, 100 snapshots from the aforementioned 300 were used to estimate the entropy contributions. The same snapshots used in the MM-GBSA calculations were utilized to perform the SIE calculations.
The inhibitors 82I, LIG and P8Z were docked into the orthosteric pocket of BRD9, while POJ was docked into the allosteric pocket; the binding data are provided in Table S4, with the structures corresponding to the first ten docking scores highlighted in blue (Figure S9). According to Table S4, the binding abilities of the four inhibitors, ranked by the best score, follow the order POJ < 82I < LIG < P8Z. The binding poses of the four inhibitors indicate that 82I, LIG and P8Z with higher scores mostly bind at the orthosteric position, while POJ with higher scores is mainly found in the allosteric pocket of BRD9 (Figure S9). These results confirm the reliability of the docked structures of 82I, LIG, P8Z and POJ used for the MD simulations.
Table S5 lists the results calculated by the MM-GBSA method, and Table S6 provides the data estimated by the SIE method. Firstly, the ranking of the binding free energies of 82I, LIG and P8Z predicted by the two methods is in good agreement with that determined from the experimental values. Secondly, the ranking of the binding abilities of 82I, LIG, P8Z and POJ to BRD9 predicted by the molecular docking is consistent with that calculated by the MM-GBSA and SIE methods. These results further verify that our free energy analyses are reliable and valid.
Binding free energies of 82I and POJ in the three bound states (single 82I, single POJ and the ALL state) are shown in Table 2. The electrostatic interactions (∆E_ele), van der Waals interactions (∆E_vdW) and non-polar solvation free energies (∆G_surf) contribute favorable forces to the association of 82I and POJ with BRD9, while the polar solvation free energies (∆G_gb) and the entropy effect (−T∆S) are unfavorable for the binding of these two inhibitors (Table 2). The binding free energies of 82I to BRD9 in the 82I-BRD9 and ALL-BRD9 states are −4.83 and −1.31 kcal/mol, respectively, and those of POJ to BRD9 in the POJ-BRD9 and ALL-BRD9 states are −2.42 and −3.59 kcal/mol, respectively (Table 2). The binding ability of 82I to BRD9 in ALL-BRD9 is weakened by 3.52 kcal/mol relative to 82I-BRD9, whereas that of POJ to BRD9 in ALL-BRD9 is strengthened by 1.17 kcal/mol compared to POJ-BRD9. This result implies that the cooperation of orthosteric and allosteric inhibitors can affect their binding ability to BRD9 and efficiently regulate the activity of BRD9, which is useful for the design of novel inhibitors targeting BRD9 and even the BRD family. To determine the roles of individual residues in the binding of the two types of inhibitors, inhibitor-residue interactions were calculated using the residue-based free energy decomposition method (Figures 9 and S10). According to Figures 9A and S10A, the active sites of the orthosteric inhibitor 82I mainly involve residues PHE160, VAL165, ALA212, ASN216 and TYR222 in 82I-BRD9, and their interaction energies with 82I are more negative than −0.8 kcal/mol. As shown in Figure S10B,C, the two orthosteric inhibitors LIG and P8Z share the same binding clusters of residues as 82I, implying that the orthosteric inhibitors occupy a similar binding cavity. By comparison with the three orthosteric inhibitors, the allosteric inhibitor POJ loses the interaction with the first residue cluster labeled in green (Figure S10D). Compared to 82I-BRD9, the cooperation of the two types of inhibitors leads to decreases in the interaction energies of PHE160, VAL165, ALA212, ASN216 and TYR222 with 82I in ALL-BRD9 (Figure 9C). As shown in Figures 9B and S10D, the active sites of the allosteric inhibitor POJ primarily include residues LYS206, LYS227, LEU230, HIE231 and PHE234 in POJ-BRD9, and the interaction energies of these five residues with POJ are also more negative than −0.8 kcal/mol. Although the cooperation of the two types of inhibitors weakens the interactions of LYS206, LYS227, LEU230, HIE231 and PHE234 with POJ compared to POJ-BRD9 (Figure 9D), it strengthens the interactions of MET236 and MET237 with POJ. Meanwhile, the cooperation of the two types of inhibitors also induces an additional interaction cluster of residues (the helix αZ) with POJ relative to POJ-BRD9 (Figure 9B,D), which strengthens the binding ability of POJ to BRD9 in ALL-BRD9. Thus, the cooperation of the two types of inhibitors at the orthosteric and allosteric positions (Figure 9E) can efficiently regulate the activity of BRD9. The orthosteric active sites revealed by our current work, including PHE160, PHE163, VAL165, ILE169, ASN216 and TYR222, agree well with the information uncovered by previous works [22], supporting the reliability of our results.
Based on the above analyses, the cooperation of the two types of inhibitors produces a significant effect on the activity of BRD9: (1) the cooperation of 82I and POJ weakens the binding ability of the orthosteric inhibitor 82I to BRD9 and strengthens that of the allosteric inhibitor POJ; (2) the cooperation of 82I and POJ changes the active sites at the orthosteric and allosteric positions; and (3) van der Waals interactions are the main driving forces for the binding of the two types of inhibitors to BRD9. Thus, more attention should be paid to van der Waals interactions in the future design of orthosteric and allosteric inhibitors.
Materials and Methods
In this study, we employed a suite of computational methods to investigate the dynamics and function of the protein. Our methodology started with molecular docking, integrated with three-dimensional structural information obtained from the Protein Data Bank (PDB), to construct a series of distinct system models. Specifically, we built an apo system, orthosteric systems (the 82I, P8Z and LIG systems), an allosteric system (the POJ system) and a dual-inhibitor system (the ALL system). These systems were subsequently subjected to molecular dynamics (MD) simulations to acquire in-depth dynamic structural information.
Based on the trajectories derived from the simulations, we carried out two main analytical tasks in parallel. First, we conducted an in-depth analysis of the synergistic effects in the orthosteric, allosteric and dual-inhibitor systems, employing methods such as Markov modeling, the molecular mechanics-generalized Born surface area (MM-GBSA) method and principal component analysis (PCA). Second, to validate the behavioral differences among the various systems, we performed comparative verification analysis on multiple orthosteric and allosteric systems, primarily utilizing MM-GBSA, the solvated interaction energy (SIE) method and molecular docking.
The flowchart of this study (Figure 10) encapsulates the overall framework of the aforementioned methods, clearly depicting the complete pathway from system construction to simulation and then to analysis. This process reflects the logic of our research design and facilitates the understanding of the computational methods and analytical strategies we employed.
Preparation of Simulation Systems
To investigate the molecular mechanism of orthosteric and allosteric regulation, we designed six distinct systems, including the APO form of BRD9 without inhibitor binding (APO-BRD9), the orthosteric inhibitor 82I/BRD9 complex (82I-BRD9), the allosteric inhibitor POJ/BRD9 complex (POJ-BRD9) and the dual-inhibitor/BRD9 complex (ALL-BRD9). In addition, two orthosteric inhibitors, LIG and P8Z, were included to allow comparison and to check the reliability of our results. The initial atomic coordinates of 82I-BRD9 and P8Z-BRD9 were obtained from the Protein Data Bank (PDB), corresponding to entries 6YQW and 6YQS [22]. Since the structures of the POJ-BRD9 and LIG-BRD9 complexes are unavailable in the PDB, the inhibitors POJ [31] and LIG [22] were docked into the allosteric and orthosteric binding pockets of BRD9, respectively, using the HTMD program [55] to generate the structures of the LIG-BRD9 and POJ-BRD9 complexes. The structure of the 82I/POJ-BRD9 complex (ALL-BRD9) was obtained by removing one BRD9 from the superimposed structures of 82I-BRD9 and POJ-BRD9. The APO-BRD9 system was obtained by removing 82I from 6YQW. The H++ 3.0 program [56] was employed to assess the protonation states of the BRD9 residues and to assign a reasonable protonation state to each residue. Missing hydrogen atoms from the crystal structure were added to the corresponding heavy atoms using the Leap module [57,58] in Amber 22. Parameters for BRD9 were derived from the ff19SB force field [59]. The structures of the four inhibitors (82I, LIG, P8Z and POJ) were optimized at the semi-empirical AM1 level, and subsequently BCC charges were assigned to each atom of the inhibitors using the Antechamber module in Amber [60,61]. The force field parameters for 82I, LIG, P8Z and POJ were obtained from the general Amber force field (GAFF2) [62,63]. Each of the six systems was solvated in an octahedral periodic box of TIP3P water molecules with a 10.0 Å buffer to mimic the solvent environment, with the force field parameters for the water molecules taken from the TIP3P model [64]. An appropriate number of sodium (Na+) and chloride (Cl−) ions were added to the water box at a 0.15 M NaCl concentration to neutralize the simulation systems, with the parameters for the Na+ and Cl− ions taken from the work of Joung et al. [65,66].
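As a rough illustration of this kind of preparation pipeline, the sketch below chains Antechamber, parmchk2 and tleap from Python. All file names, the ligand net charge and the exact command sequence are assumptions for illustration, not the authors' actual scripts.

```python
# Hedged sketch of an Amber-style system preparation pipeline (illustrative only).
# File names ("ligand.mol2", "receptor.pdb", ...) and the net charge are placeholders.
import subprocess

# 1) AM1-BCC charges and GAFF2 atom types for the ligand
subprocess.run(["antechamber", "-i", "ligand.mol2", "-fi", "mol2",
                "-o", "ligand_bcc.mol2", "-fo", "mol2",
                "-c", "bcc", "-nc", "0", "-at", "gaff2"], check=True)
# 2) Fill in any missing GAFF2 parameters
subprocess.run(["parmchk2", "-i", "ligand_bcc.mol2", "-f", "mol2",
                "-o", "ligand.frcmod", "-s", "gaff2"], check=True)

# 3) Build the solvated, neutralized system with tleap (ff19SB + GAFF2 + TIP3P)
tleap_in = """
source leaprc.protein.ff19SB
source leaprc.gaff2
source leaprc.water.tip3p
loadamberparams ligand.frcmod
LIG = loadmol2 ligand_bcc.mol2
rec = loadpdb receptor.pdb
sys = combine { rec LIG }
solvateoct sys TIP3PBOX 10.0
addionsrand sys Na+ 0
addionsrand sys Cl- 0
saveamberparm sys complex.prmtop complex.inpcrd
quit
"""
# (Additional Na+/Cl- pairs would be added to approach ~0.15 M NaCl.)
with open("tleap.in", "w") as fh:
    fh.write(tleap_in)
subprocess.run(["tleap", "-f", "tleap.in"], check=True)
```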
Multiple Independent Molecular Dynamics Simulations
The initial configurations of the six BRD9-related systems may contain high-energy contacts and unfavorable atomic orientations, potentially disrupting the stability of the simulations. To address this issue, all six systems first underwent 5000 cycles of steepest descent minimization followed by 10,000 cycles of conjugate gradient minimization. Secondly, under canonical ensemble (NVT) conditions, the temperature of the six optimized systems was gradually increased from 0 to 300 K over 2 ns, during which all non-hydrogen atoms were restrained with a weak harmonic restraint of 2 kcal·mol−1·Å−2. Thirdly, a 3 ns equilibration was performed at 300 K under the isothermal-isobaric ensemble (NPT) to further relax the systems. A 15 ns NPT simulation was then carried out to maintain the system density at 1.01 g/cm3. Finally, three independent 400 ns MD simulations were conducted for each of the six systems in the NVT ensemble with periodic boundary conditions (PBC) and the particle mesh Ewald (PME) method [67]. In each independent MD simulation, atomic velocities were randomly assigned according to the Maxwell distribution. To facilitate post-processing analysis, the three independent MD trajectories were merged into a single MD trajectory (SMT). Throughout all MD simulations, chemical bonds involving hydrogen atoms were constrained using the SHAKE algorithm [68]. Langevin dynamics [69] with a collision frequency of 2.0 ps−1 was employed to control the temperature of the six systems. The PME method, in conjunction with a 12 Å cutoff, was utilized for calculating the electrostatic interactions, and the same cutoff was applied to the van der Waals interactions. The pmemd.cuda program in Amber 22 was used to run the MD simulations [70,71]. Analyses of the MD trajectories, including root-mean-square deviations (RMSDs), root-mean-square fluctuations (RMSFs), PCA and dynamic cross-correlation maps (DCCMs) [49], were performed with the CPPTRAJ module in Amber [72]. The VMD [73] and PyMOL (www.pymol.org) programs were employed to display structures and prepare figures.
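For orientation, the sketch below writes a minimal pmemd-style production input reflecting the settings described above (300 K Langevin thermostat, SHAKE, 12 Å cutoff). The time step, output frequencies and restart options are assumptions not stated in the text.

```python
# Hedged sketch: write a minimal Amber production input consistent with the
# protocol above. dt, ntpr/ntwx and restart settings are assumptions.
production_mdin = """Production MD, NVT, 300 K
 &cntrl
   imin   = 0,          ! run dynamics, not minimization
   irest  = 1, ntx = 5, ! restart with velocities from equilibration
   nstlim = 200000000,  ! 400 ns assuming dt = 0.002 ps (assumption)
   dt     = 0.002,
   ntt    = 3, gamma_ln = 2.0, temp0 = 300.0,  ! Langevin thermostat, 2.0 ps^-1
   ntb    = 1,          ! constant volume (NVT) with periodic boundaries
   cut    = 12.0,       ! 12 Angstrom direct-space cutoff, PME for electrostatics
   ntc    = 2, ntf = 2, ! SHAKE on bonds involving hydrogen
   ntpr   = 5000, ntwx = 5000,
 /
"""
with open("prod.in", "w") as fh:
    fh.write(production_mdin)
# A typical invocation would look like (illustrative):
#   pmemd.cuda -O -i prod.in -p complex.prmtop -c equil.rst7 -o prod.out -x prod.nc -r prod.rst7
```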
Markov Models
Markov models facilitate the grouping of highly similar conformations recorded in MD trajectories into different microstates. Each microstate corresponds to a state in the Markov model state space. Transition probabilities are used to characterize the transformations between these states and are represented by a transition probability matrix. The Markov model can then be used to analyze the dynamic relationships between the various microstates by means of the transition probability matrix and flux analysis [44]. Our Markov model workflow mainly involves the following methods: TICA [45-47], the k-means clustering algorithm, lag time determination and flux analysis; the details of these methods are described below.
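To make the central object concrete, the sketch below estimates a row-stochastic transition probability matrix from a discrete microstate trajectory by simple counting at a chosen lag time. It is a bare-bones illustration (no reversibility enforcement), and the trajectory is synthetic.

```python
# Hedged sketch: estimate a transition probability matrix T(tau) from a
# discrete microstate trajectory by counting transitions at lag tau.
import numpy as np

def transition_matrix(dtraj: np.ndarray, n_states: int, lag: int) -> np.ndarray:
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1.0
    counts += 1e-8                                       # avoid division by zero for unvisited states
    return counts / counts.sum(axis=1, keepdims=True)    # row-normalize

# Synthetic discrete trajectory standing in for the k-means state assignments
rng = np.random.default_rng(1)
dtraj = rng.integers(0, 100, size=60000)
T = transition_matrix(dtraj, n_states=100, lag=50)
print(T.shape, np.allclose(T.sum(axis=1), 1.0))          # (100, 100), rows sum to 1
```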
TICA Dimensionality Reduction Method
TICA is a commonly used dimensionality reduction technique in the construction of Markov models. Markov models can efficiently extract information from multiple repeated trajectories supplied as input, but high-dimensional data taken directly from MD trajectories are difficult to use as-is. To improve the efficiency of data processing and analysis, the TICA dimensionality reduction method is adopted to pre-process the data and reveal conformational changes of the target. Although TICA does not extract the directions of largest variance, as PCA does, it detects the coordinates with maximum autocorrelation at a given lag time. TICA can therefore effectively extract slow order parameters from molecular dynamics data, which makes it an excellent choice for processing simulation data before k-means clustering [74]. In the context of Markov models, TICA identifies the eigenfunctions and approximate eigenvalues of the underlying Markov operator from the input data, allowing the TICA transformation to be estimated. These values can then be used to obtain eigenvalues and eigenvectors or to project the input data onto the slowest TICA components.
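The sketch below shows the core linear-algebra step of TICA with NumPy/SciPy only, assuming mean-free features and a symmetrized time-lagged covariance. A production analysis would normally use a dedicated estimator (for example, the one shipped with HTMD), so treat this as a minimal illustration.

```python
# Hedged sketch: a minimal TICA projection.
# X is assumed to be a (n_frames, n_features) array from a single trajectory.
import numpy as np
from scipy.linalg import eigh

def tica(X: np.ndarray, lag: int, n_components: int = 2) -> np.ndarray:
    X = X - X.mean(axis=0)                       # mean-free features
    A, B = X[:-lag], X[lag:]
    C0   = (A.T @ A + B.T @ B) / (2 * len(A))    # instantaneous covariance
    Ctau = (A.T @ B + B.T @ A) / (2 * len(A))    # symmetrized time-lagged covariance
    # Generalized symmetric eigenproblem Ctau v = lambda C0 v; largest lambdas = slowest modes
    evals, evecs = eigh(Ctau, C0 + 1e-10 * np.eye(C0.shape[1]))
    order = np.argsort(evals)[::-1]
    return X @ evecs[:, order[:n_components]]    # projection onto the slowest components

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 101))                 # placeholder feature matrix
Y = tica(X, lag=100)
print(Y.shape)                                   # (5000, 2)
```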
K-Means Clustering Algorithm
The k-means clustering algorithm is a prevalent method in machine learning and falls under the category of unsupervised learning techniques. Its core objective is to partition an unlabeled dataset, without prior knowledge, into k segments by optimizing an evaluation function. As a non-convex optimization problem, k-means is susceptible to local optima. To mitigate this, random initialization is performed multiple times, with the expectation that the best of these solutions approaches the global optimum. In this study, the k-means clustering was repeated ten times for each k-value to evaluate the results. Determining the hyperparameter k is challenging and can be guided by the elbow method; however, the elbow method is not always effective. Therefore, this study follows the rationale of the elbow method while conducting a grid search over reasonable k-values. In Markov models, k-means clustering is primarily used to cluster the conformational states in the simulation trajectories, with each cluster defined as a microstate of the Markov model, resulting in a discrete trajectory of microstates [75,76]. Subsequently, the Markov model calculates the transition probabilities between microstates based on this discrete trajectory, yielding the transition probability matrix.
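A compact way to carry out the repeated clustering and the grid search over k described above is sketched here with scikit-learn. The candidate k values and the inertia-based inspection are assumptions used only for illustration.

```python
# Hedged sketch: k-means over TICA-projected data with repeated initializations
# and a simple grid search over k (elbow-style inspection of the inertia).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
Y = rng.normal(size=(60000, 2))          # placeholder for TICA-projected frames

inertias = {}
for k in (25, 50, 100, 200):             # candidate k values (assumption)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Y)
    inertias[k] = km.inertia_            # within-cluster sum of squares

best_k = 100                             # chosen by inspecting the elbow, as in the text
dtraj = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(Y)
print(sorted(inertias.items()), dtraj[:10])
```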
Determination of Lag Time
Lag time is a critical parameter in Markov models for determining the transition probability matrix from multiple separate trajectories. The physical interpretation of the lag time is the duration of each jump along the discrete trajectories. In practice, after obtaining the discrete microstate trajectories, jumps of one lag time are performed along these trajectories, the jump information is recorded in a count matrix, and the count matrix is then transformed into a transition probability matrix. The Markov time refers to the minimum lag time that ensures Markovian, or memoryless, behavior and is found by evaluating when the implied relaxation timescales lose their dependence on the lag time [52,77]. The lag time significantly impacts Markov models: too short a lag time can lead to substantial differences between eigenvector errors and spectral errors, while an excessively long lag time may introduce significant numerical errors. The Chapman-Kolmogorov test is commonly used to assess the appropriateness of the chosen lag time by comparing the left and right sides of the Chapman-Kolmogorov equation [78]

P(kτ) ≈ [P(τ)]^k,

where P(τ) is the transition probability matrix determined at the lag time τ, and kτ represents a multiple of τ. The lag time τ is amplified several times, and a Markov model is constructed to obtain the corresponding transition probability matrix. If the left and right sides of the equation are equal or very close, the chosen lag time τ allows the Markov model to exhibit memoryless behavior; the choice is typically considered appropriate if the difference between the two sides is less than 5% [79].
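The sketch below illustrates how implied timescales and a crude Chapman-Kolmogorov check can be computed from transition matrices estimated at several lag times; the lag-time grid and the synthetic trajectory are placeholders.

```python
# Hedged sketch: implied timescales t_i(tau) = -tau / ln|lambda_i(tau)| and a crude
# Chapman-Kolmogorov check P(k*tau) vs P(tau)^k from a discrete microstate trajectory.
import numpy as np

def transition_matrix(dtraj, n_states, lag):
    counts = np.full((n_states, n_states), 1e-8)
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def implied_timescales(dtraj, n_states, lags, n_its=3):
    its = {}
    for lag in lags:
        evals = np.sort(np.abs(np.linalg.eigvals(transition_matrix(dtraj, n_states, lag))))[::-1]
        its[lag] = -lag / np.log(evals[1:n_its + 1])   # skip the stationary eigenvalue (~1)
    return its

rng = np.random.default_rng(4)
dtraj = rng.integers(0, 100, size=60000)               # placeholder state assignments
print(implied_timescales(dtraj, 100, lags=[10, 25, 50, 100]))

# Chapman-Kolmogorov check: P(k*tau) should stay close to P(tau)^k for a Markovian lag
tau, k = 50, 4
diff = np.abs(transition_matrix(dtraj, 100, k * tau)
              - np.linalg.matrix_power(transition_matrix(dtraj, 100, tau), k)).max()
print(diff)
```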
Flux Analysis
If the molecular simulation trajectories have been clustered into macroscopic stable states, flux analysis can be performed. In practice, once the transition probability matrix of the Markov model is obtained, it can be used to extract thermodynamic and kinetic information from the trajectories. Because of the high complexity of the full model, a more coarse-grained model may be employed to provide the same quantitative information more concisely, making it more suitable for flux analysis [53]. The objective of flux analysis is to determine the transition pathways within the microstate space constructed by the Markov model, which requires the probability distribution matrix and the calculation of fluxes. The central concept involves the calculation of the forward committor probability q+_i and the backward committor probability q−_i for each microstate at equilibrium. They can be obtained from [54]

q+_i = Σ_j T_ij q+_j for i ∉ A ∪ B, with q+_i = 0 for i ∈ A and q+_i = 1 for i ∈ B, and q−_i = 1 − q+_i,

in which T_ij represents the transition probability between two specified microstates, provided by the transition probability matrix of the Markov model, and A and B are the two endpoint (initial and final) states [52]. After calculating q+_i and q−_i, the effective flux f_ij from microstate i to microstate j can be determined as

f_ij = ρ_i q−_i T_ij q+_j,

where ρ_i is the Boltzmann (stationary) probability of state i. Thus, the net flux from microstate i to microstate j can be estimated from the two effective fluxes f_ij and f_ji:

f+_ij = max(f_ij − f_ji, 0).

Using this method, the net flux between any two microstates can be calculated. For multiple stable states in a molecular dynamics simulation, it is only necessary to identify the initial stable state A and the final stable state B of the trajectory and then determine the intermediate states between the two endpoints based on the transition probability matrix and the fluxes between the stable states. Because of the relationships between the transition probabilities, the intermediate states are not unique and may present multiple scenarios. Each scenario is assigned an occurrence probability based on the flux along its path, calculated as the ratio of the flux carried by that path to the total A → B flux. After the probabilities of the individual paths are obtained, the conformational changes along each path and its occurrence probability can be used for path analysis. If one wishes to integrate the cluster size of each stable state, which is the frequency with which that stable state is observed, with the flux probability, an allocation coefficient can be assigned to each stable state based on the number of times that state appears in a path, and the total probability can be calculated by weighting it according to the path probability. Ultimately, the ordering of the paths by probability can be obtained.
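The sketch below turns the committor and net-flux expressions assumed above into a small, self-contained NumPy routine applied to a toy four-state transition matrix; it follows the standard transition-path-theory formulas and is not the HTMD implementation used in this work.

```python
# Hedged sketch: forward/backward committors and net fluxes from a transition
# matrix T, following the standard transition-path-theory expressions.
import numpy as np

def committor_and_flux(T, A, B):
    n = T.shape[0]
    inter = [i for i in range(n) if i not in A and i not in B]
    # Solve (I - T)_inter,inter q+_inter = sum_{j in B} T_inter,j
    M = np.eye(len(inter)) - T[np.ix_(inter, inter)]
    b = T[np.ix_(inter, list(B))].sum(axis=1)
    qp = np.zeros(n)
    qp[list(B)] = 1.0
    qp[inter] = np.linalg.solve(M, b)
    qm = 1.0 - qp                                    # backward committor (reversibility assumed)
    # Stationary distribution from the leading left eigenvector of T
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = np.abs(pi) / np.abs(pi).sum()
    f = pi[:, None] * qm[:, None] * T * qp[None, :]  # effective flux f_ij
    np.fill_diagonal(f, 0.0)
    return qp, np.maximum(f - f.T, 0.0)              # forward committor, net flux

# Toy 4-state example with A = {0} and B = {3}
T = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.10, 0.80, 0.09, 0.01],
              [0.01, 0.09, 0.80, 0.10],
              [0.00, 0.02, 0.08, 0.90]])
qp, net_flux = committor_and_flux(T, A={0}, B={3})
print(qp, net_flux.round(5))
```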
MM-GBSA Calculations
The MM-PB/GBSA methods [80] are an efficient tool for calculating inhibitor-target binding free energies. A series of studies by Hou's group examined the performance of these two methods [34,81]. Based on their tests, the MM-GBSA method was applied here to calculate the inhibitor-BRD9 binding free energies with Equation (7):

∆G_bind = ∆E_ele + ∆E_vdW + ∆G_gb + ∆G_surf − T∆S (7)

in which ∆E_ele and ∆E_vdW indicate the electrostatic and van der Waals interactions of the inhibitors with BRD9, respectively, while ∆G_gb and ∆G_surf denote the polar and non-polar contributions to the solvation free energy of the inhibitor-BRD9 complexes. ∆E_ele and ∆E_vdW were estimated from the Amber ff19SB force field parameters. The term ∆G_surf was estimated using the empirical equation ∆G_surf = γ × ∆SASA + β, in which ∆SASA denotes the change in solvent-accessible surface area and the parameters γ and β are assigned as 0.0072 kcal·mol−1·Å−2 and 0.0 kcal·mol−1, respectively. ∆G_gb was computed using the generalized Born (GB) model [82]. The last component, −T∆S, represents the contribution of the entropy change to the binding free energy and was estimated with the MMPBSA.py program in Amber 20 [83]. In our work, 300 snapshots were extracted from the equilibrated parts of the MD trajectories to calculate the binding free energies. Since the entropy calculation is computationally demanding, 100 snapshots were selected from those 300 to calculate the entropy contributions to inhibitor-BRD9 binding.
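As a small numerical illustration of Equation (7) and the nonpolar term, the sketch below combines snapshot-averaged components into a binding free energy; the example component values are placeholders, not results from this study.

```python
# Hedged sketch: combine MM-GBSA components according to Equation (7).
# All numerical inputs below are placeholders, not results from this work.
GAMMA = 0.0072   # kcal mol^-1 A^-2, nonpolar surface coefficient
BETA = 0.0       # kcal mol^-1

def nonpolar_solvation(delta_sasa: float) -> float:
    """dG_surf = gamma * dSASA + beta."""
    return GAMMA * delta_sasa + BETA

def mmgbsa_binding(dE_ele, dE_vdw, dG_gb, delta_sasa, minus_TdS):
    """dG_bind = dE_ele + dE_vdW + dG_gb + dG_surf - T*dS."""
    return dE_ele + dE_vdw + dG_gb + nonpolar_solvation(delta_sasa) + minus_TdS

# Placeholder component averages (kcal/mol; dSASA in A^2)
print(mmgbsa_binding(dE_ele=-15.0, dE_vdw=-40.0, dG_gb=30.0,
                     delta_sasa=-600.0, minus_TdS=22.0))
```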
Solvated Interaction Energy Method
SIE is another efficient method for rapidly calculating the binding free energies of inhibitors to targets. In this method, the SIE function [84] used to estimate the inhibitor-BRD9 binding free energies is expressed as Equation (8):

∆G_bind(ρ, D_in, α, γ, C) = α × [E_c(D_in) + ∆G_R + E_vdW + γ × ∆MSA(ρ)] + C (8)

in which E_c and E_vdW correspond to the intermolecular Coulomb and van der Waals interaction energies in the bound state of BRD9, respectively. The component ∆G_R describes the change in the reaction field energy due to the binding of an inhibitor, which is obtained by solving the Poisson equation using the boundary element method (BRI BEM) [85,86] and a solvent probe with a variable radius of 1.4 Å [87]. The term γ·∆MSA represents the change in the molecular surface area upon binding. The four empirical parameters used in this work, ρ, D_in, γ and C, represent the linear scaling coefficient of the Amber van der Waals radii, the solute interior dielectric constant, the molecular surface area coefficient and a constant, respectively. The empirical parameter α is the overall proportionality coefficient associated with the loss of conformational entropy upon binding [88]. The optimized values of these parameters are α = 0.1048, D_in = 2.25, ρ = 1.1, γ = 0.0129 kcal/(mol·Å²) and C = −2.89 kcal·mol−1 [84,89]. The SIE calculations were performed with the Sietraj program [89].
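For illustration, the sketch below evaluates the SIE scoring function with the optimized parameters quoted above. The functional form is the standard SIE expression assumed in Equation (8), and the example component values are placeholders rather than data from this study.

```python
# Hedged sketch: the SIE scoring function with the quoted optimized parameters.
# D_in = 2.25 and rho = 1.1 enter through E_c, dG_R and dMSA themselves.
ALPHA, GAMMA_SIE, C_CONST = 0.1048, 0.0129, -2.89

def sie_binding(e_coulomb, dg_reaction_field, e_vdw, d_msa):
    """dG_bind = alpha * (E_c + dG_R + E_vdW + gamma * dMSA) + C."""
    return ALPHA * (e_coulomb + dg_reaction_field + e_vdw + GAMMA_SIE * d_msa) + C_CONST

# Placeholder component values (kcal/mol; dMSA in A^2)
print(sie_binding(e_coulomb=-20.0, dg_reaction_field=15.0,
                  e_vdw=-45.0, d_msa=-550.0))
```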
Conclusions
BRD9 is thought to be a key player in chromatin remodeling and the regulation of gene expression. Inhibition of BRD9 activity plays an important role in the treatment of certain cancers, making it a potential target for anticancer drugs. Three independent MD simulations, followed by Markov modeling and PCA, were used to probe the molecular mechanism of orthosteric and allosteric regulation of BRD9 activity by the inhibitors 82I and POJ. Our results indicate that binding of the orthosteric and allosteric inhibitors induces significant structural changes in the protein, particularly in the formation and dissolution of α-helical regions, and alters the transition pathways. Markov flux analysis reveals that notable changes occur in the α-helicity near the ZA-loop during the inhibitor binding process. The binding free energies calculated with the MM-GBSA method reveal that the cooperation of the orthosteric and allosteric inhibitors affects the binding ability of the inhibitors to BRD9 and alters the active sites at the orthosteric and allosteric positions. This research is expected to provide new insights into the inhibitory mechanism of 82I and POJ on BRD9 and to offer a theoretical foundation for the development of cancer treatment strategies targeting BRD9.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules29153496/s1. Table S1. The flux analysis data of 82I-BRD9; Table S2. The flux analysis data of POJ-BRD9; Table S3. The flux analysis data of ALL-BRD9; Table S4. Binding ability of inhibitors to BRD9 predicted by molecular docking; Table S5. Binding free energies of inhibitors to BRD9 obtained by the MM-PBSA method; Table S6. Binding free energies of inhibitors to BRD9 obtained by the SIE method.
Figure 1 .
Figure 1. Molecular structures: (A) structural superimposition of the inhibitor-bound BRD9 complex, (B) binding pocket of the orthosteric site and allosteric site, (C) orthosteric inhibitor 82I, (D) allosteric inhibitor POJ, (E) orthosteric inhibitor compound 9 (named as LIG) and (F) orthosteric inhibitor P8Z. In this figure, the boxes illustrate the differences among molecular groups. Compound 9 was taken from the work of Clegg et al. [22].
Figure 3 .
Figure 3. Structural flexibility of the APO-, 82I-, POJ- and ALL-BRD9: (A) the RMSFs of BRD9 calculated using the coordinates of the Cα atoms, (B) structural regions with obvious alterations of RMSFs and (C-F) the structural flexibility of APO-BRD9, 82I-BRD9, POJ-BRD9 and 82I/POJ-BRD9, respectively. The tendency from blue to red indicates the increase in structural flexibility, which is scaled by B-factor.
Figure 4 .
Figure 4. DCCMs of BRD9 calculated using the coordinates of the Cα atoms: (A) the APO-BRD9, (B) the 82I-BRD9, (C) the POJ-BRD9 and (D) the ALL-BRD9. The red-colored R1, R2 and R3 regions represent the primary differences among the four systems.
Figure 5 .
Figure 5. Concerted motions of structural domains in the four systems: (A) the APO-BRD9, (B) the 82I-BRD9, (C) the POJ-BRD9 and (D) the ALL-BRD9. In this figure, BRD9 is shown in cartoon mode and inhibitors are displayed in stick mode.
Figure 6 .
Figure 6. Free energy surface of the four systems arising from the TICA: (A) the k-means diagrams of the APO-BRD9, (B) the k-means diagrams of the 82I-BRD9, (C) the k-means diagrams of the POJ-BRD9 and (D) the k-means diagrams of the ALL-BRD9.
Figure 7 .
Figure 7. Lag time determination of the four systems: (A) the APO-BRD9, (B) the 82I-BRD9, (C) the POJ-BRD9 and (D) the ALL-BRD9. In the figure, each curve's color indicates a different implied timescale, and the black line with the gray shading below shows the model's resolution limit.
Figure 10 .
Figure 10. Flowchart of this study, describing the workflow followed in this work.
Table 1 .
The flux analysis data of APO-BRD9.
Table 2 .
Binding free energies of inhibitors to BRD9 obtained by the MM-GBSA method.
Climate Change Impact on Maize Yield in the Region of Novi Sad (Vojvodina)
DSSAT 4.0 is a crop model commonly used to quantify the climate change impact on agricultural production. The model's objective is to predict the duration of growth, average growth rates and the amount of assimilate partitioned to the economic yield components of the plant. The model structure is comprehensively described in this paper. The input parameters were taken from a nine-year field experiment with the NSSC 640 maize hybrid at Rimski Šančevi, Novi Sad, and climate data were provided by the Republic Hydrometeorological Service of Serbia (RHSS) for the current climate as well as for 2030 and 2050, using three climate models (ECHAM5, HadCM3 and NCAR-PCM) under two scenarios (A1B SRES and A2 SRES). DSSAT 4.0 maize yield simulations were performed for the period 1971-2000 as well as for 2030 and 2050. The change in yield for 2030 and 2050 was calculated relative to 1971-2000. The results showed lower yields in 2030 and 2050, especially under rainfed conditions.
Introduction
During the last decades of the twentieth century, a higher level of awareness has arisen about the impact of global climate change on the world economy (Houghton 1996). The most vulnerable part of the economy is agriculture, and a key focus of current research is to predict future changes in climate and to suggest how these changes will limit current agricultural technology at individual locations (Eitzinger et al. 2010). Every country monitors these changes in order to make adaptation decisions in agrotechnology.
Serbia is a developing country with favourable agroecological conditions, which has made agricultural production traditionally important part of the national economy.According to the Statistical Office of the Republic of Serbia (STAT 2006), the agriculture accounts for about 11.5% of gross value added in 2004.
The previous climate studies in Vojvodina region included comparative analyses of the current weather conditions and expected for 2040, 2080 and its impacts on agrometeorological conditions for some crops (Lalic et al. 2011).The impact of climate changes on winter wheat yield was quantified with Sirius crop model (Lalic et al. 2007, Lalic et al. 2011).Obtained results gave a possibility in making adaptation measures (Lalic et al. 2011, Mihailović et al. 2010).The succeeding research was focused on identification of agroclimatic parameters which in the best possible way point out the effects of climate change and variability on winter wheat yield change in the Pannonian lowland, with respect to different CO 2 concentrations and soil types (Lalic et al. 2012).
Maize grain has significant place in plant production in Serbia and it participates with 68.4% in whole plant production.Maize has the biggest sown area of all agricultural crops: 1,235,000 ha and maize production was 7,207,191 t in 2010.In economic aspect, maize grain export was 3.42 % and valued in external trade balance with 334,923,000 USD which was 1.18% of national GDP (STAT 2010).Maize production is organised mainly in non-irrigated conditions.
There is a need to estimate how climate changes may affect maize yield under climate change conditions.
Climate models and crop models are used to quantify the climate change impact on crop growth, dynamic and yield.It is possible to estimate the limits and vulnerability for crop production in climate change conditions, based on the model results.
The scientific community has successfully evaluated and validated DSSAT crop model for maize varieties in various regions (Samuhel et al. 2007, Maytin et al. 1995); potential adaptation measures estimated with DSSAT crop model were used by Wang et al. (2011), evapotranspiration rates by Tao et al. (2010), maize productivity and water use by Tao et al. (2010), and earlier sowing dates and new varieties by Kapetanaki et al. (1996).This paper presents: (a) DSSAT 4.0 calibration and validation results and (b) calculated relative change in yield for NSSC 640 maize hybrid for 2030 and 2050 in rainfed and irrigated conditions.The period 1971-2000 was considered as baseline period in the analysis.
A crop model results may indicate the limits and vulnerability for certain crop production in climate change conditions.This information will help both producers and plant breeders to select appropriate adaptation measures and to make long-term plans in agriculture for high and stable yield.
Materials and Methods
DSSAT 4.0 Model Description
The DSSAT 4.0 model was developed as a result of the IBSNAT project (International Benchmark Sites Network for Agrotechnology Transfer) to simulate biological crop demands and the most effective use of current and expected climate and soil resources (Tsuji et al. 1998). The model contains submodules that describe the atmosphere-soil-crop interaction. The functional scheme of the DSSAT 4.0 model consists of input data, submodules and output results (Fig. 1).
Meteorological Conditions. For current climate conditions, daily weather data that included meteorological element values (maximum temperature, minimum temperature, precipitation, evaporation, solar radiation and wind speed) were observed at the Rimski Šančevi weather station. For expected climate conditions in the Novi Sad region, daily meteorological data were obtained using three global climate models: ECHAM5 (Roeckner et al. 2003), HadCM3 (Gordon et al. 2000) and NCAR-PCM (Washington et al. 2000). The simulations were performed for the so-called "pessimistic" (SRES-A2) and "realistic" (SRES-A1B) greenhouse gas (GHG) emission scenarios for 2030 and 2050. The first step in an assessment of climate change impacts on agriculture is downscaling of climate model outputs. In this paper, the regionalisation of results was performed with the Met & Roll weather generator (Dubrovsky 1996, 1997). In all simulations, the CO2 concentration was 330 ppm. This approach is denoted as 'climate change only' in the crop simulation literature.
Climate Conditions
Current Climate (1971-2000). In Novi Sad, temperature and precipitation were observed for two periods: the April-September (AS) growing season and June-July-August (JJA), which is the critical period for maize water demands. In the AS period, the observed temperature was 17.8 °C with 360.1 mm of precipitation. During the JJA period, the temperature was 20.6 °C with 206.1 mm of precipitation.
Expected Climate (2030 and 2050). The ECHAM5 climate model under the A2 scenario predicted that in 2030 the temperature is expected to be 1.3 °C higher for the AS period and 1.5 °C higher for the JJA period, while precipitation may be 15% lower in AS and 33% lower in JJA than in 1971-2000. For 2050, ECHAM5 under the A2 scenario predicted temperatures 2.5 °C higher in the AS period and 3.1 °C higher in JJA, while precipitation in the AS period is expected to be 26% lower and in JJA 46% lower than in 1971-2000.
Soil Conditions. The soil type and characteristics (mechanical and chemical) were taken from Ćirić (2008). The values of the parameters used in the simulations are given in Tables 1 and 2.
Agrotechnology. The experimental field was managed at the Institute of Field and Vegetable Crops in Novi Sad over a nine-year period (Pejic et al. 2009). In the first experimental year, sowing was performed on April 20, 1997 with the NSSC 640 medium-season maize variety. In this trial, NSSC 640 was sown on a 35 m² field area, in a block system, in rows with a density of 5.7 plants/m² (57,143 plants/ha), at 5 cm depth, with 70 cm between rows and 25 cm between plants within a row. Mineral fertilizers were applied in fall (135 kg/ha of N, 135 kg/ha of P and 175 kg/ha of K) and spring (46 kg/ha of N as urea). Standard agronomic practices for maize growing were applied. The experimental aim was to test the sprinkler irrigation method and its effect on maize yield, with 180 mm of water supplied per vegetation period. The differences in yield values between the rainfed and irrigated maize fields were monitored.
Five genetic coefficients were defined in the maize simulations (Tab. 3). These coefficients are necessary input data because they describe the variety's phenological characteristics. They were derived by calculating the temperature sum for each vegetation phenophase (Ritchie et al. 1993).
Genetic coefficients were taken from the DSSAT 4.0 crop model for medium-late maize, such as NSSC 640. The DSSAT 4.0 crop model contains three submodules: a meteorological module (Weatherman), a soil module (Soilbuild) and a crop simulation module (CERES). In the meteorological module (Weatherman), observed meteorological element values and values from climate models were the input data, where they were classified and saved in special files necessary for further crop simulations. The soil module contained the observed soil characteristics as input data. After processing the data, the soil type for further crop modelling was defined. Soil nitrogen quantity and organic carbon fertility were also necessary input data.
Production of cereal crops that may be simulated in DSSAT 4.0 by the CERES crop model includes maize, wheat, barley, millet, sorghum and rice. A feature of this model is its capability to include cultivar-specific information, which makes it possible to predict cultivar variations in plant ontogeny and yield component characteristics and their interaction with weather and soil. CERES calculates the vegetation phase, growth rate and biomass partitioning among plant parts. These processes are dynamic and affected by the genetic coefficients and environmental conditions. Biomass growth is calculated using the radiation use efficiency approach; biomass production is partitioned between leaves, stems, roots, ears and grains. The proportion partitioned to each growing organ is determined by the stage of development and general growing conditions. The crop yield is defined as the product of the grain number per plant and the average kernel weight at physiological maturity. The grain number is calculated from the above-ground biomass growth during the critical stage in the plant growth cycle, for a fixed thermal time before anthesis. The grain weight is calculated as a function of the cultivar-specific optimum growth rate multiplied by the duration of grain filling (Tsuji et al. 1998). Based on the genetic coefficients, environmental conditions and field experiment data, it is possible to calibrate the crop model.
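The following minimal sketch ties these yield components together. It is not the actual CERES-Maize code: the function name, the simple linear grain-filling assumption and the example numbers are illustrative, while the default plant density is taken from the trial described in the Agrotechnology paragraph.

def grain_yield_t_ha(grains_per_plant, kernel_growth_rate_mg_day,
                     grain_fill_days, plants_per_ha=57_143):
    # Yield = grain number per plant x average kernel weight at maturity,
    # scaled by plant density; kernel weight is approximated as a
    # cultivar-specific filling rate times the grain-filling duration.
    kernel_weight_mg = kernel_growth_rate_mg_day * grain_fill_days
    grams_per_plant = grains_per_plant * kernel_weight_mg / 1000.0
    kg_per_ha = grams_per_plant * plants_per_ha / 1000.0
    return kg_per_ha / 1000.0  # tonnes per hectare

# Example: 500 grains/plant filling at 6.5 mg/day over 38 days -> about 7.06 t/ha
print(round(grain_yield_t_ha(500, 6.5, 38), 2))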
Model Calibration and Validation
The first step in crop model use is its calibration and validation. Using the input data, the model was calibrated for the NSSC 640 hybrid in rainfed and irrigated conditions by adjusting the genetic coefficients so that the simulated yield values and maturity duration were similar to the observed values from the experimental field at the Institute of Field and Vegetable Crops at Rimski Šančevi (Tab. 4 and Tab. 5).
The relative deviation between simulated and observed yield was 37.1% in the rainfed experimental field (Tab. 4). The highest relative deviation was found for 2000, 2002, 2003 and 2004 (Fig. 2), years in which the number of dry days was above the long-term average in the growing season. This significant difference between simulated and observed yield values is a consequence of the model's inability to simulate the plant reaction to stress in extreme conditions, such as high variations in daily air temperature and precipitation sums over short time intervals (Lalić et al. 2011). If the yield values in the dry years were excluded from the calculation (calibration), the relative deviation in yield would be 4% in rainfed conditions. In irrigated conditions (Tab. 5), the relative deviation between simulated and observed yield was 8.8%. The highest relative deviation was found for 2000, 1998 and 2004 (Fig. 3) in irrigated conditions, because those were the years with extremely dry conditions. If the yield values in the dry years are excluded from the calculation (calibration), the relative deviation in yield would be 5.1% in irrigated conditions.
The expected maize yield in 2030 and 2050
Using the current agrotechnology, soil characteristics data and synthesized weather series for three GCMs (H - ECHAM5, M - HadCM3, N - NCAR PCM) under two scenarios (A1B and A2), the yield was projected for 2030 and 2050. The yield results were obtained under rainfed and irrigated conditions. In Tables 6 and 7, the change in yield for 2030 and 2050, calculated relative to the baseline yield for 1971-2000, is presented.
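As a minimal illustration of how the entries in Tables 6 and 7 are derived, the helper below expresses a future-period yield as a change relative to the 1971-2000 baseline; the yield values used in the example are placeholders, not the paper's data.

def relative_yield_change(future_yield, baseline_yield):
    # Percentage change of a projected yield relative to the baseline yield.
    return (future_yield - baseline_yield) / baseline_yield * 100.0

baseline = 7.1      # t/ha, hypothetical 1971-2000 mean yield
future_2050 = 4.3   # t/ha, hypothetical rainfed projection for 2050
print(f"{relative_yield_change(future_2050, baseline):.1f}%")  # about -39.4%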
Analysing the obtained results, all GCMs and both scenarios gave a significant decrease in yield for 2030 and 2050. Using the DSSAT-CERES model, the decrease in maize yield under rainfed conditions was significantly larger (up to 40% in 2030 and 58% in 2050) than under irrigated conditions (20% in 2030 and 31% in 2050). For the expected climate, it is concluded that the irrigation regime has to be adapted to keep the yield high and stable. After detailed analysis, the two scenarios used (A1B, A2) gave similar yield results for a given integration period. In a comparison of the three climate model results, there were no differences between ECHAM5 and HadCM3 and only slight differences between HadCM3 and NCAR PCM in 2050.
Only the ECHAM5 and NCAR PCM climate models gave differences in yield results, ranging from 9 to 15%. The lowest future yield under both rainfed and irrigated conditions was predicted by the ECHAM5 model under the A2 scenario, in both the 2030 and 2050 integration periods.
Conclusions
The DSSAT 4.0 crop model structure was described in detail, and the model was calibrated and validated for the NSSC 640 medium-late maize. The yield results are necessary for adapting the current agrotechnology in order to keep yields high and stable under the expected climate conditions. In rainfed conditions, a very significant decrease in yield of up to 40% (ECHAM5, A2) in 2030 and up to 58% (ECHAM5, A2) in 2050 was predicted. In irrigated conditions, a lower yield of up to 20% (ECHAM5, A2) in 2030 and up to 31% (ECHAM5, A2) in 2050 was predicted. The expected higher temperatures and lower precipitation during the AS and JJA periods will affect the dynamics of the vegetation phases, as well as the yields. Based on climate data, soil water analysis and maize water demands, the water quantity necessary for a stable maize yield will be calculated. All relative changes in yield and the expected climate conditions point to lower precipitation in the vegetation period and a problem with soil water holding capacity. Control and better water holding capacity may be achieved by modifying the current agrotechnology through the introduction of adaptation measures: 1) earlier sowing (so that the soil water reserves are better used in the germination and emergence phenophases); 2) new genotypes tolerant to drought stress; 3) occasional shallow tillage (lower soil water losses); and 4) implementation of an irrigation system and method (in the summer vegetation phases, frequent heat days with maximum daily temperatures higher than 30 °C are expected, which could lead to a shorter vegetation duration and a smaller yield).
Table 3 .
Genetic coefficients of maize
Table 4 .
Validation of grain yield (t/ha) for NSSC 640 in rainfed conditions at Rimski Šančevi for the 1997-2005 period.
Figure 2. Simulated and observed grain yield of hybrid NSSC 640 (t/ha) in rainfed conditions at Rimski Šančevi for 1997-2005.
Table 5 .
Validation of grain yield (t/ha) for NSSC 640 in irrigated conditions at Rimski Šančevi for the 1997-2005 period.
Figure 3. Simulated and observed grain yield for NSSC 640 (t/ha) in irrigated conditions at Rimski Šančevi for the period 1997-2005 | 2018-12-06T11:23:29.652Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "0b8c69192a6fcd42502eb3b02fadee45c67a5e0a",
"oa_license": "CCBY",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/1821-3944/2013/1821-39441303022J.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0b8c69192a6fcd42502eb3b02fadee45c67a5e0a",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
237977790 | pes2o/s2orc | v3-fos-license | Atypical Antipsychotic Induced Weight Gain in Schizophrenic Patients
Atypical antipsychotics are widely prescribed and have the potential to cause weight gain, which may result in the development of metabolic syndrome; it is therefore important to monitor atypical antipsychotic use for metabolic disturbances. The purpose of this study is to determine the side effect of atypical antipsychotics on body weight gain in schizophrenia patients after 4 weeks of use. A retrospective design was used, and data were collected by consecutive sampling of 80 adult psychiatric inpatients (20 women and 60 men) with initial diagnoses of schizophrenia and with the same daily nutrition. The patients were hospitalized from January to March 2019, within the medium term (over 4 weeks) of initiation of an atypical antipsychotic. Patient body weight was recorded before and 4 weeks after the start of atypical antipsychotic treatment. The results showed that the patients (20 women and 60 men) receiving atypical antipsychotics had a mean age of 35.6 years, and 70% of women and 56% of men had a weight gain of 1–5 kg over 4 weeks. The mean weight observed among our subjects increased from 57.55±10.743 kg to 59.83±12.205 kg after initiating treatment (p=0.001). The dual combination of the atypical antipsychotics risperidone and clozapine was the most widely used regimen, with a share of 91.25%, followed by risperidone alone (5%) and clozapine alone (3.75%). It can be concluded that atypical antipsychotic use for at least 4 weeks can cause weight gain in schizophrenic patients. Pharmacists and doctors are advised to monitor the metabolic side effects of atypical antipsychotic use.
Introduction
Schizophrenia is a serious mental condition that affects 1% of the general population, and its treatment usually causes various side effects. Atypical antipsychotics are the second generation of antipsychotics developed and used to treat the symptoms of schizophrenia; examples include olanzapine, clozapine, and risperidone. 1 People with schizophrenia are twice as likely to die as the general population, largely due to cardiovascular disease, and lose more than 20 years of life expectancy. 1 Antipsychotic drugs reduce hallucinations and delusions in patients with schizophrenia, especially bipolar schizophrenia. 2 Atypical antipsychotics are the first-line treatment of schizophrenia and are classified based on the side effects of extrapyramidal syndrome and tardive dyskinesia. 3 Typical antipsychotics are rarely used and are being replaced with atypical antipsychotics because of their smaller extrapyramidal and tardive dyskinesia side effects. 2 A meta-analysis reported that antipsychotics cause various cardiovascular diseases compared to placebo, and it has also been reported that 35.3% of patients who received atypical antipsychotics developed metabolic syndrome. 4 Significant weight gain occurred after 6-8 weeks of atypical antipsychotic use, and other studies have also found patterns of weight gain after 47 days of initial use of olanzapine and risperidone. 5,6 A study conducted in an Indonesian hospital showed that the most widely prescribed antipsychotic was risperidone, with a percentage of 35.71%. 7 Another study in Indonesia showed that the percentage of use of the atypical antipsychotic combination risperidone-clozapine in schizophrenic patients was 43.4%. 8 In addition, a cross-sectional descriptive analytic study conducted in psychiatric patients treated at Bali Provincial Mental Hospital from January 2018 to February 2018 found a high prevalence of metabolic syndrome (MS) among patients with mental disorders. A total of 245 samples were included in that study, and the prevalence of MS was 48.6%. By sex, females had a higher proportion among MS patients than males (42.8% vs 60.7%). 9 The objective of this study was to determine the side effect of atypical antipsychotics on body weight gain in schizophrenia patients.
Methods
Using a retrospective design, body weight data before and after 4 weeks of treatment with atypical antipsychotics were collected for 80 adult psychiatric inpatients (20 women and 60 men) diagnosed with schizophrenia. Ethical clearance for this study was issued by Prof. The inclusion criteria for the study sample were patients with an initial diagnosis of schizophrenia, patients with a history of combination or single therapy with atypical antipsychotics for at least 1 month, and female and male patients aged 18 years and over. The exclusion criteria were schizophrenic patients who met the inclusion criteria but were excluded because treatment was stopped or did not reach the minimum treatment time of 1 month; incomplete treatment profile and supporting examination data in the medical record (the required data were not listed or were illegible); patients with an additional diagnosis of type 2 diabetes mellitus and/or hypertension and/or dyslipidemia at the time of atypical antipsychotic therapy; and patients taking oral antidiabetics (OAD) or insulin and/or antihypertensive and/or lipid-lowering drugs at the time of antipsychotic therapy. Data on diagnosis, drug use, and patients' body weight were taken from medical records. Data on body weight before and 4 weeks after treatment with atypical antipsychotics were analyzed using the Wilcoxon test.
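The paired before/after comparison described above can be reproduced with a short script; the sketch below uses SciPy's Wilcoxon signed-rank test on illustrative weights rather than the study data.

import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired weights (kg) for a handful of patients, before treatment
# and 4 weeks after initiating an atypical antipsychotic.
weight_before = np.array([55.0, 62.5, 48.0, 70.2, 58.3])
weight_after = np.array([57.5, 63.0, 50.5, 71.0, 60.1])

# Wilcoxon signed-rank test on the paired differences.
stat, p_value = wilcoxon(weight_before, weight_after)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")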
Results
The results for the 80 patients receiving atypical antipsychotics show that the mean age was 35.6 years, and 70% of female and 56% of male patients gained 1-5 kg in weight over a period of 4 weeks. The most widely used regimen was the dual atypical antipsychotic combination risperidone-clozapine, with a share of 91.25%, followed by risperidone alone (5%) and clozapine alone (3.75%). The patients' characteristics can be seen in Table 1. Table 2 shows the weight gain during atypical antipsychotic therapy of the 80 adult psychiatric inpatients diagnosed with schizophrenia within the medium term (over 4 weeks) of initiation of an atypical antipsychotic. Wilcoxon analysis found a significant association between the use of atypical antipsychotics and the patients' weight gain. The mean weight observed among the subjects increased from 57.55±10.743 kg to 59.83±12.205 kg after initiating treatment (p=0.001).
Discussion
The most widely used atypical antipsychotic regimen was the dual combination risperidone-clozapine, with a share of 91.25%, followed by risperidone alone (5%) and clozapine alone (3.75%). All patients received nutrition controlled by the calculation of calorie needs by the hospital's nutritionists.
Antipsychotic combinations were commonly given to patients in order to stabilize the disease and to achieve a greater therapeutic response when the response to a single antipsychotic was inadequate. However, the use of a combination of antipsychotics, including the prescription of higher total doses, is feared to increase the risk of side effects. For treatment-resistant schizophrenic patients who do not respond to clozapine alone, the addition of a second antipsychotic is an alternative therapy. Adding another antipsychotic to clozapine increases dopamine D2 receptor occupancy; however, this increase may also be related to an increase in the risk of extrapyramidal symptoms (EPS). 10 The most serious side effects of using atypical antipsychotic drugs are obesity 11 and MS, which cause an increased risk of cardiovascular disease. 12 The rate of metabolic syndrome increases over time as the use of atypical antipsychotics continues, and patients starting therapy with atypical antipsychotics have a three times higher chance of developing metabolic syndrome. Patients who receive olanzapine have an MS prevalence of 20-25%, risperidone 9-24%, and haloperidol 9%. Another study reported that outpatients with schizophrenia and schizoaffective disorder receiving clozapine had an MS prevalence of 20-25%, followed by risperidone at 9-24% and haloperidol at 0-3%. 13 The relationship between female sex and obesity in schizophrenic patients is caused by many factors. It can result from differences in pharmacokinetics between male and female patients, CYP450 enzyme activity that can cause differences in plasma concentrations, and the therapeutic profiles and concentrations of antipsychotics such as olanzapine and clozapine. Research with a larger population is needed to examine the effect and mechanism of gender on body weight gain. 14 This study has several limitations. First, the influence of independent variables (such as patients' demographic data and nutritional intake before therapy) on weight gain could not be analyzed since the study was conducted retrospectively. Second, the number of samples in our study was limited, owing to frequent changes of treatment regimens.
Conclusion
Atypical antipsychotic use for at least 4 weeks can cause weight gain in schizophrenic patients. Pharmacists and doctors are advised to monitor the metabolic side effects of atypical antipsychotic use. | 2021-08-23T20:19:13.269Z | 2021-03-30T00:00:00.000 | {
"year": 2021,
"sha1": "106528fc1aead19fba0164dd24ccbec2592ac6e2",
"oa_license": "CCBYNC",
"oa_url": "https://jurnal.unpad.ac.id/ijcp/article/download/23595/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a46c74bfe78b17b38aed57f37c9f11405640ad3a",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
269220187 | pes2o/s2orc | v3-fos-license | A power load forecasting model based on a combined neural network
The supply of electric power is vital for the daily lives of people, industrial production, and business services. At present, although enough electric power can be supplied to meet the power demand, there are still some challenges, especially in terms of long-distance power transmission and long-term power storage. Consequently, if the power production capacity exceeds the immediate consumption requirements, i.e., the produced electric power cannot be consumed in a short period, much electric power could be wasted. Evidently, to minimize the wastage of electric power, it is necessary to properly plan power production by accurately forecasting the future power load. Therefore, a preferable power load forecasting algorithm is crucial for the planning of power production. This paper proposes a novel deep learning model for the purpose of power load forecasting, termed the SSA-CNN-LSTM-ATT model, which combines the CNN-LSTM model with SSA optimization and attention mechanisms. In this model, the CNN module extracts the features from the sequential data, and then the features are passed to the LSTM module for modeling and capturing the long-term dependencies hidden in the sequences. Subsequently, an attention layer is employed to measure the importance of different features. Finally, the output is obtained through a fully connected layer, yielding the forecasting results of the power load. Extensive experiments have been conducted on a real-world dataset, and the metric R^2 can reach 0.998, indicating that our proposed model can accurately forecast the power load.
I. INTRODUCTION
Power load forecasting is vital to the stability of the power system, which helps to improve the efficiency of power production and utilization. Specifically, a preferable power load forecasting method can help reduce electricity waste and power costs and ensure power grid security. 1,2 However, there are various factors affecting the power load, such as time, temperature, and holidays; thus, it is quite difficult to forecast the power load accurately. 3,4 Therefore, the accuracy enhancement of power load forecasting has become an important issue in recent years. Essentially, power load forecasting is a problem of time-sequence forecasting, i.e., the task is to forecast future electric power levels based on previous ones. For the task of time-sequence forecasting, the current methods include traditional statistical methods, machine learning methods, and deep learning methods.
In the early years, power load forecasting techniques mainly relied on traditional statistical methods, which can be applied without complex data analysis or high computational complexity. These methods analyze the historical records of power load and decompose the seasonal patterns to build time-sequence models for forecasting the future power load, such as the Linear Regression (LR) method 5 and the Autoregressive Integrated Moving Average (ARIMA) method. 6 The forecasting accuracy of these methods is high when the historical records of power load are linear and regular. However, the variation of power load is typically not stable and can be influenced by many factors, and the features hidden in the historical records of power load cannot be captured by traditional statistical methods. In addition, these methods perform badly in long-term power load forecasting.
With the advent of machine learning concepts, some machine learning methods have been applied to power load forecasting. 7 Machine learning methods are suitable for solving forecasting problems, and they can capture the nonlinear relationships and patterns hidden in complex data. Typical works include Random Forests (RF), 8,9 Support Vector Regression (SVR), 10 and XGBoost (XGB). 11 Machine learning methods can also take into account multiple factors that affect the power load, rather than only considering the single feature of historical electricity consumption. In addition, machine learning methods are able to process large-scale data, so they have an outstanding advantage in the long-term forecasting of power loads.
In recent years, deep learning methods have made significant advancements in power load forecasting. The first deep learning method used for power load forecasting was the backpropagation neural network (BPNN), 12 which is a feed-forward neural network. However, BPNN does not perform well in learning time-sequence data. 3 Another deep learning model, the Recurrent Neural Network (RNN), 13 focuses on learning time-sequence data. RNN can effectively deal with the time, temperature, holidays, and other characteristics of the power load data. Nevertheless, when RNN deals with long sequences, gradient disappearance or gradient explosion can occur, and it can struggle to capture long-term dependencies. To solve the above-mentioned problems, Hochreiter and Schmidhuber proposed the Long Short-Term Memory (LSTM) recurrent neural network. 14,15 Furthermore, some works combine Convolutional Neural Networks (CNNs) and LSTM into a hybrid model, which jointly exploits the advantages of CNN and LSTM for the task of power load forecasting and can achieve good forecasting accuracy. 16,17 However, the combined hybrid model treats the input features equally, while the effects of the influencing factors on the power load are quite different. 18 Moreover, the number of units in the hidden layer of the LSTM model has a great impact on the forecasting results, and parameter tuning is important as well.
With the appearance of the transformer, there have been new innovations in power load forecasting models. Ran et al. 19 proposed a power load forecasting method based on a transformer framework that achieves good results in terms of forecasting accuracy while consuming a large amount of time. Xu et al. 20 brought the Informer structure into power load forecasting; although the forecasting accuracy can be improved, the time cost still remains very high. Rajbhandari et al. 21 conducted a study of electricity demand and its relation to the previous day's lags and temperature by examining the case of a consumer distribution center in urban Nepal. The effect of temperature on load was specially investigated.
For the forecasting of power loads, the critical factors that affect the power load mainly include time, temperature, humidity, light intensity, wind power, rainfall, etc. Among these factors, the power load typically shows an obvious regularity over time, because each year has the same periodic seasons, holidays, and other temporal factors that significantly affect the power load. In addition, the temperature is also strongly related to the power load required by industrial production and residential life. To this end, we select time and temperature as the main factors related to the power load. Note that the time cost also needs to be controlled. Therefore, we choose CNN-LSTM as the basic framework and integrate an attention layer 18,22 after the CNN-LSTM layer to handle the multi-feature inputs. In addition, we use the Sparrow Search Algorithm (SSA) 22,23 to optimize the number of units in the LSTM.
The challenge of this research is mainly that the accuracy requirements for power load forecasting are very high in order to meet the electricity production planning requirements of power companies (such as our company, State Grid Jiangsu Electric Power Company). Specifically, the power load of tomorrow must be accurately predicted to determine the electricity production of tomorrow. If the electricity production exceeds the actual power load, then some electricity resources could be wasted; otherwise, if the electricity production is much smaller than the actual power load, then much electricity must be purchased from other provinces, reducing the production profit (the cost of purchased electricity is evidently larger than that of produced electricity).
This paper is organized as follows. In Sec. II, we introduce our proposed model. Then, in Sec. III, the SSA-CNN-LSTM-ATT model is compared with some other models in terms of the forecasting accuracy of power load. Finally, the conclusion is drawn in Sec. IV.
II. SSA-CNN-LSTM-ATT MODEL
Figure 1 shows the flow chart for power load forecasting. First, the dataset is divided into two parts: training data and testing data. Then, our proposed SSA-CNN-LSTM-ATT is trained, and the model parameters are optimized for power load forecasting.
Figure 2 illustrates the structure of SSA-CNN-LSTM-ATT. The main network structure of SSA-CNN-LSTM-ATT is the CNN-LSTM network, which adopts the SSA algorithm for the parameter optimization of the LSTM. Moreover, an attention layer is added after the LSTM layer to further improve forecasting accuracy.
A. CNN-LSTM
The CNN-LSTM network comprises a convolutional neural network module and a long short-term memory module. The functions of the two modules are described as follows. The CNN is used as a feature extractor: by employing convolutional layers, it effectively learns local features from the time-sequence data. After the CNN layer, the sample is flattened into a one-dimensional vector as the input of the LSTM layer.
The LSTM module is used for sequence modeling. The extracted features are input into the LSTM module to capture temporal dependencies. By utilizing memory units and gating mechanisms, the LSTM module effectively handles long-term dependencies and learns the temporal patterns hidden in the sequence data. The LSTM module continuously adapts its internal memory states and generates outputs based on the previous states and inputs.
As illustrated in Fig. 2, combining the feature extraction of CNN for sequence data with the temporal dependency capturing of LSTM yields a more comprehensive and accurate modeling capability. This combination significantly enhances the performance and effectiveness of our proposed SSA-CNN-LSTM-ATT model.
B. Attention mechanism
The time-series data output by the CNN-LSTM have a temporal dependence, and the importance of temporal features varies over time. Some subsequences may have higher feature importance than others; if they are left untreated, their importance cannot be reflected. In our work, an attention layer is added after the LSTM layer to address this issue, and the attention mechanisms proposed by Bahdanau et al. 24 and Luong et al. 25 are applied, respectively. Finally, the one with the best performance is selected. The Bahdanau attention mechanism and the Luong attention mechanism have the same basic principles. Figure 3 shows the execution process of the Luong attention mechanism. First, the encoder generates the hidden states for each element in the input sequences. The previous hidden states and the output of the decoder are used by the decoder LSTM to generate a new hidden state. According to the new hidden state of the decoder and the hidden states of the encoder, the alignment scores are calculated. The alignment scores of each hidden state of the encoder are combined and represented as a single vector, which is then softmaxed. After that, the hidden states of the encoder are multiplied by their respective alignment scores to form a context vector. Finally, the context vector is concatenated with the hidden state of the decoder, and a new output is generated through a fully connected layer. The difference between the Bahdanau attention mechanism and the Luong attention mechanism lies in the calculation of the alignment scores.
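A compact Keras-style sketch of the CNN-LSTM-attention arrangement described above is given below for illustration only. The filter count, kernel size, window length, and default number of LSTM units are placeholder assumptions (the LSTM unit count is the quantity tuned by SSA), tf.keras.layers.Attention is used as a stand-in for Luong dot-product attention, and the Dense(16)/Dense(1) head follows the layer listing mentioned later in the text.

from tensorflow.keras import layers, models

def build_cnn_lstm_att(window_len=12, n_features=5, lstm_units=64):
    inp = layers.Input(shape=(window_len, n_features))
    # CNN module: extract local features from the input window
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(pool_size=2)(x)
    # LSTM module: capture temporal dependencies in the extracted features
    x = layers.LSTM(lstm_units, return_sequences=True)(x)
    # Dot-product (Luong-style) attention over the LSTM outputs
    x = layers.Attention()([x, x])
    x = layers.GlobalAveragePooling1D()(x)
    # Fully connected head producing the load forecast
    x = layers.Dense(16, activation="relu")(x)
    out = layers.Dense(1)(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_cnn_lstm_att()
model.summary()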
(Fragments of Table I: the current location of the best discoverer; A, a matrix with randomly assigned elements equal to 1 or -1.)
The unit parameter setting of the LSTM is an important issue for model training. In SSA-CNN-LSTM-ATT, the number of LSTM units is optimized by the SSA algorithm. The SSA algorithm updates the positions and velocities of sparrows in a random manner and selects the best solutions (with the minimum loss) based on the fitness evaluation of each sparrow. The schematic diagram of the SSA optimization algorithm is shown in Fig. 4.
The explanations of main symbols are provided in Table I.
The optimization process for the number of LSTM units by SSA is described as follows: (i) Initialization of the population. A set of initial sparrow individuals is randomly generated, where each individual represents a configuration of the parameters to be optimized. (ii) Fitness evaluation. The fitness of each individual is calculated based on a metric such as the Mean Squared Error (MSE). (iii) Position updates. The positions of the sparrows are updated based on their fitness and current positions; this updating process simulates the movements and adjustments of sparrows during the search. The initial number of sparrows is set to N, and the positions of the discoverers, the trackers (followers) and the hazard-aware sparrows are each updated according to the corresponding SSA update formulas. (iv) Termination condition. The termination of the optimization process is determined by a stopping criterion; if the termination condition is not satisfied, the process returns to (ii). (v) Output of the optimal solution. The optimal number of LSTM units is output. The network head then performs dimensionality reduction with a Dense (16) layer followed by a Dense (1) output layer.
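A simplified stand-in for this search is sketched below. It follows steps (i)-(v) but replaces the full discoverer/tracker/hazard-warning update formulas with a generic perturbation toward the current best candidate, and the fitness function is a placeholder rather than an actual training run, so it should not be read as the paper's SSA implementation.

import random

def fitness(n_units):
    # Placeholder: in practice this would train the CNN-LSTM-ATT model with
    # `n_units` LSTM units and return its validation MSE.
    return (n_units - 72) ** 2 / 1000.0 + random.random() * 0.01

def ssa_like_search(pop_size=10, iters=20, lo=16, hi=256):
    population = [random.randint(lo, hi) for _ in range(pop_size)]  # (i) init
    best_units, best_fit = None, float("inf")
    for _ in range(iters):                    # (iv) termination: fixed budget
        scores = [fitness(u) for u in population]                   # (ii) fitness
        for u, s in zip(population, scores):
            if s < best_fit:
                best_units, best_fit = u, s
        # (iii) simplified position update: drift toward the best plus noise
        population = [
            max(lo, min(hi, int(u + 0.5 * (best_units - u) + random.randint(-8, 8))))
            for u in population
        ]
    return best_units                                               # (v) output

print("selected LSTM units:", ssa_like_search())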
SSA-CNN-LSTM-ATT adopts the Mean Squared Error (MSE) as the loss function, which is commonly used for regression problems. MSE measures the average squared difference between the forecasting values and the real values; a smaller MSE indicates a smaller gap between them. MSE is written as
MSE = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2,
where n denotes the number of samples, y_i denotes the real value, and \hat{y}_i denotes the forecasting value. MSE calculates the loss by computing the squared differences between each forecasting value and the corresponding real value.
E. Dataset
The dataset used for the experiments is a real-world dataset (the power load dataset of Jiangsu Province in 2022). The sampling points are collected every 5 min, for a total of 105 120 sample points. The dataset is in CSV format. The format of the dataset is shown in Table II, where "high_tmp" and "low_tmp" represent the highest and lowest temperatures (in degrees Celsius) of the day, respectively, and "xq" denotes the day of the week. The last column, "load", denotes the power load. The first five columns are input features, and the last column is the value to be forecasted. After EDA analysis, it can be found that the dataset has no missing or abnormal values, and the data are true and credible (because the data are obtained from the real power grid).
As shown in Fig. 5, the horizontal axis represents the value of the power load, and the vertical axis represents the frequency. When the value of the power load falls into the interval [67 600, 84 100], the highest frequency and proportion are obtained. As depicted in Table III, the deep learning model innovatively combines four modules: CNN, LSTM, attention, and the SSA optimization algorithm, to achieve high accuracy in power load forecasting. Our proposed model first extracts the features of the input time series through a CNN layer, and then the LSTM layer captures the long-term dependencies for sequence modeling. The attention layer processes the importance of different time steps, allowing the model to adaptively focus on the most important historical information at the current prediction time step rather than simply considering all historical data equally. In addition, the parameters of the LSTM units are optimized by the SSA optimization algorithm to achieve the optimal configuration. Therefore, high accuracy in power load forecasting can be achieved by combining the above four modules.
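A rough sketch of how supervised samples could be assembled from the CSV described in Table II follows; the file name, the one-hour window length, and the restriction to the three named feature columns (the full model uses five input columns) are assumptions made for illustration.

import numpy as np
import pandas as pd

df = pd.read_csv("jiangsu_load_2022.csv")       # hypothetical file name
feature_cols = ["high_tmp", "low_tmp", "xq"]    # column names given in Table II
target_col = "load"                             # last column: value to forecast

window = 12                                     # 12 x 5 min = one hour of history
values = df[feature_cols].to_numpy(dtype=float)
target = df[target_col].to_numpy(dtype=float)

X, y = [], []
for t in range(window, len(df)):
    X.append(values[t - window:t])              # past window of features
    y.append(target[t])                         # load at the next step
X, y = np.array(X), np.array(y)
print(X.shape, y.shape)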
G. Evaluation metrics
We employ the following metrics for measuring forecasting accuracy. R^2 (R-squared) is a statistical metric used to evaluate the goodness-of-fit of a regression model, and it is expressed as
R^2 = 1 - \frac{\sum_i \left(\hat{y}_i - y_i\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2},
where y_i represents the real values, \hat{y}_i the forecasting values, \sum_i (\hat{y}_i - y_i)^2 the sum of squares of the differences between the forecasting and real values, and \sum_i (y_i - \bar{y})^2 the sum of squares of the differences between the real values and their mean. The R^2 value ranges from 0 to 1: a value closer to 1 indicates that the model better explains the variability of the dependent variable (a better fit), while a value closer to 0 indicates a weaker ability to explain the dependent variable (a poorer fit). MAE (Mean Absolute Error) is the average of the absolute differences between the forecasting values and the real values,
MAE = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|,
where n denotes the number of sample points; a smaller MAE indicates a higher forecasting accuracy. RMSE (Root Mean Squared Error) measures the root mean squared difference between the forecasting values and the real values,
RMSE = \sqrt{MSE},
where MSE denotes the mean squared error defined above. A smaller RMSE indicates a higher forecasting accuracy; compared with MAE, RMSE is more sensitive to larger forecasting errors.
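For reference, the three metrics can be computed directly as in the following NumPy sketch; the arrays are illustrative, not values from the paper.

import numpy as np

def r2(y, y_hat):
    ss_res = np.sum((y_hat - y) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(y, y_hat):
    return np.mean(np.abs(y_hat - y))

def rmse(y, y_hat):
    return np.sqrt(np.mean((y_hat - y) ** 2))

y_true = np.array([70500.0, 71200.0, 69800.0, 72400.0])
y_pred = np.array([70210.0, 71050.0, 70110.0, 72080.0])
print(r2(y_true, y_pred), mae(y_true, y_pred), rmse(y_true, y_pred))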
III. EXPERIMENT RESULTS AND ANALYSIS
A. Forecasting accuracy
First, we compare the performance of the different attention mechanisms by training the models using the Luong attention mechanism and the Bahdanau attention mechanism. We selected 10 512 sampling points from the winter as the testing dataset. The experiment results are shown in Table IV.
From Table IV, it can be observed that the Luong attention mechanism outperforms the Bahdanau attention mechanism. Additionally, the models with attention mechanisms yield better forecasting results compared to those without attention mechanisms. Therefore, we adopted the Luong attention mechanism as the attention layer in the following experiments.
From Tables V and VI, we find that, compared with both deep learning models and machine learning methods, our proposed SSA-CNN-LSTM-ATT achieves a significant improvement in terms of R^2, which can reach a value of 0.998. Additionally, SSA-CNN-LSTM-ATT yields a smaller error (with an MAE of 266 and an RMSE of 355), indicating that more accurate and stable forecasting results can be obtained by SSA-CNN-LSTM-ATT. The above-mentioned experiments were conducted on three hours of data. In Sec. III C, we conduct daily, weekly, and monthly experiments in summer and winter, respectively.
B. Error metric
As shown in Fig. 6, the average error ratio of our proposed SSA-CNN-LSTM-ATT model is about 0.002, which is obviously lower than that of other models, also indicating that our proposed model has a higher prediction accuracy.
C. Loss and forecasting results in winter and summer
As shown in Fig. 7, the loss value rapidly decreases and converges, ultimately tending toward 0.
This experiment is conducted at high temperatures in the summer and low temperatures in the winter. The test is divided into three time dimensions: month, week, and day. Our method is compared with the SVR method.
As shown in Figs. 8-10, we first test the winter data. The blue solid line represents the true values, the green dashed line represents the forecasted values of the SVR method, and the orange dashed line represents the forecasting values of our method. From these figures, it is obvious that our method fits better with the true values, whether forecasting for a month, a week, or a day.
Then, we test the summer data, as shown in Figs. 11-13. As shown in these figures, the orange dashed line almost fits perfectly with the blue solid line, while the green dashed line still shows a significant gap. This shows that our method also has high accuracy on the high-temperature summer data.
D. Time cost
Experiment results indicate that the average forecasting time (one week) of our proposed model is about 2.33 s, while the average forecasting time (30 days) is about 8.67 s, which are acceptable.
IV. CONCLUSIONS
This paper proposes a CNN-LSTM model combining the SSA algorithm and the Luong attention mechanism for power load forecasting. Our proposed model takes advantage of the feature extraction ability of CNN and the time-sequence modeling advantage of LSTM, adds a Luong attention layer to weight the outputs of the LSTM, and then optimizes the number of LSTM units through the SSA algorithm to yield preferable forecasting results. This paper evaluates the performance of the proposed model by training and testing it on a real-world dataset from Jiangsu Province in 2022. The experiment results show that this model has the lowest MAE and RMSE compared with other models or methods, while the R^2 value is the highest.
TABLE I .
Explanations of symbols.
TABLE II .
Format of the dataset.
TABLE IV .
Comparisons between two attention mechanisms.
TABLE V .
Comparisons among related models. Boldface denotes the best results in Tables V and VI.
TABLE VI .
Comparisons among related methods. | 2024-04-19T15:21:25.596Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "8970e24f32a1c9b4c096bdc281542a3885eb7258",
"oa_license": "CCBY",
"oa_url": "https://pubs.aip.org/aip/adv/article-pdf/doi/10.1063/5.0185448/19885819/045231_1_5.0185448.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "893dde4421929b7c55cf749542c932114d6fe382",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
} |
248386340 | pes2o/s2orc | v3-fos-license | Assessment and Counseling Gaps Among Former Smokers Eligible for Lung Cancer Screening in US Adults
Background Lung cancer screening (LCS) for former and current smokers requires that current smokers are counseled on tobacco treatment. In the USA, over 4 million former smokers are estimated to be eligible for LCS based on self-report for “not smoking now.” Tobacco use and exposure can be measured with the biomarker cotinine, a nicotine metabolite reflecting recent exposure. Objective To examine predictors of tobacco use and exposure among self-reported former smokers eligible for LCS. Design Cross-sectional study using the 2013–2018 National Health and Nutrition Examination Survey. Participants Former smokers eligible for LCS (n = 472). Main Measures Recent tobacco use was defined as reported tobacco use in the past 5 days or a cotinine level above the race/ethnic cut points for tobacco use. Recent tobacco exposure was measured among former smokers without recent tobacco use and defined as having a cotinine level above 0.05 ng/mL. Key Results One in five former smokers eligible for LCS, totaling 1,416,485 adults, had recent tobacco use (21.4%, 95% confidence interval (CI) 15.8%, 27.0%), with about a third each using cigarettes, e-cigarettes, or other tobacco products. Among former smokers without recent tobacco use, over half (53.0%, 95% CI: 44.6%, 61.4%) had cotinine levels indicating recent tobacco exposure. Certain subgroups had higher percentages for tobacco use or exposure, especially those having quit within the past 3 years or living with a household smoker. Conclusions Former smokers eligible for LCS should be asked about recent tobacco use and exposure and considered for cotinine testing. Nearly 1.5 million “former smokers” eligible for LCS may be current tobacco users who have been missed for counseling. The high percentage of “passive smokers” is at least double that of the general nonsmoking population. Counseling about the harms of tobacco use and exposure and resources is needed.
INTRODUCTION
The US Preventive Services Task Force (USPSTF) recommends annual lung cancer screening (LCS) with low-dose computed tomography in eligible adults. As of 2021, the updated guidelines recommend screening for adults aged 50-80, who have a 20 pack-year smoking history, and currently smoke or have quit within the past 15 years. 1 According to the 2010 National Health Interview Survey (NHIS), 14.3% of the US population met the previous criteria for LCS, half of which were former smokers. 2 Extrapolating to the entire US population, 8.6 million Americans met the criteria for LCS, including 4.1 million former smokers. 2 As part of annual LCS, smoking cessation interventions are strongly recommended for current smokers by the USPSTF and required by the Center for Medicare and Medicaid Services (CMS). 3 A study by Howard et al. also recommends smoking cessation interventions for those eligible for screening and to include smoking cessation in clinical practice, as those advised by their physician are more likely to quit. 4 After the initial 2014 USPSTF guidelines, smoking cessation interventions documented in the electronic medical record increased among eligible current smokers from 30.1 to 34.0%. 5,6 While recommendations are clear for current smokers who present for LCS, 7 the guidelines for counseling former smokers are not well defined.
Prior studies have used cross-sectional statistical methods to inform data-driven understanding of screening trends, which can then inform care delivery. [8][9][10] A study of current smokers using the National Health and Nutrition Examination Survey (NHANES) found good reliability of self-reported current smoking status and cotinine levels, which reflect tobacco exposure in the past few days, among adults eligible for LCS. 11 However, less is known about the tobacco use and exposure among former smokers who are eligible for LCS.
Understanding former smoker behavior is important because they may relapse (i.e., when a former smoker returns to smoking regularly 12 ), especially within the first 6-12 months of quitting, 13 and it can take smokers multiple attempts to quit for good. 14 Also, while smoke exposure has substantially declined in the USA, 21.0% of nonsmoking adults aged 20 years and older, including former smokers, still had detectable cotinine levels in 2018, indicating exposure to tobacco smoke. 15 The Surgeon General has concluded that there is no risk-free level of smoke exposure, and eliminating exposure is important not just for preventing lung cancer but more immediately for cardiovascular mortality. 16 The purpose of this study is to identify high-risk former smokers who would benefit from targeted counseling at the time of LCS.
Data Source
We used data from the 2013/2014 to 2017/2018 National Health and Nutrition Examination Survey (NHANES) (n = 28,061), as the tobacco questions were consistent during this time period. 17 NHANES is a nationwide probability sample of the US civilian noninstitutionalized population conducted continuously and released in 2-year cycles. 18
Study Sample
We included former smokers who were eligible for LCS (n = 532) (Fig. 1). Eligible former smokers who were missing cotinine (n = 33), using nicotine replacement therapy (n = 12), or had previously been diagnosed with lung cancer (n = 20) were excluded, for a final sample size of 472. This analysis of deidentified, secondary data was exempt from review as compliant with the policy of the University of California, Davis Institutional Review Board.
Former smokers were classified as those who had smoked at least 100 cigarettes in their life and responded, "Not at all" to the question "Do you now smoke cigarettes?". Former smokers were eligible for LCS if they were 50-80 years old, had a 20 pack-year smoking history, and quit within the past 15 years. Pack-years were calculated by multiplying the calculated cigarette packs (number of cigarettes smoked per day divided by 20 cigarettes per pack) by the number of years smoked. Years since quitting was self-reported based on the following question "How long has it been since you quit smoking cigarettes?". Years since quitting smoking were also categorized at 0-3, 4-6, and > 6 years, based on sample sizes. Age was top coded at 80, and therefore included adults 80 and older (n = 39) as eligible for LCS.
Study Outcomes: Tobacco Use and Exposure
This study included two outcomes: recent tobacco use and exposure. Former smokers were classified as having recent tobacco use based on two sources: (1) self-reported use of any tobacco products in the past 5 days or (2) cotinine levels above the cut point for each racial/ethnic group (≥ 5.92 ng/mL for non-Hispanic Black, 4.85 ng/mL for non-Hispanic White, 0.84 ng/mL for Hispanic [this level reflects the largest subgroup being Mexican Americans 19 ], and 3.08 ng/mL for all others). 20 Former smokers without recent tobacco use were classified as having recent tobacco exposure if their cotinine levels were greater than 0.05 ng/mL. Although NHANES now has a lower limit of detection (0.011 ng/mL), we used 0.05 ng/mL for historical comparison with the general population.
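A small helper encoding these outcome definitions might look as follows; the function and argument names are hypothetical, while the cut points are those listed above.

COTININE_CUTPOINTS = {  # ng/mL, race/ethnicity-specific cut points from the text
    "non-Hispanic Black": 5.92,
    "non-Hispanic White": 4.85,
    "Hispanic": 0.84,
    "other": 3.08,
}

def classify_former_smoker(self_reported_use_past5days, cotinine_ng_ml, race_eth):
    # Recent use: self-reported use in the past 5 days OR cotinine at or above
    # the group-specific cut point; among those without recent use, cotinine
    # above 0.05 ng/mL marks recent tobacco exposure.
    cutpoint = COTININE_CUTPOINTS.get(race_eth, COTININE_CUTPOINTS["other"])
    if self_reported_use_past5days or cotinine_ng_ml >= cutpoint:
        return "recent tobacco use"
    if cotinine_ng_ml > 0.05:
        return "recent tobacco exposure"
    return "no recent use or exposure"

print(classify_former_smoker(False, 0.32, "non-Hispanic White"))  # exposure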
Tobacco History
Former smokers who recently used a tobacco product were further classified into using cigarettes, e-cigarettes, or another tobacco product (pipes, cigars, and smokeless tobacco were categorized together due to small sample sizes). We did not consider dual use of tobacco products due to small sample sizes. Former smokers were classified as living with a household smoker, if they responded with one or more to "How many people who live here smoke cigarettes, cigars, little cigars, pipes, water pipes, hookah, or any other tobacco product?" Former smokers were classified as being recently exposed to indoor smoke outside the home, if they reported that they spent time in an area (work, restaurant, bar, car, another home, or other indoor area) with someone else smoking in the past 7 days.
Demographics and Medical Conditions
Self-reported demographic characteristics included age (50-64, 65-74, and 75-80 years), race/ethnicity (Hispanic, non-Hispanic White, non-Hispanic Black, and non-Hispanic Asian), gender (male, female), income (less than or equal to 100% of the federal poverty level), education (less than a high school graduate or GED and some college or a college graduate), married or living with a partner, type of health insurance (private, public [Medicare, Medicaid, or other], or uninsured), and survey cycle. Self-reported medical conditions included respiratory disease (emphysema, chronic bronchitis, or asthma), coronary heart disease or stroke, cancer, and diabetes. Depression was categorized as yes (mild, moderate, severe) or no based on the Patient Health Questionnaire (PHQ-9), which is a self-reported assessment based on DSM-IV signs and symptoms for depression.
Data Analysis
Prevalence estimates of recent tobacco use and recent tobacco exposure among former smokers were calculated for each demographic characteristic, tobacco-related behavior, and medical condition. Differences in prevalence estimates were assessed using the chi-square test. Adjusted logistic regression analysis was used to examine the association between the characteristics and each outcome, adjusted for each characteristic (except for previous cancer). Sensitivity and specificity were calculated for self-reported recent tobacco use against cotinine levels with the racial/ethnic cut points for tobacco use. 20 The sample size for the sensitivity and specificity analysis was 450 because 22 people were missing data on self-reported tobacco use in the past 5 days. To assess the stability of each prevalence estimate, relative standard errors (RSE; standard error divided by estimate) were calculated. All analyses were weighted to account for differential sampling probabilities and response rates, and standard errors were adjusted for the survey design using survey-specific procedures in SAS 9.4 (SAS Institute, Cary, NC).
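The study used survey-weighted procedures in SAS; the unweighted Python sketch below only illustrates the sensitivity/specificity and RSE calculations described, with illustrative inputs.

def sensitivity_specificity(self_report, cotinine_positive):
    # Self-report is treated as the test and cotinine-based status as the reference.
    tp = sum(sr and cp for sr, cp in zip(self_report, cotinine_positive))
    fn = sum((not sr) and cp for sr, cp in zip(self_report, cotinine_positive))
    tn = sum((not sr) and (not cp) for sr, cp in zip(self_report, cotinine_positive))
    fp = sum(sr and (not cp) for sr, cp in zip(self_report, cotinine_positive))
    return tp / (tp + fn), tn / (tn + fp)

def relative_standard_error(estimate, standard_error):
    # RSE = standard error divided by the estimate; used to flag unstable estimates.
    return standard_error / estimate

sens, spec = sensitivity_specificity([True, True, False, False],
                                     [True, True, True, False])
print(sens, spec, relative_standard_error(21.4, 2.8))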
Overview of Former Smokers Eligible for LCS
Among 50-80-year-old former smokers, 20.7% (95% CI 18.3%, 23.0%) or 6,937,000 adults (extrapolated to US population) were eligible for LCS (results not shown). As shown in Table 1, former smokers eligible for LCS were mostly non-Hispanic White (82.7%), male (67.1%), married or living with a partner (62.0%), with incomes above 100% federal poverty level (83.3%), and with either private health insurance (56.5%) or Medicare (25.6%). Over half were in the youngest age group (50-64 years) and had at least some college education. Less than half of former smokers (43.2%) had quit smoking less than 6 years ago. Approximately 1 in 5 reported living with a household smoker (22.4%) or having recent exposure in another indoor area (28.0%). Eligible former smokers reported a range of medical conditions: 26.5% had a respiratory condition (not shown: emphysema 11.6%, chronic bronchitis 14.6%, asthma 16.2%), 19.6% had coronary heart disease (CHD) or stroke, 16.7% had cancer (other than lung cancer), 30.2% had diabetes, and 27.1% had depression.
Former Smokers with Recent Tobacco Use
As shown in Table 2, about 1 in 5 (or 21.4%) former smokers eligible for LCS had recent use of a tobacco product, as defined by either self-reported use of tobacco in the past 5 days or cotinine levels indicating active tobacco use above racial/ethnic cut points. While 17.5% of former smokers reported recent use of a tobacco product in the past 5 days, 19.7% had cotinine levels indicating active tobacco use above racial/ethnic cut points (results not shown). There was high sensitivity (80.3%) and specificity (97.9%) between selfreported recent tobacco use and cotinine levels (results not shown).
Groups with a higher percentage of recent tobacco use than their counterparts include men, those who quit within the past 0-3 years, and those living with a household smoker ( Table 2). In adjusted analysis, these differences remained statistically significant. Former smokers reporting having had a cancer diagnosis had a lower percentage of recent tobacco use than those who did not report cancer. Among the tobacco products used (not shown), about a third of former smokers used cigarettes (34.7%, 95% CI: 22.5%, 46.9%), less than a third used e-cigarettes (28.5%, 95% CI: 16.6%, 40.4%), and over a third used pipes, cigars, or smokeless tobacco (36.8%, 95% CI: 24.4%, 49.2%).
Former Smokers with Recent Tobacco Exposure
Among former smokers without recent tobacco use, over half (53.0%) had cotinine levels above 0.05 ng/mL, indicating recent tobacco exposure (Table 3). (Not shown: almost three-quarters (74.0%, 95% CI 65.9%, 82.0%) had any detectable cotinine > 0.011 ng/mL.) The majority (84.3%) of former smokers who reported living with a household smoker, and over two-thirds (71.1%) of former smokers who reported past-week exposure to indoor smoke outside the home, had cotinine levels > 0.05 ng/mL. Other groups with a higher percentage of tobacco exposure than their counterparts include those below the federal poverty level, those not married or living with a partner, and those who quit within the past 0-3 years. Hispanic former smokers had a lower percentage of tobacco exposure, compared with non-Hispanic White former smokers. These differences remained statistically significant in adjusted analysis.
DISCUSSION
In a nationally representative study, there are high levels of recent tobacco use and exposure among former smokers eligible for LCS, which demonstrate a need for improved assessment and provider counseling. One in five survey respondents who identified as former smokers eligible for LCS had evidence of recent tobacco use which, extrapolated to the US population, represents approximately 1,416,485 adults. Over half of the remaining former smokers eligible for LCS had cotinine levels indicating tobacco exposure, which is double that of the general nonsmoking population. 21 Providers can use this data to inform strategies that improve assessments and target former smokers for counseling on their continued risk for tobacco-related addiction, disease, and mortality, particularly for cardiovascular and respiratory disease. 16 The NHANES question "do you now smoke cigarettes" may reflect how smoking status is assessed in current clinical practice. However, additional information about a patient's current and former smoking status is needed. 1) For "Meaningful Use" requirements of electronic health records, the smoking status assessment categories of current ("every day," "some day," "heavy," and "light") and former smokers lack detailed definition. 22,23 Data collection about type of tobacco product used, last use, quit date, or exposure status is not required. 2) For tobacco assessment and counseling quality metrics, there is no defined timeframe for current or former tobacco status, nor is exposure status included. 23,24 3) For the LCS guidelines, tobacco status questions are focused on eligibility with total pack-years of cigarette smoking and quitting within 15 years, 7 whereas also asking about recent tobacco use could better distinguish current and former smokers. Perhaps due to this lack of clarity, significant clinical discrepancies in documentation of smoking status by clinical providers in the electronic health record have been described, and health systems may benefit from updating LCS processes. 25 Guidance for assessing tobacco use in clinical care should be clarified to minimize misclassification of current smokers and thus any missed opportunities for tobacco cessation counseling.
This study found that many respondents identified as "former smokers" had self-reported recent tobacco use or cotinine levels suggestive of active tobacco use. This may be due to response bias, tobacco exposure, or what a respondent considers "current tobacco use." Asking about tobacco product use in the past month might be a preferable standard that aligns with documenting current "some day" smoking. Some former smokers may use tobacco irregularly, not meeting criteria for current use or "relapse." However, "some day" smokers, even with just 6-10 cigarettes per month, still have higher mortality risks than never smokers. 26 More information on current tobacco use by self-reported former smokers would provide opportunities for targeted counseling. A suggested improvement is to have patients submit self-reported responses to standardized questions. 27

This study also found that over half of former smokers without recent tobacco use had tobacco exposure, a higher proportion than in the general population, as measured by cotinine. This underscores the need for provider counseling that addresses the environment. Patients may be counseled that there is no risk-free level of smoke exposure and advised to make a smoke-free home rule, not just for the continued risk of developing lung cancer but more immediately for cardiovascular health. 16,21 There is growing evidence for family system interventions to support quitting. 28 For example, a brief to moderate intensity educational intervention about smoke-free living with a smoker and household nonsmoker led to long-term quit rates similar to standard smoking cessation trials. 29 Referral to evidence-based tobacco treatment resources such as quitlines can also be offered to family or household members who use tobacco. 30 All states have access to a quitline, which provides free evidence-based counseling that doubles the chances of long-term quitting. 13 Cotinine might be a useful adjunct to counseling for biomarker feedback and has been suggested in LCS guidelines as a possible tool for education and counseling. 31 Biomarker feedback would be helpful for the 53% of former smokers who did not recently use tobacco and had biochemically validated exposure, as only 22.4% of all former smokers self-reported living with a smoker or having recent indoor exposure; measuring cotinine may initiate a discussion about identifying additional sources of exposure. Measuring cotinine in current tobacco users may have marginal benefit, as the reliability of self-reported tobacco use in the past 5 days was confirmed by the cotinine analyses in our study. We found a sensitivity and specificity of 80.3% and 97.9% among former smokers who recently used a tobacco product, which is slightly lower than the 86.4% and 99.7% in a study 21 that examined current smokers and cotinine using NHANES data over a 20-year period. If former smoker status is assessed by recent tobacco use instead of the question "do you now smoke cigarettes," measuring cotinine levels may be best suited to former smokers with secondhand smoke exposure to guide conversations on risk. Unfortunately, the high-sensitivity cotinine lab test may not be widely available in commercial laboratories. Future studies should assess whether including cotinine levels may change patient behavior and outcomes, and what the cost-effectiveness of this approach might be.
Two factors associated with having a higher percentage of both recent tobacco use and exposure were living with a household smoker or having recently quit within the past 0-3 years. This finding underscores the need for continued counseling among former smokers about the risk of relapse or ongoing nicotine addiction and any environmental or behavioral changes needed to eliminate exposure. Other subgroups with a higher percentage of exposure included those below the federal poverty level, and those not married or living with a partner. Low socioeconomic groups also have access barriers to LCS, as previous studies have found lower rates of screening among the uninsured, 32,33 and not all Medicaid programs cover LCS. 34 For racial/ethnic subgroups, only Hispanic former smokers had a lower risk of smoke exposure than non-Hispanic White former smokers. Our study used the 2021 guidelines, which have lower age and pack-year requirements that help include more racial/ethnic subgroups. 34 Identifying these higher-risk subgroups of former smokers may help future studies or guideline updates about targeted counseling for patients identified as former smokers.
Efforts to improve smoking cessation interventions for LCS are needed and ongoing. Although smoking cessation interventions documented in the electronic medical record increased after the 2013 USPSTF LCS guidelines, only 34% of adults eligible for LCS post-guidelines received any sort of smoking cessation intervention from their health care provider. 5 The Optimizing Lung Screening (OaSiS) cluster randomized trial will evaluate strategies to implement smoking cessation interventions during LCS. 35 A meta-analysis of smoking cessation interventions to use during LCS found that counseling and pharmacotherapy increased cessation at 12 months. 36 Studies are also underway to examine changing eligibility criteria to include risk prediction models to limit over-diagnosis. 37 These efforts may be extended to examine strategies for former smokers eligible for LCS.
This study had several limitations. Self-reported smoking status may be subject to social desirability and reporting biases. Serum cotinine reflects only recent tobacco exposure over the past few days, and other biomarkers may reflect longer exposure periods or distinguish those using nicotine-replacement products. NHANES assigns adults older than 80 years an age value of 80, so our study may slightly overestimate the number of former smokers eligible for LCS (4.6 million compared with 4.1 million found using NHIS). 2
CONCLUSION
Former smokers eligible for LCS should be asked and counseled about recent tobacco use and exposure and considered for cotinine testing. Tobacco status assessment for former smokers should include questions about last tobacco use, type of tobacco product, household smokers, and indoor exposure. Certain subgroups are at higher risk for tobacco use or exposure, especially those having quit within the past 3 years or living with a household smoker. Data-driven counseling can target tobacco-related disease and mortality, steps for a healthier environment, and assistance for household tobacco users.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2022-04-26T19:58:36.179Z | 2022-04-26T00:00:00.000 | {
"year": 2022,
"sha1": "85f60cb4e6100614bac3b1b450ba82bcf19452b1",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11606-022-07542-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "85f60cb4e6100614bac3b1b450ba82bcf19452b1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236033877 | pes2o/s2orc | v3-fos-license | Pilot Study Suggests Online Media Literacy Programming Reduces Belief in False News in Indonesia
Amidst the threat of digital misinformation, we offer a pilot study regarding the efficacy of an online social media literacy campaign aimed at empowering individuals in Indonesia with skills to help them identify misinformation. We found that users who engaged with our online training materials and educational videos were more likely to identify misinformation than those in our control group (total $N$=1000). Given the promising results of our preliminary study, we plan to expand efforts in this area, and build upon lessons learned from this pilot study.
Introduction
While the use of targeted misinformation campaigns by state actors is well documented (Keller et al. 2020, Zannettou et al. 2019, Bradshaw & Howard 2018), ordinary citizens often both consume and spread (mis)information through their well-meaning online activity, which can inadvertently help to spread falsehoods to their friends and followers. Many users of online and social media systems are not aware of how misinformation is generated or spread, and thus they may unwittingly participate in the acceleration or distribution of these materials. Furthermore, the more exposed to misinformation a person becomes, the more likely they are to spread it, for repeated exposure leads to an increased belief in this information (Guess et al. 2020). Ultimately, healthy democracies depend on a well-informed public, the very possibility of which is undermined by this new deluge of digital misinformation (Lewandowsky et al. 2012).
In response to these challenges, we present the results of a small pilot study of a media literacy campaign in Indonesia. This pilot study is the first step in a long-running program to understand the effects of digital misinformation on individuals in developing digital economies -those who may not be sufficiently equipped to understand and navigate perils that proliferate on social media platforms.
There are many active projects seeking effective ways to stop this spread of misinformation, from new AI-based misinformation detection warning systems (Yankoski et al. 2020), to "inoculation" style misinformation games such as Harmony Square, to fact checking organizations that meticulously comb through available evidence to affirm or debunk claims, such as Snopes.com. But there is no silver bullet for this accelerating problem: the adversarial nature of misinformation content creation processes means that AI-based misinformation detection systems will likely encourage the development of more sophisticated misinformation campaigns designed to elude the latest generation of AI-based detection systems (Anderson & Rainie 2017). Furthermore, fact-checking efforts may backfire by actually increasing the visibility of the very falsehoods they are seeking to debunk. For even if exposure to misinformation comes by seeing a piece of misinformation proven false, once a person has been exposed it is very difficult to turn them back to a state where they have never heard of it (Lewandowsky et al. 2012).
Many view media literacy and fact checking as the obvious solution(s) to misinformation and fake news (Kim & Walker 2020). For example, watching others be corrected on social media can reduce beliefs in misperceptions (Bode et al. 2020), and techniques such as gamification (Chang et al. 2020), inoculation theory, and prebunking can reduce a user's susceptibility to misinformation (Lee 2018). Indeed, social media platforms like Facebook and Twitter are rolling out fact checking systems (Fowler 2020) and Google is promoting its Be Internet Awesome media literacy program (Seale & Schoenberger 2018). However, others view these efforts with warranted suspicion. danah boyd (2017) argues that "thorny problems of fake news and the spread of conspiracy theories have, in part, origins in efforts to educate people against misinformation" because media literacy campaigns naturally ask the savvy information consumer to question the narratives that are presented to them. A study by Craft et al. (2017) showed that higher levels of news literacy predicted a lower endorsement of conspiracy theories. In contrast, Jones-Jang et al. (2021) found that only information literacy, which was measured by a skills test, predicted a user's ability to recognize fake news stories; whereas a user's media, news, and digital literacy, which are self-reported metrics, were not predictive of a user's ability to recognize fake news stories.
In this article we offer preliminary results from our pilot study in combating the growing phenomenon of online misinformation: a social media literacy education campaign strategy that seeks to empower new digital arrivals in the Republic of Indonesia with increased ability to identify misinformation. After an assessment of the media landscape in Indonesia, we created 6 animated videos and 2 live-action videos demonstrating various lessons in online social media literacy. These lessons were presented as advertisements on YouTube, Google, Facebook, and Twitter and backed by http://literasimediasosial.id. Building off of other work such as Harmony Square (Roozenbeek & van der Linden 2020) and Bad News (Basol et al. 2020), which are each games in which players learn how misinformation spreads online, we ask the following research question: Does online media literacy content lead to an increased ability to identify misinformation? In other words, can we teach people to do their own fact checking, which then would result in a potentially scalable solution to solve the global problem of misinformation? We measure the effectiveness of our campaign by asking visitors to our website via a phone survey to rate the accuracy of true, misleading, and false headlines. The results from this pilot study suggest that there is a modest increase in the ability to determine whether a story is real or not after engaging our media literacy lessons.
Why Indonesia?
Indonesia is a large democracy with a rapidly growing Internet user base and social media penetration. As of 2019, approximately 68% of Indonesia's 270 million citizens were online, a figure that represents a dramatic increase from the approximately 43% of the population just four years prior in 2015. By 2025 an estimated 89% of Indonesians will be online. As of 2019, 88%, 84%, 82%, and 79% of Indonesian internet users self-reported use of YouTube, WhatsApp, Facebook, and Instagram, respectively (Statista 2020). Because of this, many Indonesian citizens are new to the Internet. We position our work in this country to measure the effectiveness of this media literacy approach to a population which has had less exposure to the Internet, as compared to a Western audience. Additionally, this work provides important insights into the problems of online misinformation and propaganda, specifically within Southeast Asia, which remains an understudied region. Specifically, this context allows us to perform our research in a geographic area which is both in the Muslim world and in a country that is subjected to the Chinese sphere of influence. While similar previous work, such as Harmony Square and Bad News Game, was done targeting a Western audience, this study specifically targets another cultural context.
Many political events are frequently met with substantial coordinated misinformation campaigns (known as "hoaxes" in the Indonesian context). For example, in the 2018 Jakarta mayoral election, recent reports have indicated that the election campaigns paid as much as 280 USD per month to individuals who would promote messages from a particular candidate on social media (Lamb 2018). Additionally, digital misinformation played a significant and highly divisive role in the national election in April of 2019 (Theisen et al. 2021). Protests against the results of the election resulted in six deaths and the temporary suspension of access to social media platforms by the Indonesian government (BBC 2019).
As more Indonesians gain reliable access to the Internet and begin to use online social media platforms, it is possible that those who are new to the Internet have not yet fully developed the media literacy skills needed to distinguish between trustworthy and false news sources, and may therefore be vulnerable to manipulation through misinformation. Because Indonesia is a relatively young democracy, traditional democratic institutions like the press may not be as robust as those institutions in more established democracies (Bennett & Livingston 2018). Furthermore, the susceptibility of the voting population to misinformation campaigns may also pose a threat to the stability of the democratic institutions of the nation itself.
Assessing the Online and Social Media Landscape in Indonesia
Our initial work focused on identifying legitimate news stories, propaganda, and disinformation, and the popular narratives and hashtags that were used in these stories and across these domains. For this we used a methodology developed by Moonshot, a company which uses social media monitoring, digital campaigns, and interventions to counter online harms.
This methodology uses a combination of desk research, consultation and workshops with subject-matter experts, open-source intelligence methods, and computer systems to identify words, phrases, tropes, slogans, memes, slang, and other indicators of engagement, in multiple languages and across search, social media, image boards, forums, apps, and other discursive online spaces as necessary. Exact sources vary depending on the subject matter, and for this deployment in February 2019 we focused on words and phrases indicative of intent to engage with disinformation on Google Search, YouTube, Facebook, and Twitter.
The fundamental output of this methodology is a clear understanding of how interest in a given online harm manifests online. Depending on the exact methods of collection, this can be broken down by time, platform, aggregate user location, age and gender, subcategories of harm and different levels of risk. This understanding is then used to inform the creation of campaigns designed to reduce the impact of that harm, as was the case for this project.
Analysis of the resulting dataset revealed four common themes of disinformation specific to the Indonesian context: (1) anti-Chinese, (2) anti-Communist, (3) Islamic chauvinism, and (4) political smears. This assessment of the social media and search engine landscape in Indonesia was used to inform the specific content used in the social media literacy campaign, based upon the common themes of disinformation that were identified.
Social Media Literacy Intervention
We developed a preliminary social media literacy education website (http://literasimediasosial.id) consisting of six informative lessons with several short educational videos and context-specific slogans that encourage social media behavior characterized by an increased ability to identify misinformation for widespread online delivery. This Learn To Discern (L2D) (Murrock et al. 2018) approach, developed by IREX, builds communities' resilience to state-sponsored misinformation, inoculates communities against public health misinformation, promotes inclusive communities by empowering its members to recognize and reject divisive narratives and hate speech, improves young people's ability to navigate increasingly polluted online spaces, and enables leaders to shape decisions based on facts and quality information (Vogt 2021). L2D is a purely demand-driven approach to media literacy, encouraging participants to increase self-awareness of their own media environments and the forces or factors that affect the news and information that they consume. L2D is traditionally presented as a training program and it typically includes a package of in-person activities, online games, distance-learning courses, public service announcements, and other methods that are tailored to the needs of social media users. Examples of L2D lessons are shown in Fig. 2. For this campaign several short explainer videos were created. The full-length videos are hosted on YouTube (https://www.youtube.com/channel/UCU7jNlA-4gH3cxsFV6_xkjw) and direct users to http://literasimediasosial.id. Shortened versions of these animated and live action videos were also created to capture various aspects of the media literacy curriculum in short commercial-length snippets.
Delivering Media Literacy Content to Social Media Users
Whereas previous media literacy efforts have been shown to be effective when conducted via in-person classroom settings (Hobbs & Frost 2003), our goal was to use the digitized lessons and explainer videos to directly reach users - not just students - within the social media platform itself. The Redirect Method (Helmus & Klein 2018) is a methodology created by Moonshot and Jigsaw in 2016, originally to fight against violent extremism. It is now used by Moonshot to counter a range of online harms by reaching individuals who perform searches which indicate intent to engage in, or that they are affected by, harmful content on social media or on search engines, with positive, alternative content.
Just as other companies use advertisements on social media and search engines to sell material products to an audience defined by, in whole or in part, the keywords they are searching for (for instance, a vacuum cleaner company might bid for advertising space from users who are searching for cleaning products), the Redirect Method places ads in the search results and social media feeds of users who are searching for pre-identified terms that we have associated with disinformation.
The Redirect Method can be extensively tailored to platform requirements and campaign goals, but at its core are three fundamental components: the indicators of risk (e.g., keywords); the advertisements triggered by the indicators; and the content to which users are redirected by the advertisements.
The Indicators of Risk
We curated an extensive gazette of keywords and phrases indicative of a desire to engage with harmful content - in this case, disinformation. The keywords for this campaign were created through a combination of desk research, consultation, and workshops with independent and local subject-matter experts and fact-checking organizations (some of whom shared their own keywords), and continuous mining for new keywords throughout the course of the project. The database currently contains more than 1,000 individual keyword phrases in Bahasa Indonesia.
Of the keywords identified for the campaign as indicative of intent to engage with disinformation, the three most searched-for were as follows:

1. "jokowi pki", which translates to "Jokowi [President Joko Widodo] is a member of PKI [the communist party of Indonesia]".
2. "PDIP adalah topeng pki", which translates to "PDIP [the Indonesian Democratic Party of Struggle and the party to which the President belongs] is a mask of the PKI".
3. "9 naga pendukung jokowi", which translates to "the nine dragons support Jokowi". The nine dragons refers to an age-old conspiracy that the capital city of Indonesia and its politicians are controlled by an underworld of nine Chinese mafia bosses. It is a myth based on the racist trope of "ethnic-Chinese control", used to instill fear and distrust.
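As an illustration only, the snippet below sketches how an incoming search query could be checked against a small keyword gazette like the one above. The three published phrases are used here; the real database holds more than 1,000 phrases, and the actual campaign used the advertising platforms' own keyword-targeting tools rather than custom code.

```python
# Hypothetical sketch: decide whether a search query should trigger a redirect ad.
GAZETTE = {
    "jokowi pki",
    "pdip adalah topeng pki",
    "9 naga pendukung jokowi",
}

def triggers_redirect(query: str) -> bool:
    """Return True if the normalized query contains any gazette phrase."""
    normalized = " ".join(query.lower().split())
    return any(phrase in normalized for phrase in GAZETTE)

print(triggers_redirect("berita jokowi pki terbaru"))  # True
print(triggers_redirect("resep nasi goreng"))          # False
```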
The Advertisements
The fundamental purpose of these advertisements is the same as those used in the commercial sector, i.e., to entice people to click on them. The difference in the case of this deployment of the Redirect Method is that social gain takes the place of commercial gain, with users who were initially seeking disinformation being taken instead to content designed to improve their media literacy. Using Google Ads to display advertisements, e.g., Fig. 1, in response to disinformation-related keyword searches means we can gather data on the number of times our advertisements were triggered ('impressions'), thereby gaining an insight into the volume of searches for disinformation. When plotted by time, as in Fig. 2, we can also check for correlations in the search data with relevant and notable offline events. The translation of the advertisement shown in Fig. 1 is: "Stop: Take a moment to pay attention to the image; Ask: How am I feeling right now?; State it: Express these feelings to yourself." We found that many of the visitors were spending less than a minute on our website, so it was necessary to condense the lessons into short, concise bullet points and present them like an advertisement.
The Content
Our social media literacy and search engine campaign was launched in August 2019 on Facebook, Twitter, Instagram, Google, and YouTube and concluded in April 2020. In total, the campaign generated 3.4 million impressions approximately equally distributed over 7 different video lessons. Those impressions resulted in 72,976 unique page views of the campaign website. Because of the wide reach of the advertisement campaign, we cannot be certain that individuals in our control group did not see our content. However, because 72,976 unique page views represent approximately 0.03% of Indonesia's estimated 183 million online users, we feel fairly confident that most of the control group users did not see our media literacy content, though we do not rule out this possibility. A sample of the media literacy lessons is illustrated in Fig. 3.
Overall, we received 3,444,398.2 impressions (as calculated by each platform). Of those, 72,976 users clicked through to the website. In Table 1, we show how long each user spent on our site based upon the platform that the user was coming from. Additional campaign statistics are described in the Table A1 in the Appendix.
Phone Surveys
After the conclusion of our social media literacy campaign, a team of interviewers based in Jakarta, Indonesia, conducted a computer-assisted telephone interview (CATI) for 1,000 successful interviews. The surveys were nationally representative proportionate to 2019 census data estimates at the province-level of location, age group, and gender. The main interview language of the survey was Bahasa Indonesia, but the interviewer team was able to switch to other languages such as Balinese and Javanese if requested by the interviewee.
The research protocol and questionnaire were approved by the internal ethics committee at the University of Notre Dame (#18-11-5009). Data are anonymized, but demographic information is included. Data and full cross tabulations can be found at https://www.geopoll.com/misinformation-indonesia/. Visitors to the media literacy education website were asked to be included in a phone survey in exchange for approximately $0.50 USD equivalent in local currency in phone credit. 331 individuals agreed and provided their phone number, and of those, 94 completed the CATI survey. This constitutes the treatment group. The control group was composed of 996 respondents to verified random digit dialing phone surveys.
The treatment group was both older and skewed male as compared to the control, as shown in Tables A2 and A3 in the Appendix. We asked participants about general demographic information, inquired about their media consumption habits, and also queried their ability to correctly judge the veracity of news headlines that we presented to them. We acknowledge that the treatment group is likely to have self-selection bias. In addition, users who were redirected to the website were targeted because of their propensity to search for misinformation subjects, which presents another avenue for data bias. However, we believe that as a pilot study, this data provides important information about the efficacy of media literacy campaigns which invites further study. During data collection the survey team made approximately 4500 phone calls that resulted in a 21% response rate. The margin of error for the survey at the 95% confidence level is +/-3.10%.
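The reported margin of error is consistent with the usual conservative formula for a proportion, MOE = z·sqrt(p(1 - p)/n); a quick check (assuming n = 1,000 completed interviews, p = 0.5, and a 95% z-value of 1.96) reproduces the ±3.10% figure.

```python
import math

n, z, p = 1000, 1.96, 0.5  # completed interviews, 95% z-value, conservative proportion
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/-{moe * 100:.2f}%")  # prints +/-3.10%
```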
Findings
To judge the effectiveness of the social media literacy campaigns, each survey participant was asked if they recognized two true, one misleading, and one false headline. If so, we asked them to rate the accuracy of that headline on a Likert scale from very accurate to very inaccurate. Our goal was to examine the differences between how the control and treatment groups rated the accuracy of headlines. An improvement in the treatment group's ability to correctly identify false, misleading, and real stories when compared to the control group would demonstrate that our campaign had an effect on a user's ability to analyze the news headlines they encountered.

Figure 4: 58.3 percent of our treatment group was able to identify the misinformation as either very or somewhat inaccurate, compared with 42.4 percent of the control group. We supply the number of respondents that gave an accuracy rating for each headline. For this plot, we averaged together the number of individuals who replied to our real headlines, since each person was given two real headlines to rate.

Table 1: Treatment users were statistically better at identifying misinformation headlines compared to the control group at 90% confidence.
Finding 1: Users who encountered the media literacy content were more likely to identify false news than those that did not.
Our preliminary findings from this pilot study suggest that our media literacy intervention was positively correlated with a respondent's ability to identify misinformation. Treatment group users in our pilot study were 15.9 percentage points more likely to identify a misinformation headline as either very or somewhat inaccurate. These results are significant at 90 percent confidence (Mann-Whitney ρ = 0.569, N = 309, p = 0.081, one-tailed). Figure 4 shows the results of the Likert-scale responses and Table 2 shows the statistical breakdown of the responses. Because the treatment group consisted of individuals who were actively searching for misinformation, these results are even more encouraging because they suggest that our campaign had a measurable effect even on those who are familiar with and are actively searching for misinformation content. We plan to expand upon this pilot study's encouraging results in our future research, as we acknowledge the limitations of such a small sample size. Across both the control and treatment groups, accuracy ratings were very similar for the different headline types. Interestingly, as shown in Fig. 4, it appears that relatively few individuals in the treatment group had a neutral opinion of the misinformation story, as compared to the other groups and headline types. This suggests that our media literacy campaign might have persuaded those who were not sure about the validity of the story that the narrative was false, via the new skills they acquired from the website. This is supported by the observation that the truthfulness rating of the misinformation headlines was similar between the treatment and control groups. More research is needed to understand the effect of media literacy campaigns on these fence-sitters - those who have neutral feelings about the headlines that they see, and are unable to tell if these stories are true or false.

Table 2: Awareness of news stories is positively correlated with its veracity, χ²(2, N = 3,610) = 327.60, p < 0.001.
In summary, we found that our social media literacy intervention is associated with an increase in the ability of users to correctly identify misinformation. However, despite these findings, participants in our social media literacy campaign did not show an increased ability to accurately identify misleading or real news stories. This phenomenon presents an intriguing area for future research.
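For readers who wish to reproduce this kind of comparison, the sketch below shows how a one-tailed Mann-Whitney U test between treatment and control accuracy ratings could be run with SciPy; the rating arrays are placeholders, not the survey data.

```python
from scipy.stats import mannwhitneyu

# Likert ratings coded 1 (very accurate) to 5 (very inaccurate) for the false headline.
# Placeholder values for illustration only.
treatment_ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
control_ratings   = [3, 2, 4, 3, 1, 3, 2, 4, 3, 2]

# One-tailed test of whether treatment ratings tend to be higher (i.e., more "inaccurate").
u_stat, p_value = mannwhitneyu(treatment_ratings, control_ratings, alternative="greater")
print(f"U = {u_stat:.1f}, one-tailed p = {p_value:.3f}")
```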
Finding 2: False news did not spread as broadly in Indonesia as real news.
Interestingly and rather unexpectedly, we found that the headline accuracy was positively correlated with its reach. These results, shown in Table 2, suggest that misinformation does not spread quite as easily as real news stories in Indonesia -however, further research is needed to understand this phenomenon. In future work, we plan to expand the number of headlines that we supply to our survey respondents.
Our results suggest that our media literacy campaign had an effect on the ability of individuals to accurately label hoaxes as misinformation. However, further research is needed to answer some of the questions that our findings raise. For instance, we see that large numbers of respondents have neutral feelings about news stories. While our campaign focused on identifying misinformation, perhaps future work can promote trust in legitimate news sources, which would create statistically significant differences in the abilities of media literacy campaign participants to identify real headlines as such, not simply to identify misinformation.
Conclusions
In this work, we present findings from our early pilot study on the efficacy of a media literacy campaign in Indonesia in helping users identify misinformation headlines. These results indicate that 58.3 percent of users who visited our media literacy website and engaged with the education content were able to correctly identify misinformation headlines as inaccurate, compared to 42.4 percent of the control group. These results suggest that it is possible to use a media literacy approach to help citizens spot and identify misinformation.
This work presented a small, limited study to understand the misinformation landscape in Indonesia, and demonstrates the feasibility of conducting a larger research project of this nature. Since our results were positive, we plan on expanding this study in several ways. First, we will introduce a gamification approach -analogous to Harmony Square and Bad News game -into our media literacy education materials. Second, we will enlarge the sample size and test the significance of the results on larger groups. Third, we will track information such as how long a user stayed on the site and engaged with the material. Fourth, we will ask more questions about Internet usage, such as how long a user has been using the Internet. Fifth, we will ask users about more disinformation headlines, since currently we only ask them about the veracity of one headline. With these additional considerations, we will create and deploy a broader study which will give us even more insights into the efficacy of media literacy campaigns.
We present these preliminary results to others that might be considering larger, more comprehensive studies in this area. We hope that our work will help those designing such studies implement the lessons learned from our pilot to further study media literacy, especially in developing countries or for new digital arrivals.

Table A5: Treatment group is similar to our control group based upon religious identity, χ²(3, N = 990) = 2.09, p = 0.553. | 2021-07-19T01:16:05.731Z | 2021-07-16T00:00:00.000 | {
"year": 2021,
"sha1": "7c907c481accb5106c6b05b01ae8cf41d42a62ca",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7c907c481accb5106c6b05b01ae8cf41d42a62ca",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
44629798 | pes2o/s2orc | v3-fos-license | Fixed vs. Free-text Documentation of Indication for Antibiotic Orders
Abstract Background Requiring indications for antimicrobial orders can allow stewardship programs to evaluate adherence to guidelines and assess outcomes. We extracted indication data from our institution’s EPIC system and found that in a 29-month time frame there were 12,218 uniquely entered indications. Only 136 of these were standardized drop-down (fixed) menu options; the rest were entered manually (free-text). Enormous variation in these uniquely typed entries emphasizes the value and necessity of fixed indication options to allow for better evaluation of stewardship program outcomes. Methods We evaluated the 718 most commonly used indications accounting for a total of 113,741 unique antibiotic orders for 42,665 patients. We excluded indications used for less than 36 orders during the study period. We analyzed the characteristics of these orders to identify opportunities for improvement in indication documentation and developed a new list of less than 200 indications that could account for nearly all of the various indications entered. Results 66,404 (58%) orders were placed using fixed options available in the menu (Figure 1). 32,427 (29%) orders were placed with no indication listed. The remaining 14,910 (13%) orders were documented with free-text indications. Within these manual entries, 59% were identical or nearly identical to an option that was available in the drop down menu. 37% of free-text indications could not be appropriately placed with an option available in the menu. For example, the menu contained a fixed option for “Severe C. difficile infection” forcing all non-severe cases to be entered as varied free-text alternatives (Figures 2 and 3). Conclusion In our sample, use of fixed menu options was high but robust evaluation of proper antimicrobial use was substantially limited by failure to document indication and free-text entry by providers. Free-text entry and blank fields can be used as quality metrics, with high use indicating poor quality. We recommend that standard comprehensive indication lists are developed, providers are encouraged and empowered to utilize menu options consistently, and computerized order systems are programmed to prevent orders from being placed without an indication listed. Disclosures All authors: No reported disclosures.
Background. Current vaccination coverage of Human Papillomavirus vaccine (HPVV) in Japan is less than 1% because the Ministry of Health, Labour and Welfare (MHLW) suspended its proactive recommendations for HPVV in 2013 after some reports of possible adverse events following immunization. We evaluated the perception of Japanese physicians about HPVV in order to consider the appropriate countermeasure to improve HPVV coverage in Japan.
Methods. We conducted a cross-sectional study using a postal questionnaire targeting 330 Japanese physicians (78 pediatricians, 225 internists and 27 obstetricians-gynecologists (OB-GYNs)) in Kawasaki, Japan in 2016. The questionnaire comprised questions about education frequency, physicians' perception and recommendation behavior related to adolescent vaccines (HPVV, diphtheria tetanus toxoid (DT) and inactivated influenza vaccine (IIV)).
Conclusion.
Although Japanese physicians were cautious about HPVV and infrequently provided education or made a recommendation for HPVV compared with other adolescent vaccines, our survey suggested such a passive attitude could be improved by the MHLW resuming its proactive recommendation in Japan.
Disclosures. All authors: No reported disclosures.

Background. Documentation of antibiotic indications at the time of ordering can provide helpful information for antimicrobial stewardship programs to track antibiotic utilization patterns and improve antibiotic prescribing. Yet accuracy of indications is not fully understood; antibiotics are often ordered empirically without a clear-cut indication, and orders are not often updated once a diagnosis has been made, or the first listed option may be chosen for convenience. As hospitals are implementing antibiotic indications at the time of order entry to meet stewardship standards, our study sought to assess the accuracy of indications in an antibiotic order compared with the true indication for the drug.
Indications for Antibiotic Orders
Methods. Indications for antibiotics, selected from a standardized list, are a required field in the computerized order entry system at our institution. Study investigators, including at least one infectious diseases attending, performed an in-depth post-hoc review to assess antibiotic indication and appropriateness. The frequency that the true antibiotic indication, as assessed by study investigators, matched with the indication in the antibiotic order was analyzed.
Results. Of 396 antibiotic orders reviewed by the study team, 100 had discordant indications between what was written in the order and the investigator-assessed indication (25.3%). The highest rates of discordance were seen with GU-UTI (11/18 incorrect, 61.1%) followed by bacteremia/sepsis (44 of the 116 incorrect, 37.9%). For GU-UTI, the most common investigator assessed true indications were pulmonary including CAP, HAP and empyema. For bacteremia/sepsis, the discordance was often due to a more specific diagnosis or source being identified.
Conclusion. Discordant indications between what was entered at the time of initial order compared with an investigator assessed indication occurred frequently. This finding is of concern as evaluations of antibiotic appropriateness, utilization and benchmarking by the antimicrobial stewardship team rely on the accuracy of indications in the system. Entering a revised indication during an antibiotic time out could improve the accuracy of antibiotic indications and antimicrobial stewardship data.
Disclosures. All authors: No reported disclosures.

Background. Requiring indications for antimicrobial orders can allow stewardship programs to evaluate adherence to guidelines and assess outcomes. We extracted indication data from our institution's EPIC system and found that in a 29-month time frame there were 12,218 uniquely entered indications. Only 136 of these were standardized drop-down (fixed) menu options; the rest were entered manually (free-text). Enormous variation in these uniquely typed entries emphasizes the value and necessity of fixed indication options to allow for better evaluation of stewardship program outcomes.
Fixed vs. Free-text Documentation of Indication for Antibiotic Orders
Methods. We evaluated the 718 most commonly used indications accounting for a total of 113,741 unique antibiotic orders for 42,665 patients. We excluded indications used for less than 36 orders during the study period. We analyzed the characteristics of these orders to identify opportunities for improvement in indication documentation and developed a new list of less than 200 indications that could account for nearly all of the various indications entered.
Results. 66,404 (58%) orders were placed using fixed options available in the menu (Figure 1). 32,427 (29%) orders were placed with no indication listed. The remaining 14,910 (13%) orders were documented with free-text indications. Within these manual entries, 59% were identical or nearly identical to an option that was available in the drop down menu. 37% of free-text indications could not be appropriately placed with an option available in the menu. For example, the menu contained a fixed option for "Severe C. difficile infection" forcing all non-severe cases to be entered as varied free-text alternatives (Figures 2 and 3).
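The poster does not describe how free-text entries were compared with the fixed menu, but a first-pass screen for entries that are "identical or nearly identical" to an existing option can be approximated with simple fuzzy string matching, as in the hypothetical Python sketch below (menu entries and threshold are illustrative only).

```python
from difflib import SequenceMatcher

# Illustrative subset of fixed drop-down options (not the institution's actual list).
FIXED_OPTIONS = [
    "severe c. difficile infection",
    "community-acquired pneumonia",
    "urinary tract infection",
]

def best_match(free_text: str, options=FIXED_OPTIONS, threshold=0.85):
    """Return the closest fixed option if its similarity exceeds the threshold, else None."""
    text = free_text.strip().lower()
    score, option = max((SequenceMatcher(None, text, opt).ratio(), opt) for opt in options)
    return option if score >= threshold else None

print(best_match("Severe C diff infection"))  # likely maps to the C. difficile option
print(best_match("non-severe C. difficile"))  # may fall below the threshold -> None
```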
Conclusion. In our sample, use of fixed menu options was high but robust evaluation of proper antimicrobial use was substantially limited by failure to document indication and free-text entry by providers. Free-text entry and blank fields can be used as quality metrics, with high use indicating poor quality. We recommend that standard comprehensive indication lists are developed, providers are encouraged and empowered to utilize menu options consistently, and computerized order systems are programmed to prevent orders from being placed without an indication listed.
Background. Antibiotic stewardship is key to minimizing antibiotic resistance. To assist antibiotic stewards in dissecting population-level antibiotic use patterns, our study group developed a dashboard that displays consolidated patterns, supports data exploration, and compares facility-level antibiotic use to others. We report fuzzy set qualitative comparative analyses (QCA) of interviews designed to elicit user experiences to uncover different combinations of causal conditions supporting dashboard use.
Methods. Dashboards were iteratively designed based upon longitudinal feedback from stewards. Views include antibiotic use stratified by diagnoses and duration of therapy. Eight VAMCs, each with 0.5 to 2.0 FTE stewards, used the dashboard. One to 2 stewards from each site were interviewed using a structured script that focused on: 1) structure (i.e., program FTE) and functions of the local stewardship program; 2) critical incident or usage story; and 3) perceived knowledge and efficacy.
Results. Qualitative codes were developed from the interviews and were scaled in a fuzzy logic framework (i.e., between 0 and 1) to reflect the degree to which the qualitative theme was present in the stewardship program at participating clinical sites. The scaling was assigned using prior knowledge external to the data. The most parsimonious QCA solution identified just the absence of program structure (program FTE) as a sufficient causal configuration for the frequency of dashboard use (coverage = 0.612, consistency = 0.813). Intermediate solutions added stewardship activities, dashboard self-efficacy, and trust in the data (coverage = 0.502, consistency = 0.952) as sufficient conditions. The consistency for both solutions exceeded 0.75, which was the lower bound of acceptability. | 2018-11-29T07:20:57.104Z | 2017-10-01T00:00:00.000 | {
"year": 2017,
"sha1": "f51c7d559389b2c37699a635776a46443622af95",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/ofid/article-pdf/4/suppl_1/S325/20490178/ofx163.769.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f51c7d559389b2c37699a635776a46443622af95",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240422953 | pes2o/s2orc | v3-fos-license | Long non-coding RNA (lncRNA) nuclear enriched abundant transcript 1 (NEAT1) promotes the inflammation and apoptosis of otitis media with effusion through targeting microRNA (miR)-495 and activation of p38 MAPK signaling pathway
ABSTRACT Long non-coding RNA (lncRNA) plays a vital role in human inflammatory diseases. Our study aimed to investigate the function of lncRNA nuclear-enriched abundant transcript 1 (NEAT1) in otitis media with effusion (OME). The expression levels of NEAT1 and miR-495 were measured by RT-qPCR. The protein levels of p38 MAPK were detected by western blot. The levels of inflammatory cytokines were examined by ELISA. CCK-8 and flow cytometry assays were used to evaluate the cell viability and apoptosis, respectively. The interaction between NEAT1 and miR-495 was determined by luciferase reporter and RIP assays. NEAT1 was highly expressed in OME, and silencing of NEAT1 facilitated the cell proliferation and suppressed levels of inflammatory cytokines and cell apoptosis in LPS-induced HMEECs. Moreover, miR-495 was confirmed as a downstream target of NEAT1. Functional assays revealed that NEAT1 promoted OME progression by targeting miR-495. It was further demonstrated that NEAT1 could activate the p38 MAPK signaling pathway by regulating miR-495, and the p38 MAPK inhibitor restored the effects of NEAT1 overexpression on the inflammation levels, cell proliferation, and apoptosis. Our study revealed that lncRNA NEAT1 served as a ceRNA to activate p38 MAPK signaling by targeting miR-495 in OME, which may offer a new target for OME treatment.
Introduction
Otitis media with effusion (OME) is a common childhood disease characterized by the presence of fluid in the middle ear [1,2]. Persistent OME may lead to long-term changes in the tympanic membrane and middle ear, resulting in some degree of hearing loss. Previous studies have indicated that several factors, such as eustachian tube dysfunction, allergies, immunologic factors, and bacterial infection, are major causes of OME [3][4][5]. Nevertheless, the exact molecular mechanism of OME is still unclear. Hence, it is important to investigate the underlying molecular mechanism of the pathogenesis of OME.
Long non-coding RNAs (lncRNAs) are a class of non-coding RNAs (ncRNAs) >200 nt in length without protein-coding ability [6]. Extensive studies have proposed that lncRNAs have essential functions in multiple biological processes, including inflammatory response. For instance, FTX inhibited the inflammatory response of microglia via regulating the miR-382-5p/Nrg1 axis in spinal cord injury [7]. HOXA-AS2 suppressed endothelium inflammation by activating the NF-κB signaling pathway [8]. NEAT1 is a well-known lncRNA with various functions in different physiological and pathological processes. For instance, NEAT1 facilitated inflammatory response and cell apoptosis in acute lung injury via the miR-944/TRIM37 axis [9]. Besides, silencing of NEAT1 suppressed inflammation in LPS-treated sepsis by regulating miR-590 [10]. However, the biological function of NEAT1 in the pathogenesis of OME is still unclear.
p38 MAPK is implicated as a critical regulator of the release of inflammatory cytokines and modulates the gene expression involved in the acute phase response, such as IL-1β, IL-6, and TNF-α [11,12]. In addition, multiple extracellular stimuli can activate the p38 MAPK signaling pathway, which further activates a cascade of physiological outcomes, such as apoptosis, proliferative ability, and the transcription of certain genes [13,14]. A recent study implied that upregulated NEAT1 increased the production of numerous cytokines in human lupus through activating p38 MAPK signaling [15], which led us to speculate that NEAT1 exerted its biological role via MAPK signaling in OME.
This work aimed to explore the biological function and molecular mechanism of NEAT1 in OME development. We hypothesized that NEAT1 might play a critical role in the OME progression, and the results indicated that NEAT1 promoted the inflammation response and cell apoptosis through miR-495 and activation of the p38 MAPK signaling pathway in OME. Our data might offer a novel insight into the pathogenesis of OME.
Clinical sample
Sixty OME patients and 60 sex- and age-matched healthy controls were recruited at Zhangjiagang TCM Hospital Affiliated to Nanjing University of Chinese Medicine from February 2018 to June 2019. The inclusion criteria were as follows: (i) patients provided informed consent; (ii) patients were diagnosed with OME; and (iii) patients had no specific causes of sensorineural hearing loss, such as noise exposure, head injury, or systemic ototoxic drugs. Written informed consent was obtained from each participant. Our work was approved by the Ethics Committee of Zhangjiagang TCM Hospital Affiliated to Nanjing University of Chinese Medicine.
Cell culture and treatment
Human middle ear epithelial cells (HMEECs) were purchased from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China) and cultured in a 50/50 mixture of DMEM and BEBM (both from Gibco) at 37°C with 5% CO₂.
To construct an in vitro OME model, 100 ng/ml lipopolysaccharide (LPS) was used to treat HMEEC cells as previously described [16].
Reverse transcription-quantitative PCR (RT-qPCR)
Total RNA was isolated using TRIzol reagent (Invitrogen) and reverse-transcribed to cDNA using a PrimeScript RT Reagent Kit (TaKaRa). RT-qPCR was conducted on an ABI 7500 real-time PCR system (Thermo Fisher Scientific) with a SYBR Green kit. GAPDH or U6 was used as an internal control [18].
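The paper does not spell out the quantification step, but relative expression from RT-qPCR data of this kind is conventionally calculated with the 2^(-ΔΔCt) method against the internal control (GAPDH or U6); a minimal sketch assuming that convention, with hypothetical Ct values, is shown below.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """2^-ddCt relative expression of a target (e.g., NEAT1) versus a control sample.

    ct_target / ct_reference: Ct values in the treated sample (e.g., LPS-induced HMEECs).
    ct_target_ctrl / ct_reference_ctrl: Ct values in the untreated control sample.
    The reference gene is the internal control (GAPDH for NEAT1, U6 for miR-495).
    """
    d_ct_sample = ct_target - ct_reference
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: NEAT1 amplifying ~2 cycles earlier after LPS implies ~4-fold upregulation.
print(relative_expression(24.0, 18.0, 26.0, 18.0))  # 4.0
```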
Cell counting kit-8 (CCK-8) assay
Cells were seeded into 96-well plates at a density of 2 × 10⁴ cells/well. Subsequently, 10 μL CCK-8 reagent was added to each well and incubated for 4 h at 37°C. The optical density (OD) was determined using a microplate reader at 450 nm [19].
Dual-luciferase reporter assay
The wild-type (WT) and mutant (Mut) sequences of NEAT1 were sub-cloned into pmirGLO vectors (Promega). Then, NC mimics or miR-495 mimics were co-transfected with these vectors into HMEEC cells, and the luciferase activity was detected by Dual-luciferase Reporter System (Promega).
Enzyme-linked immunosorbent assay (ELISA)
The levels of inflammatory cytokines were detected using ELISA kits (R&D Systems, USA) according to the manufacturer's instructions.
Flow cytometry
To detect apoptosis of HMEEC cells, an Annexin V-FITC Apoptosis Detection Kit (Yeasen, China) was utilized for the flow cytometry assay. Treated HMEEC cells were washed in PBS and resuspended in binding buffer. Thereafter, the cell suspension was supplemented with Annexin V-FITC solution (5 μl) and PI solution (5 μl), followed by a 15-minute incubation in darkness. The cell apoptotic rate was detected by a FACSCalibur flow cytometer (Becton Dickinson) [20].
Western blotting
Total protein was extracted from cells using RIPA buffer, resolved by SDS-PAGE, and transferred onto a PVDF membrane. The membrane was incubated with primary antibodies against p-p38, total p38, and β-actin at 4°C. Then, the membrane was incubated with the secondary antibody for another 2 h. Finally, the protein bands were visualized with an ECL kit (Pierce Chemical, USA) [21].
Statistical analysis
All experiments were conducted with three replicates. SPSS 17.0 (SPSS, USA) was employed for statistical analysis, and the data are presented as mean ± standard deviation. Statistical differences were analyzed using Student's t-test or one-way ANOVA. Statistical significance was set at P < 0.05.
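As a rough illustration of the comparisons described above, the following sketch runs a two-group Student's t-test and a one-way ANOVA with SciPy; the triplicate values are placeholders, not the study's data.

```python
# Minimal sketch of the statistical tests named above (Student's t-test, one-way ANOVA).
from scipy import stats

control = [1.00, 0.95, 1.05]   # e.g., relative NEAT1 level, three replicates (placeholder)
lps     = [2.10, 2.40, 2.25]

t, p = stats.ttest_ind(control, lps)
print(f"t-test: t = {t:.2f}, p = {p:.4f}, significant = {p < 0.05}")

# One-way ANOVA across more than two groups (e.g., shNC, shNEAT1, shNEAT1 + miR-495 inhibitor)
f, p_anova = stats.f_oneway([1.0, 1.1, 0.9], [0.4, 0.5, 0.45], [0.9, 0.85, 0.95])
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")
```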
Results
In the current study, we explored the role and molecular mechanism of NEAT1 in OME. Functional assays revealed that NEAT1 promoted the inflammation and apoptosis of OME through targeting miR-495 and activation of p38 MAPK signaling pathway, suggesting that NEAT1 might be a novel target for the diagnosis and treatment of OME.
NEAT1 and inflammation levels are elevated in OME
To examine the expression pattern of NEAT1 in OME, RT-qPCR was employed, and the results indicated that NEAT1 expression was upregulated in the serum and middle ear effusion of OME patients (Figure 1a). Moreover, the levels of IL-1β, IL-6, IL-8, and TNF-α were increased in OME patients compared with healthy controls (Figure 1b).
NEAT1 regulates inflammation, cell proliferation, and apoptosis in LPS-treated HMEECs
To investigate the effect of NEAT1 on OME, an in vitro OME model was established by treating HMEECs with LPS. The NEAT1 expression level was markedly upregulated in LPS-treated HMEECs compared with the control group (Figure 2a). Furthermore, the levels of inflammatory cytokines were enhanced in LPS-treated HMEECs (Figure 2b). Subsequently, LPS-treated HMEECs were transfected with shNC or shNEAT1. RT-qPCR confirmed that NEAT1 was downregulated by NEAT1 knockdown in LPS-treated HMEECs (Figure 2c). ELISA then revealed that inflammation levels were decreased in LPS-treated HMEECs transfected with shNEAT1 (Figure 2d). Moreover, NEAT1 silencing promoted cell proliferation and inhibited apoptosis in LPS-stimulated HMEECs (Figure 2e and f). These data demonstrate that NEAT1 knockdown inhibited the inflammatory response and apoptosis and promoted cell proliferation in OME.
miR-495 is a target of NEAT1
Using StarBase, the downstream targets of NEAT1 were screened, and the potential binding site between NEAT1 and miR-495 is presented in Figure 3a. Increasing evidence indicates that miR-495 acts as a vital regulator of the inflammatory response in various diseases, such as pneumonia [22], inflammatory bowel disease [23], and ankylosing spondylitis [24]. Moreover, RT-qPCR analysis indicated that miR-495 expression was decreased in the serum and middle ear effusion of OME patients (Figure 3b). We also found that miR-495 was decreased in LPS-induced HMEECs (Figure 3c). In addition, miR-495 overexpression suppressed the luciferase activity of NEAT1-WT but had no effect on the activity of NEAT1-Mut (Figure 3d). An RIP assay indicated that miR-495 and NEAT1 were markedly enriched in the Ago2 group (Figure 3e). Furthermore, RT-qPCR showed that miR-495 expression was enhanced by NEAT1 depletion in HMEECs (Figure 3f). These data indicate that NEAT1 negatively regulates miR-495 expression.
NEAT1 promotes OME progression by targeting miR-495 in HMEECs
To further investigate whether miR-495 is a downstream effector of NEAT1 in OME, LPS-treated HMEECs were transfected with shNC, shNEAT1, shNEAT1 + NC inhibitor, or shNEAT1 + miR-495 inhibitor. ELISA indicated that NEAT1 knockdown significantly decreased the levels of inflammatory cytokines, whereas miR-495 inhibition reversed this decrease (Figure 4a). Moreover, the downregulation of miR-495 partially abrogated the effects of NEAT1 silencing on cell proliferation and apoptosis in LPS-treated HMEECs (Figure 4b and c). These results indicate that NEAT1 modulates OME progression by sponging miR-495.
NEAT1 activates p38 MAPK signaling pathway by targeting miR-495 in LPS-treated HMEECs
Previous studies indicated that miRNAs can inhibit the inflammatory response by suppressing the activation of p38 MAPK signaling [20]. Western blotting showed that the upregulation of NEAT1 alone increased the expression of phosphorylated p38 MAPK, whereas NEAT1 overexpression combined with miR-495 mimics significantly decreased it (Figure 5a). These data suggest that p38 MAPK may be a downstream effector of NEAT1. To further determine the effects of p38 MAPK on the pathogenesis of OME, the p38 MAPK inhibitor SB203580 (10 μM) was used to treat LPS-treated HMEECs. The results indicated that SB203580 reversed the effects of NEAT1 overexpression on inflammatory cytokine levels, cell proliferation, and apoptosis (Figure 5b-d).
Discussion
Dysregulated lncRNAs are involved in the pathogenesis and development of various human diseases [25]. Previous studies have shown that NEAT1 participates in the inflammatory response through various regulatory mechanisms. For instance, NEAT1 promoted the inflammatory reaction by modulating the intestinal epithelial barrier in inflammatory bowel disease [26], and NEAT1 aggravated the inflammatory response through HMGB1/RAGE signaling in acute lung injury [27]. Hence, we speculated that NEAT1 might promote the inflammatory response in OME. Herein, we found that NEAT1 was highly expressed in OME and that, in LPS-treated HMEECs, it promoted cytokine production and apoptosis while inhibiting cell proliferation.
LncRNAs are widely reported to serve as ceRNAs by sponging specific miRNAs, thereby participating in the pathogenesis of various diseases [28,29]. For example, lncRNA XIST acted as a ceRNA of miR-599 to increase TLR4 expression in atherosclerosis [30]. LncRNA LOC100912373 promoted the occurrence and development of rheumatoid arthritis by sponging miR-17-5p to upregulate PDK1 [31]. In addition, NEAT1 has been reported to target different miRNAs in various diseases, including miR-23a in rheumatoid arthritis [32], miR-34c in diabetic nephropathy [33], and miR-27a-3p in acute kidney injury [34]. In this study, our results showed that miR-495 is a target gene of NEAT1. Zhou et al. indicated that miR-495 inhibited proliferation and inflammation by regulating β-catenin expression in rheumatoid arthritis fibroblast-like synoviocytes [35]. Du et al. reported that miR-495 relieved cardiac microvascular endothelial cell injury and inflammation by repressing NLRP3 [36]. miR-495 also attenuated the inflammatory reaction and accelerated bone differentiation of fibroblast-like synovial cells by targeting DVL-2 [24]. Herein, we found that NEAT1 knockdown facilitated cell proliferation and suppressed apoptosis and inflammation by targeting miR-495.
As a subfamily of the MAPK pathway, p38 is a vital regulator of diverse cellular processes, including cell apoptosis, proliferation, and inflammation [37]. LPS can induce activation of the MAPK pathway by binding to Toll-like receptors, resulting in an inflammatory reaction [38,39]. Studies have demonstrated that some miRNAs can suppress inflammation by targeting the p38 MAPK signaling pathway [40,41]. Herein, we demonstrated that NEAT1 activates p38 MAPK in OME by targeting miR-495. In addition, the p38 MAPK inhibitor SB203580 reversed the effects of NEAT1 overexpression on inflammation levels, cell proliferation, and apoptosis. These results indicate that NEAT1 activates the p38 MAPK signaling pathway by targeting miR-495 in LPS-treated HMEECs.
Conclusion
Our study verified that NEAT1 promotes inflammation and apoptosis by targeting miR-495 and activating p38 MAPK signaling. These findings provide a theoretical basis for the possibility of NEAT1 as a novel target for the diagnosis and treatment of OME. However, a limitation of this work is the lack of mechanistic experiments in animal models to confirm the modulatory effects of NEAT1 on OME; further studies should validate the function of NEAT1 in OME in vivo.
Data availability statement
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Disclosure statement
No potential conflict of interest was reported by the author(s). | 2021-11-03T06:17:30.977Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "c2a7cf83e04e09fc64dea4692291d6ea9ed029ad",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21655979.2021.1982842?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "3b20a210c8e54c359c021512cbd9f15badfbdcb5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252832095 | pes2o/s2orc | v3-fos-license | Accelerated Quantitative 3D UTE-Cones Imaging Using Compressed Sensing
In this study, the feasibility of accelerated quantitative Ultrashort Echo Time Cones (qUTE-Cones) imaging with compressed sensing (CS) reconstruction is investigated. qUTE-Cones sequences for variable flip angle-based UTE T1 mapping, UTE adiabatic T1ρ mapping, and UTE quantitative magnetization transfer modeling of macromolecular fraction (MMF) were implemented on a clinical 3T MR system. Twenty healthy volunteers were recruited and underwent whole-knee MRI using qUTE-Cones sequences. The k-space data were retrospectively undersampled with different undersampling rates. The undersampled qUTE-Cones data were reconstructed using both zero-filling and CS reconstruction. Using CS-reconstructed UTE images, various parameters were estimated in 10 different regions of interests (ROIs) in tendons, ligaments, menisci, and cartilage. Structural similarity, percentage error, and Pearson’s correlation were calculated to assess the performance. Dramatically reduced streaking artifacts and improved SSIM were observed in UTE images from CS reconstruction. A mean SSIM of ~0.90 was achieved for all CS-reconstructed images. Percentage errors between fully sampled and undersampled CS-reconstructed images were below 5% for up to 50% undersampling (i.e., 2× acceleration). High linear correlation was observed (>0.95) for all qUTE parameters estimated in all subjects. CS-based reconstruction combined with efficient Cones trajectory is expected to achieve a clinically feasible scan time for qUTE imaging.
Introduction
MRI has been an important diagnostic imaging modality, owing to its noninvasiveness and lack of exposure to ionizing radiation, while providing excellent visualization of anatomical structures and physiological functions. However, a major limitation encountered in the clinic is its relatively long scan time compared with other imaging modalities such as CT, X-ray, and ultrasound. MRI requires that patients remain still during scanning to avoid motion that can cause imaging artifacts and misregistration between images. Moreover, prolonged MR exams may be impractical for pediatric patients or those in pain. Therefore, accelerated MR acquisition is crucial to minimizing patient discomfort by reducing stationary periods. In addition, accelerated MRI scans may directly help reduce healthcare costs by allowing for higher patient throughput.
Despite achieving excellent soft tissue contrast, clinical MRI is limited in imaging short-T2 tissues such as tendons, cortical bone, ligaments, and menisci. Because their free induction decay (FID) signals decay rapidly, these anatomical structures remain undetected with conventional MR pulse sequences that have long echo times (TEs). Ultrashort echo time (UTE) MR sequences allow for direct imaging of tissues with short T2 values because they utilize significantly shortened TEs (on the order of microseconds), which cannot be achieved by clinical MR sequences (with TEs on the order of milliseconds) [1-5]. Three-dimensional UTE imaging typically utilizes a non-Cartesian center-out radial encoding scheme in which a 3D spherical k-space is encoded with rotating half-radial projection lines.
Theory
The theory behind compressed sensing stems from the idea of intentionally violating the Nyquist sampling rate by acquiring a smaller number of measurements. In MRI, the desired MR image is reconstructed by calculating the inverse discrete Fourier transform of fully sampled k-space data. Undersampling of the input k-space data reduces acquisition time at the cost of aliasing artifacts that occur in the spatial domain due to the Nyquist violation in k-space (i.e., a reduced field of view (FOV)) [23]. CS in MRI utilizes a randomized undersampling scheme to ensure that the aliasing artifact is noise-like (i.e., incoherent) in either the native spatial domain or the DWT domain. The generic CS MR inverse problem can be given as
\hat{x} = \arg\min_{x} \|Ax - y\|_2^2 + R(x), \quad (1)
where x is the complex image, y is the undersampled k-space data, and R is a regularization term. The sensing matrix A is composed as
A = PFS, \quad (2)
where P denotes a sampling operator, F denotes a discrete Fourier transform operator, and S denotes coil sensitivity information [24]. The first term on the right-hand side of Equation (1) ensures data fidelity, and the second term promotes sparsity in a certain domain (e.g., the DWT domain). For accelerated MRI with undersampled k-space data, the regularization term R in Equation (1) may be critical to suppress the noise-like aliasing artifact; it typically utilizes gradient information in the image domain, such as total variation (i.e., finite differences). In the literature, DWT-based CS combined with PI has proven to be successful in MRI reconstruction [25,26]. By employing the DWT as a sparsifying transform, the data representation becomes much sparser, thereby enabling CS to achieve its full efficiency.
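To make the role of Equations (1)-(2) concrete, the sketch below solves a toy single-coil, Cartesian version of the problem by alternating a k-space data-consistency projection with soft-thresholding of wavelet coefficients. This is only an illustrative stand-in for the multi-coil, non-Cartesian Cones reconstruction used in the study; the phantom, sampling mask, wavelet choice, and threshold are all assumptions.

```python
# Toy CS reconstruction: alternate k-space data consistency with wavelet
# soft-thresholding (a simple stand-in for solving Eq. (1) with a DWT sparsity term).
# Single coil (S = identity), Cartesian FFT, power-of-two image size assumed.
import numpy as np
import pywt

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def cs_wavelet_recon(y, mask, lam=0.02, n_iter=50, wavelet="haar", level=3):
    """y: undersampled k-space (zeros outside mask); mask: boolean sampling pattern."""
    x = np.zeros(y.shape)
    for _ in range(n_iter):
        # data-consistency projection: keep the measured k-space samples
        k = np.fft.fft2(x, norm="ortho")
        k[mask] = y[mask]
        x = np.fft.ifft2(k, norm="ortho").real
        # sparsity step: soft-threshold the wavelet coefficients
        coeffs = pywt.wavedec2(x, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = soft_threshold(arr, lam)
        x = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
    return x

# toy usage: 64x64 square phantom, ~50% pseudorandom sampling
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
mask = rng.random((64, 64)) < 0.5
y = np.fft.fft2(img, norm="ortho") * mask
rec = cs_wavelet_recon(y, mask)
print(float(np.abs(rec - img).mean()))
```

In an actual Cones reconstruction, the data-consistency step would instead involve the NUFFT of the Cones trajectory and the coil sensitivities S from Equation (2).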
Pulse Sequence
The UTE technique allows for direct acquisition of an FID signal with short T 2 values lasting only a few hundreds of microseconds after radiofrequency (RF) excitation, before major T 2 * decay has occurred. Figure 1a shows a pulse sequence diagram of 3D UTE-Cones sequence used for imaging in this study. The 3D UTE-Cones sequence employs a short rectangular radiofrequency (RF) pulse for nonselective excitation of spins, followed by data acquisition with efficient 3D spiral Cones trajectories. For the 3D Cones imaging, a 3D spiral arm is first generated [27] for each Cone (e.g., for~100-300 total polar angles to cover the 3D k-space through kz-axis). Then, the spiral arm is rotated around the kz-axis to generate different spiral arms and hence cover the kx-ky plane in each Cone, yielding 10,000-40,000 spokes to encode the 3D spherical k-space. UTE-Adiab-T 1ρ [12] and UTE-qMT [15] can be achieved by applying an additional pulse sequence module designed to achieve the desired signal contrast in front of the UTE-Cones sequence. Figure 1b,c show the signal preparation module used for UTE-Adiab-T 1ρ and UTE-qMT imaging, respectively. The 3D Adiab-T 1ρ UTE-Cones sequence used a train of adiabatic full-passage pulses to generate T 1ρ contrast, followed by 3D UTE-Cones data acquisition. For UTE-qMT, a Fermi pulse was used for MT preparation. After the pulse preparation, multiple spokes were sampled to speed up data acquisition (N sp : number of spokes per preparation). Figure 1d illustrates a simplified example of the 3D Cones-based k-space trajectory.
Recruitment
In this study, 20 healthy volunteers were recruited for knee MRI following the human study guidelines issued by our Institutional Review Board. The average age of the included volunteers was 38 years, with a range between 24 and 60 years. The cohort included 7 male and 13 female subjects. Additionally, a 27-year-old female subject with a previous anterior cruciate ligament injury was recruited as a pathologic case to demonstrate the feasibility of extending the technique to an abnormal dataset as well.
Imaging Parameters
MRI was performed on a clinical 3T MRI scanner (MR750, GE Healthcare, Milwaukee, WI, USA). An 8-channel knee coil was used for both RF transmission and signal reception. In UTE-Cones imaging, both RF and gradient spoiling were performed to remove the remaining transverse magnetization after each data acquisition. The 3D UTE-Cones sequences, previously reported in [7,12,15], were used for qUTE-Cones imaging. The k-space data were acquired with 23,869 spokes at the full Nyquist sampling rate using the imaging parameters listed in Table 1.
Table 1. Imaging parameters used for VFA-UTE-T1 [7], UTE-Adiab-T1ρ [12], and UTE-qMT [15]. Additional B1 correction was performed using the actual flip angle imaging (AFI) technique [28]. For qMT, a Fermi pulse was used for MT saturation.
Compressed Sensing-Based Image Reconstruction
The k-space data were retrospectively undersampled at four different levels (i.e., 25%, 35%, 50%, and 65%) using a pseudorandom bit-reversal ordering scheme. This was followed by iterative density compensation [29], which is crucial because the non-Cartesian Cones trajectory does not sample k-space with a uniform distribution of data points. Coil sensitivities were estimated using the complex images reconstructed with zero-filling in each channel using the nonuniform FFT [30].
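The sketch below shows one way such a bit-reversal ordering could be realized for retrospective spoke selection; the exact scheme used by the authors is not detailed, so the function and parameters here are assumptions for illustration.

```python
# Illustrative spoke selection: permute spoke indices by bit-reversal and keep the
# first fraction, which spreads the retained spokes roughly uniformly over k-space.
import numpy as np

def bit_reversed_indices(n_spokes):
    n_bits = int(np.ceil(np.log2(n_spokes)))
    order = [int(format(i, f"0{n_bits}b")[::-1], 2) for i in range(2 ** n_bits)]
    return np.array([idx for idx in order if idx < n_spokes])

def retained_spokes(n_spokes=23869, undersampling=0.50):
    """Return sorted indices of spokes kept at the given undersampling rate."""
    order = bit_reversed_indices(n_spokes)
    n_keep = int(round(n_spokes * (1.0 - undersampling)))
    return np.sort(order[:n_keep])

kept = retained_spokes(undersampling=0.50)   # 50% undersampling, i.e., 2x acceleration
print(len(kept))
```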
The objective function of CS reconstruction, based on the l1 and l2 norms, was posed as
\hat{x} = \arg\min_{x} \|PFSx - y\|_2^2 + \lambda \|\varphi x\|_1, \quad (3)
where F is the Fourier transform, P is a sampling operator, S is the coil sensitivity, x is the image to be reconstructed, y is the k-space data, λ is a regularization parameter, and φ is the discrete wavelet transform operator. The Berkeley Advanced Reconstruction Toolbox (BART) [31] was used to perform DWT-based CS reconstruction. The regularization parameter λ was empirically optimized for the current application and remained constant for the control group. The reconstructed images were input to the subsequent qUTE parameter quantification process. Figure 1e shows a complete block diagram of the entire workflow.
Data Analysis
Structural similarity (SSIM) was utilized to assess the reconstruction performance for morphological UTE-Cones imaging [32]. The SSIM combines three different components to yield the overall metric that includes luminance, contrast, and structure. A mean SSIM (MSSIM) index evaluates the overall image quality between completely sampled and partially sampled reconstructions for the various quantification parameters, as given by the following equation:
\mathrm{MSSIM}(A, B) = \frac{1}{M} \sum_{j=1}^{M} \mathrm{SSIM}(a_j, b_j),
where A is the reference (fully sampled reconstructed image) and B is the CS-reconstructed image, respectively; a_j and b_j are the image contents at the jth local window; and M is the number of local windows in the image. A mean of the MSSIM was calculated from the entire sample set to describe the achieved reconstruction performance. Pearson's correlation was calculated between qUTE parameters estimated with fully sampled and undersampled data, based on the mean values from all ROIs in all subjects (i.e., 20 subjects × 10 ROIs = 200 data points).
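For reference, the evaluation metrics described above can be computed as in the sketch below (scikit-image's structural_similarity already averages SSIM over local windows, i.e., it returns the MSSIM); the arrays are placeholders rather than the study's images or ROI values.

```python
# Sketch of the evaluation metrics: MSSIM between reference and CS images,
# plus Pearson's correlation and percentage error between ROI-mean qUTE parameters.
import numpy as np
from skimage.metrics import structural_similarity
from scipy.stats import pearsonr

ref = np.random.rand(128, 128)                    # fully sampled reconstruction (placeholder)
cs  = ref + 0.02 * np.random.randn(128, 128)      # CS reconstruction of undersampled data

mssim = structural_similarity(ref, cs, data_range=ref.max() - ref.min())
print(f"MSSIM = {mssim:.3f}")

full_roi_means = np.array([1.05, 0.98, 1.20, 0.87])  # e.g., T1 per ROI, fully sampled
us_roi_means   = np.array([1.02, 1.00, 1.18, 0.90])  # same ROIs, undersampled CS
r, p = pearsonr(full_roi_means, us_roi_means)
pct_error = 100 * np.abs(us_roi_means - full_roi_means) / full_roi_means
print(f"Pearson r = {r:.3f}, mean percentage error = {pct_error.mean():.2f}%")
```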
Morphological Assessment
CS reconstruction was compared with zero-filled reconstruction at various undersampling levels (i.e., 25, 35, 50, and 65%). In all 20 subjects, CS provided a discernible morphological improvement of the reconstructed images while zero-filled reconstruction showed pronounced streaking artifacts at high undersampling rates.
In Figure 2, the UTE-T1 images reconstructed at different undersampling rates using zero-filling or CS are shown. At 25% undersampling, the images were visually similar to those with fully sampled data. At 35% undersampling, a mild intensity change and contrast variation were pictured throughout the sagittal slice of the femur in the zero-filled reconstruction, while such changes were hardly visible using CS-based reconstruction (yellow arrows). At 50% and 65% undersampling, both CS and zero-filling introduced streaking, but CS showed improved image quality with higher details. Similar tendencies were observed in UTE-Adiab-T1ρ and UTE-qMT. In UTE-Adiab-T1ρ (Figure 3), streak artifacts were manifest in the tibial bone (yellow arrows) at 35%, 50%, and 65% undersampling rates for the zero-filled reconstruction. In particular, at a high undersampling rate (i.e., 65%) the images reconstructed with zero-filling showed blurred, compromised high-frequency details, while images with CS showed much-improved details. As seen in Figure 4, UTE-MT images obtained at a 50% undersampling rate using CS performed much better than the images reconstructed using the zero-filling method. Artifacts were consistently exhibited at higher undersampling rates in the zero-filled reconstruction as opposed to the CS reconstruction.
Figure 3. Comparison of UTE-Adiab-T1ρ-reconstructed images at different undersampling rates between zero-filled reconstruction (top row) and CS-based reconstruction (bottom row). The image reconstructed using the entire k-space data is shown as the reference; images reconstructed from undersampled k-space via zero-filling carry streaking artifacts that are resolved using CS-based reconstruction, as marked by the yellow arrows.
The MSSIMs from the zero-filled reconstruction and CS reconstruction with the total dataset of 20 subjects are presented in Table 2. In all qUTE techniques, the MSSIM dropped with an increased undersampling rate due to exacerbated reconstruction error. For most cases, the MSSIM with the CS-based reconstruction was over 0.9 with up to 50% undersampling, while the zero-filled reconstruction recorded a much lower value (~0.77).
Table 2. Comparison of zero-filled (ZF) and CS reconstruction using the MSSIM measure at various undersampling rates. MSSIM values for the various flip angles of UTE-T1 show that, with increasing undersampling rate, MSSIM falls below 0.9 for CS images and to a much lower value (~0.7) for ZF-reconstructed images. For UTE-Adiab-T1ρ at different spin-lock times, an undersampling rate of 65% still yields a significantly higher MSSIM (around 0.95), demonstrating the efficiency of CS-based reconstruction, whereas ZF-reconstructed images average 0.8. In UTE-qMT with two MT powers (1500° and 500°) at multiple frequency offsets (2, 5, 10, 20, and 50 kHz), a mean MSSIM of 0.93 at 50% undersampling was recorded, showing the CS reconstruction strategy to be considerably more reliable than ZF. The recorded MSSIM for CS is closer to the fully sampled data (<10% error) than ZF (approx. 30% error).
qUTE Parameter Mapping
To illustrate the pixelwise mapping of T1, T1ρ, and MMF parameters, four ROIs representing four different tissue types (i.e., PT, AFC, PCL, and PLM) out of a total of 10 ROIs were selected. The results of quantitative parameter mapping in the selected ROIs are visualized in Figures 5-7. Figure 5 shows the results of UTE-T1 mapping at various undersampling levels, where the estimated parameters fall within the admissible range, as reported in [4]. Figures 6 and 7 display the results of UTE-Adiab-T1ρ and UTE-qMT mapping at different undersampling levels.
Table 3 provides the mean and standard deviation of all qUTE parameters estimated in all tissue ROIs at different undersampling levels, as well as the corresponding mean percentage errors with respect to the reference values obtained from fully sampled data. The estimated parameters from CS-reconstructed images exhibited a percentage error below 5% for up to 50% undersampling in most of the ROIs. PCL and PLM exhibited relatively higher errors, presumably due to the high partial volume effect and the small size of the ROI, which is more susceptible to reconstruction errors.
Table 3. Parameter quantification for 10 ROIs with mean and standard deviation along with percentage error. A total of 20 subjects were evaluated for the current study. Error estimates include either an underestimation or overestimation from the reference value, denoted by the sign. Abbreviations of each ROI are presented on the left side of the table. Mean error can be estimated as less than 5%, with exceptions in PCL, which is a difficult region to quantify due to the small ROI size and strong partial volume effect through planes (ACL = anterior cruciate ligament, AFC = anterior femoral cartilage, MFC = medial femoral cartilage, PC = patellar cartilage, PFC = posterior femoral cartilage, TPC = tibial plateau cartilage, ALM = anterior lateral meniscus, PLM = posterior lateral meniscus, PCL = posterior cruciate ligament, and PT = patellar tendon).
Figure 8 depicts the scatter plots of mean qUTE parameters estimated in all 200 ROIs from all subjects, as well as the Pearson's correlation between fully sampled and undersampled CS with various undersampling levels. Each ROI for each tissue type is coded with a different color to identify the outlier points. The high degree of linearity indicates that the quantitative parameters from undersampled data (y-axis in scatter plots in Figure 8) correlate well with the corresponding parameters from fully sampled data (x-axis in scatter plots in Figure 8), despite the reduced data. Among all ROIs, qUTE parameters estimated in PCL (orange dots) included many outliers due to the abovementioned reasons (i.e., partial volume effect and small ROI).
CS Reconstruction from a Degenerated Knee
Figure 9 portrays a special case of a 27-year-old female patient who has a tibial tunnel from a previous ACL reconstruction, which is clearly depicted in the CS-reconstructed images (red arrows). Susceptibility artifacts are shown in the tibial tunnel (red arrows) on the distal portion, as shown in the sagittal UTE MRI data of the knee. Similar to the healthy control dataset, the presence of artifacts and blurring of information is visible in various sections of images from a zero-filled reconstruction. CS reconstruction, on the other hand, provides a consistent and reliable image quality up until 65% undersampling (yellow arrows).
Figure 9. CS reconstruction in a patient with a degenerated knee. As seen, the presence of an abnormality (tibial tunnel from an ACL reconstruction, as shown by red arrows) is clearly depicted in images with CS reconstruction. Streaking artifacts in zero-filled reconstruction are dramatically reduced in CS reconstruction (yellow arrows).
Discussion
In this work, the feasibility of accelerated qUTE-Cones imaging using the CS reconstruction method was explored for the first time to our best knowledge. The qUTE parameter mappings inherently require multiple repeated acquisitions of images to achieve different contrasts used in signal modeling and parameter fitting. For example, T 1ρ mapping requires the acquisition of multiple images with different TSLs, resulting in different degrees of T 1ρ weighting (i.e., based on exponential decays) to fit the relaxation parameter. This in turn imposes a longer scan time, which is especially problematic in in vivo applications. A rudimentary approach to reducing the image acquisition time of any mapping is to limit the number of sequences (i.e., data points in the parameter domain), which could compromise accuracy and precision of the quantitative parameter estimation. Hence, it is vital to develop an accelerated qUTE technique that can reduce an acquisition time while generating parameter maps without significantly compromising the accuracy.
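As a concrete illustration of the parameter fitting that drives this multi-acquisition burden, the sketch below fits a mono-exponential T1ρ decay to signals acquired at several spin-lock times (TSLs); the model form, TSL values, and signal values are assumptions for illustration, not the study's fitting pipeline.

```python
# Illustrative mono-exponential T1rho fit, S(TSL) = S0 * exp(-TSL / T1rho),
# from images acquired at multiple spin-lock times; all values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def t1rho_model(tsl, s0, t1rho):
    return s0 * np.exp(-tsl / t1rho)

tsl_ms = np.array([0.0, 12.0, 24.0, 48.0, 96.0])                      # spin-lock times (ms)
signal = 100.0 * np.exp(-tsl_ms / 40.0) + np.random.normal(0, 1.0, tsl_ms.size)

(p_s0, p_t1rho), _ = curve_fit(t1rho_model, tsl_ms, signal, p0=(signal[0], 30.0))
print(f"fitted T1rho ~ {p_t1rho:.1f} ms")
```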
In this study, morphological image quality and accuracy of quantification maps were assessed to show the effectiveness of CS reconstruction in three qUTE techniques. In the morphological assessment, CS achieved robust reconstruction of UTE images with highly reduced reconstruction errors at 50% undersampling (MSSIM > ~0.9), as shown in Table 2. In the experiment with the quantification of pixelwise qUTE parameters, CS yielded a percentage error below 5% and a Pearson's linear correlation over ~0.95 with 50% undersampling in most ROIs. Therefore, it is expected that CS will shorten the total acquisition time of the qUTE techniques by a factor of 2 without dramatic degradation of the reconstructed image quality and accuracy in qUTE parameter estimation. In the current qUTE imaging protocol, with 50% undersampling, CS reconstruction is expected to shorten the scan time down to 3 min 4 s, 4 min 27 s, and 4 min 49 s for UTE-T1, UTE-Adiab-T1ρ, and UTE-qMT, respectively, which is more suitable for addition to the clinical MSK MRI workflow.
Despite being a highly useful tool for rapid qUTE imaging, the downside of CS is the need for finding an optimal regularization parameter involved in the reconstruction process, which is typically performed manually, and thus is time-consuming. CS is based on an information theory that allows for data reconstruction from highly undersampled measurements. The application of CS for clinical scenarios requires the estimation of optimal regularization parameter λ (i.e., Lagrange multiplier in the objective function to minimize). In this work, we empirically identified the best-suited regularization parameter that was maintained the same for all the cases and for all different qUTE techniques, which yielded a decent quality of image reconstruction, as demonstrated in the experimental results. This implies that the regularization parameter tuning for CS reconstruction is insensitive to the image contrast in qUTE-Cones.
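One simple way to carry out the empirical tuning described above is a grid search over λ scored against a fully sampled reference, as sketched below; the candidate values and the recon callable are hypothetical and not taken from the study.

```python
# Hypothetical λ grid search: reconstruct at several candidate values and keep the
# one whose image is closest (by SSIM) to the fully sampled reference.
from skimage.metrics import structural_similarity

def pick_lambda(recon, reference, candidates=(1e-4, 5e-4, 1e-3, 5e-3, 1e-2)):
    """recon(lam) must return a reconstructed image with the same shape as reference."""
    data_range = reference.max() - reference.min()
    scores = {lam: structural_similarity(reference, recon(lam), data_range=data_range)
              for lam in candidates}
    best = max(scores, key=scores.get)
    return best, scores
```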
A limitation of this study is that only 20 healthy volunteers were included, without a cohort of patients. Although we presented a special case of one patient with an abnormality in the knee joint, the inclusion of more cases with different pathologies will be needed to demonstrate clinical feasibility in the general population. In future studies, we propose to recruit more patients with osteoarthritis to evaluate the diagnostic power of CS-based qUTE-Cones imaging.
Conclusions
We have demonstrated the feasibility and efficacy of CS reconstruction for quantitative 3D UTE-Cones imaging of the knee. The results showed that images can be reconstructed from undersampled data with a highly reasonable imaging standard to allow for robust qUTE parameter mapping with reduced error. | 2022-10-12T15:37:42.529Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "d72976d0655202352c3a70d976d9646fe4ec140d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/19/7459/pdf?version=1665380671",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2f8edecb4e0919569442a1fbc283895337f209d4",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
62595381 | pes2o/s2orc | v3-fos-license | Expectations and Effectiveness Using CD-ROMs: What Do Patrons Want and How Satisfied Are They?
As academic libraries wrestle with how to provide more CD-ROMs and more online remote access to databases to keep pace with demands, they first need to evaluate the effectiveness of their current services. This study identifies students' preferences and effectiveness using CD-ROMs and assesses the whole CD-ROM environment at one university library. A questionnaire was used to ask students their preferences, confidence levels, skills, and training in searching CD-ROM databases, as well as their satisfaction with CD-ROM services (see the appendix). Analyses of the data revealed that users are satisfied and prefer CD-ROM databases over the print indices. Although students indicated they are confident searching, they admitted that they need to know basic search strategies and that they want more personal assistance, hands-on training, and remote online access to databases. Moreover, how can academic libraries meet increasing user demands for more database access and services when their resources are not increasing? ince
cial constraints and staff shortages in the 1990s, how are they going to respond to the increasing demands of users in light of decreasing resources?
A recent study on effective library use at the University of Rhode Island (URI) revealed that students most frequently request better availability of materials and more assistance and training, especially on CD-ROM databases. 1Thus, this study is a follow-up investigation of students' expectations, effectiveness, and satisfaction with CD-ROM services at the URI Library.
During spring semester 1995, training sessions on CD-ROM databases were provided for URI students in Writing 101.The goal of the training was to produce effective and independent searchers on CD-ROM products.Students in these sessions were surveyed to assess the training they received.In addition, all users of CD-ROM databases at the URI Library during the month of April 1995 were asked to complete a survey to evaluate their needs, skills, and satisfaction using CD-ROM products and services.All responses were voluntary and anonymous.
The CD-ROM Environment at the University of Rhode Island Library
The process of acquiring CD-ROMs at URI began in 1987 as a joint purchase of Compact Disclosure with the Business School.Because of the popularity of this first CD-ROM, the reference librarians evaluated the library's DIALOG and BRS online services in order to justify the purchase of additional CD-ROM databases.After an evaluation of user statistics, it became apparent that the most popular and frequently requested online databases were MEDLINE, PsycLit, and ERIC.Based on these findings, a CD-ROM policy was created that addressed the priorities for purchasing reference CD-ROM databases.The reference librarians recommended the purchase of MEDLINE, PsycLit, ERIC, the Government Documents Catalog Service, MLA, and UMI's Periodical Abstracts as the first tier of CD-ROM databases.
During the following three years, the library gradually added four of the Wilson titles and additional workstations.In 1990, LANTASTIC was acquired and ten workstations were networked.The popularity of the CD-ROMs skyrocketed and students queued up to use them, even refusing to use the print versions.Multiple-user subscriptions had to be purchased to ease the congestion.A declining budget was stretched to purchase more hardware and multiple-user licenses.
In 1992, Sylvia Krausse wrote a grant proposal to equip a CD-ROM room with appropriate workstations and networked printers.The Champlin Foundations granted the proposal and provided funding for a local area network (LAN), connecting eighteen workstations with two servers and several CD-ROM towers.The library added more subscription databases, namely UMI's Newspaper Abstracts, Dissertation Abstracts, Newsbank, CINAHL, PAIS, Biological Abstracts, Engineering Index, Textile Technology Digest, America: History and Life, Historical Abstracts, and Moody's International.
In the fall of 1993, a LAN of CD-ROM workstations was officially connected; and by the spring of 1995, there were twenty-two CD-ROM databases networked using LANTASTIC. Ever since the first tier of CD-ROMs was acquired, statistics showed a steady increase in CD-ROM use. With the addition of the CD-ROM room and more requests for assistance, it became apparent that during the busiest hours of the day full staffing of the CD-ROM room was a necessity. With the opening of the CD-ROM room, students were hired between the hours of eleven in the morning and two in the afternoon. Because of the pressing demand for assistance in the room, one hour was added each semester until the CD-ROM room was fully staffed from ten in the morning until five in the afternoon, and again from six to nine in the evening during the week and on the weekend during the majority of the hours that reference service was provided.
In addition to personal assistance, every two workstations share an instructional notebook that contains one-page instructional guides by database as well as by interface (e.g., Silverplatter, UMI, or Wilson).During the times that the room is not staffed, patrons either use these binders or go to the Reference Desk for additional help.It was soon evident that although users want assistance or instruction at point of use, the librarians also would need to offer additional training sessions in a classroom setting to larger groups to ease the demand in the CD-ROM room for one-on-one instruction.Thus, general hands-on training sessions were offered in addition to the instruction during the regular bibliographic instruction classes.
Literature Review
A survey of the literature reveals that the introduction of CD-ROMs in academic libraries has created an increase in demand for point-of-use assistance by the reference staff. With the technology explosion in libraries, the number of workstations, databases, and queries for assistance have multiplied in reference departments, but the number of staff has not increased accordingly. 2 Moreover, the increases in CD-ROM products and the on-demand requests for training may be affecting the quality of reference services and staff morale, and may even cause staff burnout. 3 In one research study on bibliographic instruction methods on CD-ROM databases, Dorothy F. Davis offered a practical solution to alleviate staff stress: "librarians must provide more effective group instruction or offer self-instructional alternatives." 4 Vendors, however, have promoted their products for end users, claiming little or no assistance is needed from professional staff. Although these products are relatively easy for end users, most patrons are not familiar with the multitude of subject-specific databases and/or do not know which one(s) to select. Students faced with multiple databases are often overwhelmed by the many choices and need to seek help. CD-ROM products, however, do offer a cost-effective way for users to search databases compared to mediated online searching, which is usually costly and time-consuming. Thus, several studies have addressed the increased demands for assistance using CD-ROMs and identified the impact of CD-ROM products or services on libraries and staff. 5 Other CD-ROM studies have evaluated specific training or bibliographic instruction programs in academic libraries. 6 Some studies have investigated user perceptions about, or satisfaction with, using specific CD-ROM products. 7 On the questionnaires used to assess CD-ROM use, most respondents revealed a high degree of user satisfaction. A study titled "Value-added Bibliographic Instruction" by Stanley D. Nash and Myoung Chung Wilson showed that although students were satisfied with their searches, an analysis of their searches revealed that more than one-third of the citations retrieved by students were useless or inappropriate for their topics. 8 Although these studies contributed to a broader understanding of user needs and satisfaction with CD-ROMs, professional concerns continue to grow about these products. Without a standard for CD-ROM database products, each vendor develops a unique interface system, code, terminology, and search strategy
that is not easily transferred from one to the other by users.Students are often confused about which databases to use in order to find appropriate sources for their research needs.Although some interfaces provide easier search strategies, many undergraduates are unaware of specialized subject databases and are familiar only with one general CD-ROM database from their high school or public library experiences.Furthermore, they are unfamiliar with the differences in vendor or interface systems, serials indexed, or thesauri.The literature review provided insights from numerous studies addressing some of these concerns related to CD-ROM use and instruction, but none of the investigations assessed the total CD-ROM environment in an academic library.
One significant investigation, however, assessed the CD-ROM environment (fifteen commercial products networked) at the University of North Carolina at Chapel Hill and provided the most broad-based perspective. 9The goal of that study was to evaluate the CD-ROM "system's success to gather information that would enable the staff to evaluate and improve the service." 10 Although an electronic survey was used, apparently no further inquiry was performed to assess either user training or satisfaction with CD-ROM products and services.
Utility of Study
What follows, therefore, is an attempt to build on these previous studies by evaluating the whole CD-ROM environment at one academic library at URI.The goal of this study is to determine users' preferences and needs, their confidence in searching, the effectiveness of their training, and their overall satisfaction with CD-ROM services.The authors hope that this study will help library staff to evaluate their CD-ROM environment and to plan improvements in the total electronic delivery of databases.
Design and Methodology
This study is the result of a collaboration between the principal investigators, Cheryl McCarthy and Sylvia Krausse.Arthur Little, a doctoral student, performed statistical analyses of the data from the questionnaires and provided input to the Design and Methodology section.
This study was designed as a needs assessment for users of CD-ROM databases at the URI Library.After an initial study indicating student demands for more instruction using CD-ROMs, the reference librarians initiated a series of handson CD-ROM training sessions for freshmen in Writing 101 classes in the library's new electronic classroom (ECL). 11A survey was designed to evaluate both CD-ROM needs and services for students in CD-ROM training classes, as well as for a sample of students using the CD-ROM services during the spring of 1995.
The specific research objectives for this study were: 1. to determine users' preferences in searching CD-ROM databases versus print indices; 2. to assess users' confidence in their searching skills; 3. to identify what users need to become more effective searchers on CD-ROMs; 4. to assess effectiveness of CD-ROM training sessions; 5. to assess users' satisfaction with CD-ROM services.
An eleven-item questionnaire addressing these objectives was pretested and revised to eliminate ambiguity in language and meaning.This process provided both a reliability and a validity check to ensure that the survey items reflected the research objectives of the principal investigators.A total of 750 questionnaires were distributed-to both the students in the Writing 101 classes who attended the training and everyone who used the CD-ROM facility during the month of April 1995.A return box was March 1997 provided at the CD-ROM desk, and 489 completed questionnaires were returned.No attempt was made to survey either nonusers or users of print indices in other locations.
Data Analysis
Analyzing the questionnaire data, using both quantitative and qualitative measures, provided the researchers with an in-depth interpretation of the five research objectives.Quantitative statistical methods were used to evaluate the numerically measurable survey variables.These included demographic and usage information, database preference information, and three five-point Likert scales to measure perceived CD-ROM confidence levels (question five), satisfaction levels (question eleven), and effectiveness of training (question ten-applied to Writing 101 groups).Qualitative means were used to analyze open-ended followup queries to the Likert scale questions.These follow-up items were phrased in the form of "Please explain" or "Why?" Content analysis was chosen as one way to analyze these data because of its advantages "in making inferences by objectively and systematically identifying specified characteristics of messages" into categories. 12One advantage in using content analysis is that specified characteristics occurring in open-ended questions can be quantitatively identified, studied, and rank-ordered to assess common traits or problems.By employing both qualitative and quantitative procedures in content analysis, insights can be gleaned from both what is seen and not seen, and from what is reported and not reported.Moving between the quantitative statistical analysis and the qualitative content analysis, the investigators gained insights into the full meaning of the data.Techniques for the coding, displaying, and verifying of data are adapted from recommendations by Ole Holsti. 13The principal investigators also made unobtrusive observations of CD-ROM users.Thus, a process of triangulation was used to achieve "intersubjectivity" and full insight into the meaning of the data. 14Furthermore, the investigators were able to draw inferences into students' effectiveness in and satisfaction with training and CD-ROM services.
The quantitative data from the 489 completed questionnaires were converted into machine-readable form and imported into the statistical analysis package SPSS/PC+.The data were then tabulated and examined for accuracy and for "out-of bounds" conditions, which were then corrected.For example, twenty of the questionnaires were completed by nonstudents, so they were removed from the analysis.The responses from the fivepoint confidence scale were collapsed into the following three categories so that a two-way contingency table using this variable would have no expected frequency below five for any cell: "Less Confident" combined responses one and two; "Moderately Confident" represents response three; and "More Confident" combined responses four and five. 15Prior to gathering these data, the investigators set their alpha level to .05 to reduce the probability of a Type I error.In order to test the degree of association between preferences for the CD-ROM databases and their equivalent print indices, the number of responses for each database was summed.These sets of values were then ranked for each medium and a Spearman's Rank Order Correlation Coefficient (Spearman's Rho, r s ) between the two sets of ranked databases was calculated using the formula r s = 1-((6D^2)/ (N(N^2-1))). 16he principal investigators used content analysis to code responses from the The principal investigators also made unobtrusive observations of CD-ROM users.
open-ended questions and observations.The notes were coded into categories adapted from the responses.Categories were collapsed to combine similar comments into broader subject areas.To ensure intercoder reliability, data coded by one principal investigator were verified by the other.
Interpretation of Data User Profile
Of the 489 users, 79 percent were undergraduate students, 17 percent were graduate students, and 4 percent were faculty, staff, or other.Of the undergraduates, 50 percent were upper level and 29 percent were lower level.Thus, the survey respondents reflect the overall university population of approximately 80 percent undergraduates, who were the intended target audience for this study.When asked if this was the first time using CD-ROM databases this academic year, 16 percent indicated yes, compared to 71 percent who said they had used the databases between two and ten times this academic year and 29 percent who had used them more than eleven times.Thus, with the exception of the freshmen in Writing 101, most students surveyed were experienced CD-ROM users, having used them previously this year (see tables 1 and 2).
User Preferences in Searching CD-ROMs
To determine users' needs and preferences in searching CD-ROM databases versus print indices, the questionnaire asked them to identify the CD-ROM databases or print indices most frequently used from the list of the twenty-two versions.Table 3 shows the rank order of databases and print indices selected and the Spearman's Rho comparing the strength of the relationship between the CD-ROM version and the print version.The Spearman's Rho shows a high level of agreement between the two database sets indicating a clear pattern of similar database preferences, which is independent of format.It is also curious to note that students indicated using a print version of Disclosure, despite the fact that no print version exists.
The most frequently cited CD-ROM databases were similar to the print versions, revealing the top eight preferences as Social Sciences Index and Periodical Abstracts, closely followed by Humanities Index, ABI Inform, PsycLit, Newspaper Abstracts, Newsbank, and ERIC.Although the rank order was similar, the CD-ROM versions were cited at least three times more frequently than the print versions.For example, the number one CD-ROM database preferred was Social Sciences Index cited by 104 users, whereas the print version was the sec-ond highest preferred and cited by only twenty-three users.No print version was cited more frequently than its comparable CD-ROM version.
Furthermore, when users were asked which version they prefer, 85 percent chose the CD-ROM version compared to 11 percent for print, and 4 percent checked both. The most frequently cited reasons given for CD-ROM preference by 77 percent of respondents include: easier to use, better subject access, and a faster and more efficient search process. An additional 8 percent of respondents preferring CD-ROMs gave the following reasons: can get a printout of sources, provides abstracts, can download, and provides more information. The contingency analysis (see table 5) suggests that as students progress in class level through college and have more experience searching, their confidence level also increases when searching CD-ROM databases.
User Needs to Become More Effective Searcher
When users were asked "What more do you need to know to help you become more effective at searching CD-ROMs?" student responses were rank-ordered as "How to develop a search strategy," "How to choose the right CD-ROM database," "How to use the various software interfaces," and "How to limit search" (see table 6). When observing students using CD-ROMs, one of the most frequently asked questions was "Which one (database) do I use?" Some students appear to search the databases randomly. When users were asked in question 8 what types of training they would like for searching CD-ROM databases, the rank-ordered responses were: personal assistance from reference staff, hands-on workshop, online Help screen, and printed guidelines. The least frequently desired item was lecture/demonstration (see table 7).
It is not surprising that patrons want help at the point of need. Ironically, personal assistance at point of need is the most costly, most time-consuming, and most demanding on reference staff. Some studies indicate that these new demands are leading to reference staff burnout, yet it is the item most desired by students. Users' attitudes seem to suggest that they want immediate gratification and one-stop shopping when using CD-ROMs. Students complain that they must perform several steps including checking citations with serial holdings and retrieving journals on microform or hard copy. They would prefer full text of articles on CD-ROM in addition to citations. The results confirm the expectation that when it comes to using electronic databases, users find more citations and demand more resources, faster delivery, and more hands-on assistance.
Effectiveness of CD-ROM Training Sessions
From the Writing 101 classes that attended lecture/demonstration sessions early in the spring semester of 1995, five instructors decided to return in April, when the new ECL opened, for hands-on instruction on how to use CD-ROMs. The ECL is equipped with fifteen student workstations and one teacher station with a 42-inch monitor. The instructors and students were introduced to four Wilson CD-ROM databases: Social Sciences, Humanities, Applied Science and Technology, and Biological and Agricultural Index. The purpose of the session was to enable students to become more effective searchers. Of the eighty-one students, twenty students indicated the training was less than effective, thirty-seven students indicated it was moderately effective, and twenty-four students felt it was highly to extremely effective in helping them search (see figure 1).
When asked to explain their comfort levels after the training session, students offered a variety of responses. Their reasons were split, with twenty-nine students commenting positively and twenty-eight commenting negatively. The positive responses indicative of comfort level with training included: easier to understand this database, feel comfortable with CD-ROMs, know how to search, clear explanation, good demonstration, easier to use, helpful. The negative comments indicative of discomfort after training included: still need to know how to use CD-ROMs, need to know terms to search, still confused, too much information, and need hands-on training to use the computer. Although the majority of students felt more comfortable after the training, at least one-quarter of the students still needed and wanted more training with hands-on computer use. One can infer that one hour of training is not enough, especially considering the fact that students in each class have varying levels of computer expertise. When teaching new computer skills, learners need to be very familiar with the keyboard and have several hands-on sessions in order to master the skills presented. This methodology is consistent with the users' preference for either personal assistance or hands-on workshops over the lecture/demonstration method. Thus, it appears that students know what they need in order to become more effective searchers, and they expect training and assistance in academic environments.
FIGURE 1 Effectiveness of CD-ROM Training Sessions
Question 10: "How effective was the training in helping you become a more effective searcher?"
Overall User Satisfaction with CD-ROM Services
Users of URI's CD-ROM databases were overwhelmingly positive when asked "How satisfied are you with the CD-ROM services?"Ninety-three percent indicated they are moderately to extremely satisfied, whereas only 7 percent indicated dissatisfaction.Figure 2 confirms other studies revealing that CD-ROM users are mostly satisfied with products and services with 284 out of 305 students moderately to extremely satisfied (see figure 2).
Although 93 percent said they were satisfied, only about 50 percent gave positive comments in response to this question.Negative comments included complaints about services, equipment or systems hassles, and suggestions for improvements to CD-ROM services.For evaluating and planning future resources and services, the comments are most useful for the reference staff.
When asked to explain their level of satisfaction, 141 users commented. Comments were collapsed into four categories: positive, negative, equipment hassles, and a wish list. The following views were categorized as positive comments: very helpful staff; someone is available to help; good variety of databases; good access to information; great tool for research; easy to use; convenient, fast, efficient, and effective services. The opinions offered by thirty-two users were coded as negative. Overall, students reported feeling confident searching, although upper-level undergraduate and graduate students expressed a higher confidence level than lower-level undergraduates. To become more effective searchers, students identified what they need to know: how to develop a search strategy, how to choose the right CD-ROM database, how to use the various software interfaces, and how to limit searches. Furthermore, when asked what training methodology they would prefer, the majority selected personal assistance at point of need or hands-on workshops.
Even though the majority of users were satisfied with current services, a small percentage did offer suggestions to improve services, such as providing more journals linked to CD-ROM citations, more online access to databases with full text of journal articles, and more staff to assist with searches.
The question now remains: How realistic are these requests for the URI Library or other academic libraries where budgets are shrinking and staff is decreasing? In January 1996, the library was forced again to cancel more journals; approximately $200,000 worth of journals were eliminated, and future increases in projected revenue will be insufficient to pay for projected inflationary increases on the library's serial subscriptions. Therefore, further reductions in the number of subscriptions are likely in the future. Ironically, although the budget has decreased, the CD-ROM and electronic database services have increased, but at the cost of eliminating other resources, especially print indices that are available in CD-ROM or online format.
One answer to dwindling budgets and increasing demands for more electronic access is more cooperation and shared resources.Since the formation of a consortium of five academic libraries in Rhode Island, HELIN (Higher Education Library Information Network), the URI Library has moved one step further in that direction.The OPAC called HELIN was purchased by the consortium consisting of the University of Rhode Island, Rhode Island College, Community College of Rhode Island, Providence College, and Roger Williams University.The consortium also negotiated its first joint online database, the Expanded Academic Index (EAI) 2000, which includes 1,500 journals with one-third also available in full text.This cooperative venture enabled URI to offer remote online access to a popular database because of the shared cost among consortium members.In addition, two laser printer workstations are dedicated for printouts from the EAI database.Under further consideration is a networked print solution for all electronic databases, including the OPAC.
With the added ability to search the EAI database remotely, congestion and requests for assistance in the CD-ROM room have been reduced. Because of the popularity of remote access to databases, the HELIN consortium continues to negotiate access to other resources available from Internet providers. After a three-month trial period in early 1996, several databases are now also offered via FirstSearch. As a member of other consortia, URI also acquired Britannica Online and the Engineering Village.
With more demand for online access to databases and full-text journal articles, the library will need to continue to evaluate which services are essential and which resources are to be eliminated. Innovative ways to fulfill these demands have to be provided. Current library policy calls for the elimination of duplicate resources. Thus, some CD-ROM databases and print indices would be canceled if they were also provided by the comprehensive database EAI. To ease the loss of canceled journals, the library also purchased UNCOVER's customized gateway to provide effective document delivery service and to subsidize unmediated article ordering for faculty and graduate students. This service is not currently available for undergraduate students.
The librarians continue to offer hands-on training sessions for the OPAC, CD-ROM databases, EAI, and other online services via the Internet, such as FirstSearch and UNCOVER, which in turn should alleviate some demand for personal assistance in the CD-ROM room at point-of-use. The library will need to develop a public relations campaign to inform the university community about its changing resources and services, including both CD-ROM and online sources. In addition, staffing the public services area may require creative approaches, such as internships and reshifting of personnel from other units. Furthermore, academic libraries need to rethink reference services, as advocated by David W. Lewis's call for organizational change in academic libraries.17
Future Implications
CD-ROM databases were much in demand for the past decade, and they continue to be a popular choice among students and researchers in academic libraries. With the expansion of database access by Internet providers, however, there is an increasing demand for more online remote access to full-text databases. It appears obvious that academic libraries are currently wrestling with the following unresolved questions regarding electronic access to databases: 1. Cost: Can libraries afford to provide remote access to electronic databases with more subscriptions and more site licenses while maintaining and increasing CD-ROM networks? Should librarians abandon CD-ROM networks in favor of remote access to databases via the Internet?
2. Medium: What formats will be available in the future for libraries in order to meet the demands of users? Will libraries be able to provide a variety of formats including duplication of resources? Is it advisable to rely on the Internet, or would it be more prudent to develop campus-based tape loads with more expensive equipment?
3. Long-range implications: What formats will or will not be available in ten years, and how can libraries ensure adequate archival resources if they are leasing services they do not own? Do libraries own the data when a CD-ROM subscription ends or vendors change?
4. Service: What changes will occur in the delivery of reference services? Will academic libraries offer more personal assistance at point of need, more intensive bibliographic instruction including hands-on training with new technologies, or more remote online services? Will librarians provide mediated online searching or electronic reference queries to help users with their online searches? Will libraries offer an amalgamation of services, including CD-ROM, tape loads, Internet, LAN, WAN, and print indices? Or will there be another unforeseen technological opportunity?
Academic libraries and librarians are at a crossroads and will need to choose what resources to offer, regardless of medium. Libraries will not be able to afford a full array of formats for all resources, and thus they will need to decide which resources and which media to offer in the future. Cooperative collection development and shared resources among libraries in consortia seem to be one of the best routes for academic libraries to take.
TABLE 3 Rank Order Preference of Databases
Question 4 presented a list of 22 CD-ROM and print databases, and the participants were asked to indicate which they used most often. The responses were summed for each database by type of medium and then rank ordered. A Spearman Rank Correlation Coefficient (r_s) was computed to find the association between the two formats: r_s = +.954, p < .001.
TABLE 5 Contingency Table-"Confident" by "Academic Status"
Question 6: "How confident are you about your searching skills on the CD-ROM databases you use most often?" | 2019-02-14T14:19:13.506Z | 1997-03-01T00:00:00.000 | {
"year": 1997,
"sha1": "4a4a9b13302fc0933176ae54b7123267b03927b7",
"oa_license": "CCBYNC",
"oa_url": "https://crl.acrl.org/index.php/crl/article/download/15114/16560",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4a4a9b13302fc0933176ae54b7123267b03927b7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
239758867 | pes2o/s2orc | v3-fos-license | Long-term dependency slow feature analysis for dynamic process monitoring
Abstract: Industrial processes are large scale, highly complex systems. The complex flow of mass and energy, as well as the compensation effects of closed-loop control systems, causes significant cross-correlation and autocorrelation between process variables. To operate the process systems stably and efficiently, it is crucial to uncover the inherent characteristics of both the variance structure and the dynamic relationships. Long-term dependency slow feature analysis (LTSFA) is proposed in this paper to overcome the Markov assumption of the original slow feature analysis and understand the long-term dynamics of processes, based on which a monitoring procedure is designed. A simulation example and the Tennessee Eastman process benchmark are studied to show the performance of LTSFA. The proposed method can better extract the system dynamics and monitor the process variations using fewer slow features.
INTRODUCTION
Modern industrial processes are large scale, highly complex systems with many units and equipment.A large number of measurements are set to monitor process operation, and much process data is automatically collected.To operate the process systems stably and efficiently, developing process models based on process data to describe the process characteristics is crucial.The process data has two distinctive characteristics.The first is the serious collinearity caused by mass and energy balances and the compensation effects of the closed-loop control systems.Secondly, the process system always shows significant time-dependent characteristics.For instance, a transition process would occur while state switching due to the long settling time.Similarly, oscillations would occur after a disturbance introduced because of recycle streams, heat integrations, and other connections of materials, energy, and information (Shardt et al., 2012).
Multivariate statistical process monitoring (MSPM) approaches have been extensively studied to deal with the above two problems (Qin, 2012).The most widely used method is principal component analysis (PCA) (Gajjar, Kulahci, & Palazoglu, 2018;Wise, Ricker, Veltkamp, & Kowalski, 1990), which extracts uncorrelated lower dimensional latent variables to concisely describe the main variance structure of the process observation space.Independent component analysis (ICA) recovers the independent latent variables from complex process data by leveraging high order moment information (Kano, Tanaka, Hasebe, Hashimoto, & Ohno, 2003).For the description of time-dependent characteristics, dynamic PCA (DPCA) performs the standard PCA on the augmented data with time lags to extract process dynamics (Ku, Storer, & Georgakis, 1995).However, it fails to decouple the dynamics and static variance information.Slow feature analysis (SFA) is used to concurrently monitor the process by describing the stationary distribution and dynamic behaviours separately (Shang et al., 2015).However, assuming the standard Markov property, namely that the one-step time dependency is sufficient, limits the performance of the standard SFA and has to be improved in the same way as DPCA in practice.
Thus, this paper proposes a new long-term dependency slow feature analysis (LTSFA) method that can extract longer time dependencies for the design of a process monitoring method.This approach is validated using a simulated example and the Tennessee Eastman process.
LINEAR SLOW FEATURE ANALYSIS
Given an m-dimensional ergodic observation signal x(t) = [x_1(t) ⋯ x_m(t)]^T, SFA seeks to find a latent signal s(t) = [s_1(t) ⋯ s_q(t)]^T with the slowest variation, also named slow features (SFs), to describe its time-varying characteristics (Wiskott & Sejnowski, 2002), that is,

min_{g_j} Δ(s_j) = ⟨ṡ_j(t)²⟩_t    (1)
s.t. 1) ⟨s_j(t)⟩_t = 0;  2) ⟨s_j(t)²⟩_t = 1;  3) ⟨s_i(t) s_j(t)⟩_t = 0, i ≠ j,

where s_j(t) = g_j(x(t)), g_j(·) is the input-output function that needs to be found, ⟨·⟩_t is the temporal averaging operator, and ṡ(t) is the first-order derivative or difference with respect to time, which can be approximated for discrete time series as ṡ(t) ≈ s(t) − s(t − 1). Objective (1) uses the average squared temporal difference to define the signal slowness. Constraints 1) and 2) simplify the problem without loss of generality, and constraint 2) avoids the trivial solution s_j(t) = constant. Constraint 3) guarantees that the SFs are statistically uncorrelated so that they carry different information. It also implicitly implies that the extracted SFs are sorted by their slowness. In the linear case, the input-output function takes the form s_j(t) = w_j^T x(t), where w_j is a projection vector. For a mean-centred signal x(t), the objective function of SFA is equivalent to the following formulation with the same constraints, that is, maximizing w_j^T C̄(1) w_j subject to w_j^T C(0) w_j = 1, where C(τ) = ⟨x(t) x(t + τ)^T⟩_t and C̄(τ) = [C(τ) + C(τ)^T]/2. Using Lagrange multipliers, this optimization problem can be translated into the generalized eigenvalue decomposition (GED) problem, that is,

C̄(1) w_j = λ_j C(0) w_j,

where λ_j is the generalized eigenvalue of the matrix pair (C̄(1), C(0)), and C̄(τ) is the symmetric version of the τ-step autocovariance matrix of x(t).
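As a point of reference, the following is a minimal numerical sketch (not code from the paper) of the linear SFA solution via the generalized eigenvalue problem above; the data and dimensions are hypothetical.

```python
# A minimal sketch (not the paper's code) of linear SFA solved as the generalized
# eigenvalue problem Cbar(1) w = lambda C(0) w; the largest generalized eigenvalues
# (one-step autocorrelations) correspond to the slowest features.
import numpy as np
from scipy.linalg import eigh

def linear_sfa(X, n_features):
    """X: (N, m) data matrix whose rows are observations ordered in time."""
    X = X - X.mean(axis=0)                          # mean centring
    N = X.shape[0]
    C0 = (X.T @ X) / (N - 1)                        # C(0)
    C1 = (X[:-1].T @ X[1:]) / (N - 2)               # C(1), one-step autocovariance
    C1_bar = 0.5 * (C1 + C1.T)                      # symmetrised version
    eigvals, W = eigh(C1_bar, C0)                   # generalized EVD, ascending order
    idx = np.argsort(eigvals)[::-1][:n_features]    # slowest = largest autocorrelation
    W = W[:, idx]
    S = X @ W                                       # slow features, unit variance w.r.t. C(0)
    return S, W, eigvals[idx]

# Usage with hypothetical data: 1000 samples of a 5-dimensional signal.
rng = np.random.default_rng(0)
X = np.cumsum(rng.standard_normal((1000, 5)), axis=0)
S, W, autocorr = linear_sfa(X, n_features=2)
```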
LONG-TERM DEPENDENCY SFA
Optimization problem for long-term dependency SFA
From Equation (4), it can be concluded that minimizing the average squared temporal difference of SF is equivalent to maximizing its one-step lagged autocorrelation.However, in practical systems, one-step lagged autocorrelation is far from sufficient to describe the dynamics, especially manufacturing processes with typical long-term dependency.Hence, it is necessary to improve SFA to possess long-term dependency modelling ability.
From a predictability perspective, the long-term dependency can be represented as a regression of the current SF on its past values, where l is the maximum time delay. Thus, the objective of the long-term dependency SFA can be formulated in terms of this predictability, where τ is the time delay. The τ-step covariance matrix can then be written accordingly, and substituting Equation (11) into Equation (9) gives the optimization objective of LTSFA, where constraint 1) guarantees that the extracted SFs have unit variances to avoid the trivial solution. It is clear that LTSFA will reduce to SFA if the time delay is set to one.
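To make the lagged structure concrete, the following small helper (an assumption about the data layout, not the paper's exact construction) builds the time-lagged data matrices used when a feature at time t is related to values up to l steps in the past.

```python
# Hedged helper (not the paper's code) building lagged copies X_0, ..., X_l of a
# data matrix so that row j of X_0 corresponds to time t and row j of X_i to t - i.
import numpy as np

def lagged_matrices(X, l):
    """X: (N, m) data matrix ordered in time; returns [X_0, ..., X_l], each (N - l, m)."""
    N = X.shape[0]
    return [X[l - i: N - i] for i in range(l + 1)]

# Example: with l = 2, row j of X_0 is x(j + 2), of X_1 is x(j + 1), of X_2 is x(j).
X = np.arange(12).reshape(6, 2)
X0, X1, X2 = lagged_matrices(X, l=2)
```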
Analysis of the optimization formulation
Introducing Lagrange multipliers λ_p and λ_β, Equation (13) can be converted into a Lagrangian. Taking derivatives with respect to p and β, and setting them to zero, gives the stationarity conditions, where ⊗ is the Kronecker product and Γ_i is a selection matrix. Premultiplying Equations (15) and (16) by p^T and β^T respectively gives the relationship between the multipliers. From Equations (15) and (18), the optimal value J* of Equation (13) is the largest generalized eigenvalue of the associated matrix pair, and the optimal solution p* is the corresponding generalized eigenvector. However, p and β are coupled together and thus cannot be solved using analytical methods.
Iterative solution for one slow feature
Collecting the SFs at each time instant t, we can form the corresponding data matrices, where X_i is defined as in Equation (10). Then, Equation (15) can be rewritten in matrix form and solved using the Moore-Penrose inverse, and Equation (16) can be rewritten similarly. The iterative solution is: (1) Scale X to zero mean and unit variance.
(2) Initialize p with a random unit vector.
(3) Iteratively solve p and β until the objective function J converges to the optimal value. If the predefined stopping condition is fulfilled, the obtained p and β are the optimal solution corresponding to the SF.
Multiple slow feature extraction through matrix deflation
After the first SF is obtained, the next one can be extracted by applying Step (3) in Section 3.2.2 to the residual data excluding the previous SF information.The residual data can be obtained by matrix deflation where the loading vector = Xs q s s is the solution of Iteratively performing the above procedure can obtain all the k SFs in the descending order of slowness or predictability.The different SFs are orthogonal to each other, which is the same as in the standard SFA.
Model developed for historical slow feature extraction and prediction
After all the SFs have been extracted, the optimal projection vectors, loading vectors, and regression weights are collected. Let X(i) be the residual data matrix after the i-th deflation and s_i be the i-th SF; the observation space can then be divided using Equation (24) into the SF subspace and the residual subspace, where the k SFs of the historical data X form a set of orthogonal basis vectors of the SF subspace. Since the residuals belong to the nullspace of the projection matrix P, the historical SF model follows directly. Given the historical data, the one-step ahead prediction model can be obtained using a block diagonal matrix deduced from S, where S is the slow feature matrix of X, and the one-step ahead observation can then be predicted from the predicted SFs. In summary, the detailed procedure of the long-term dependency SFA algorithm is given in Table 1:
1. Scale X to zero mean and unit variance.
2. Initialize p with a random unit vector.
3. Iterate the solution steps until convergence to obtain the optimal parameter pair (p, β).
4. Perform matrix deflation and repeat Step 3 to extract the next SF until all k SFs have been obtained.
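To make the overall procedure concrete, the sketch below extracts a single slow feature in the spirit of Table 1. It is a schematic stand-in rather than the paper's exact iteration: for a candidate projection p, the feature s(t) = p^T x(t) is regressed on its own l past values, and p is chosen by a generic numerical optimizer to maximize the resulting fraction of explained variance, which is used here as an assumed predictability measure.

```python
# Schematic stand-in (not the exact updates of Table 1) for extracting one slow
# feature with long-term dependency: s(t) = p^T x(t) is regressed on its l past
# values and p is tuned by a general-purpose optimizer instead of closed-form steps.
import numpy as np
from scipy.optimize import minimize

def predictability(p, X, l):
    s = X @ p
    s = (s - s.mean()) / (s.std() + 1e-12)          # enforce (approximately) unit variance
    S_lag = np.column_stack([s[l - i - 1: len(s) - i - 1] for i in range(l)])
    target = s[l:]
    beta, *_ = np.linalg.lstsq(S_lag, target, rcond=None)   # regression weights
    resid = target - S_lag @ beta
    return 1.0 - resid.var() / target.var()          # fraction of variance explained

def extract_one_sf(X, l, seed=0):
    rng = np.random.default_rng(seed)
    p0 = rng.standard_normal(X.shape[1])
    p0 /= np.linalg.norm(p0)                         # random unit initialization
    res = minimize(lambda p: -predictability(p, X, l), p0, method="Nelder-Mead")
    return res.x / np.linalg.norm(res.x)
```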
Determining the hyperparameters
The model construction of LTSFA needs to determine two hyperparameters: the number of SFs k and the dynamic order l. In machine learning, the data set is usually split into three subsets: a training set, a validation set, and a test set. The model performance of each pair of hyperparameters is validated on the validation set after model training on the training set, in order to select the optimal hyperparameters with the best performance.
For LTSFA, after all k SFs have been extracted, the residual is expected to be white, and thus its autocorrelations and cross-correlations at nonzero lags will be approximately zero. The 95% confidence interval (CI) of the correlations (Shardt, 2015) will be used to verify model performance. The optimal hyperparameters are those with the fewest violations of the 95% CI.
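As an illustration of this criterion, the sketch below computes the sample autocorrelations of each residual channel and the fraction of values falling outside the approximate 95% confidence band of ±1.96/√N for a white sequence; cross-correlations between channels can be checked in the same way. The maximum lag of 20 mirrors the value used later in the simulation study.

```python
# Minimal sketch of the whiteness check: fraction of residual autocorrelations
# (nonzero lags) outside the approximate 95% CI +/- 1.96 / sqrt(N) for white noise.
import numpy as np

def ci_violation_rate(E, max_lag=20):
    """E: (N, m) residual matrix; returns the violation fraction over lags and channels."""
    N, m = E.shape
    bound = 1.96 / np.sqrt(N)
    violations, total = 0, 0
    for j in range(m):
        e = E[:, j] - E[:, j].mean()
        denom = np.dot(e, e)
        for lag in range(1, max_lag + 1):
            r = np.dot(e[lag:], e[:-lag]) / denom   # sample autocorrelation at this lag
            violations += int(abs(r) > bound)
            total += 1
    return violations / total
```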
Observation-space partition
The residual after performing LTSFA on the observations is essentially white. However, it still contains a static variance structure, which can be further explored using standard PCA. Hence, the observation space can be decoupled into a slow feature subspace, a static principal component subspace, and a residual subspace, described by the loading matrix and the score matrix of the principal components. The number of static principal components is determined by the cumulative percentage of variance (CPV) method (Valle, Li, & Qin, 1999).
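A minimal sketch of the CPV rule applied to the residual is given below; the 90% threshold is an assumed value, since the text does not state the one used.

```python
# Minimal sketch of choosing the number of static PCs by cumulative percentage of
# variance (CPV); the 90% threshold is an assumption, not a value from the paper.
import numpy as np

def n_pcs_by_cpv(E, threshold=0.90):
    """E: (N, m) residual matrix; returns the smallest number of PCs reaching the CPV threshold."""
    E = E - E.mean(axis=0)
    cov = np.cov(E, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending variances
    cpv = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cpv, threshold) + 1)
```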
Process monitoring indices design
After partitioning the space, the variations in the subspaces can be monitored to detect abnormal situations using Hotelling's T² and the squared prediction error (SPE). These indices monitor the stationary variations in the subspaces. Anomalies in the dynamic behaviour can be detected by using the innovation, whose covariance is characterized through its singular value decomposition.
Under the assumption of a Gaussian distribution, the control limits for the indices can be determined from the χ² distribution for a given significance level α (Chiang, Russell, & Braatz, 2000; Qin, 2003). If the indices of an observation exceed the control limits, then it is considered that a fault has occurred.
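The sketch below uses standard textbook forms of these quantities, which are assumptions consistent with the cited references rather than expressions taken verbatim from the paper: Hotelling's T² on unit-variance slow features with a χ² control limit, and an SPE limit based on the matched-moments g·χ²_h approximation.

```python
# Minimal sketch of the monitoring indices and control limits (standard forms,
# assumed here rather than copied from the paper's equations).
import numpy as np
from scipy.stats import chi2

def t2_index(s):
    """s: (k,) slow feature vector scaled to unit variance under the model."""
    return float(np.dot(s, s))

def t2_limit(k, alpha=0.05):
    return float(chi2.ppf(1 - alpha, df=k))

def spe_limit(spe_train, alpha=0.05):
    """Matched-moments g*chi^2_h approximation fitted to training-set SPE values."""
    mu, var = np.mean(spe_train), np.var(spe_train)
    g, h = var / (2 * mu), 2 * mu ** 2 / var
    return float(g * chi2.ppf(1 - alpha, df=h))
```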
Simulation example
A second-order autoregressive model driven by independent and identically distributed random processes is designed. Three thousand data samples are generated and split into training, validation, and test sets with 2000, 500, and 500 samples respectively. Using the method described above for determining the hyperparameters, they are selected as l = 2 and k = 3. The autocorrelations and cross-correlations of the original data x(t) and of the residual are shown in Figure 1 and Figure 2. It can be seen that the dynamics of the data, indicated by both the autocorrelation and the cross-correlation at nonzero delays, are filtered out by LTSFA. Figure 3 shows the residual correlations of the standard SFA. Assuming a maximum time delay of 20 samples, the percentage of points lying outside the 95% CI was 3.2% for the LTSFA SFs, but 40.4% for the standard SFA. The correlations of the latent variables and the innovation in Figure 4 show that LTSFA can also uncover the dynamics of the latent states, while the standard SFA does not give an explicit expression of latent state dependency.
The Tennessee Eastman process (TEP) is a widely used benchmark in process control (Downs & Vogel, 1993). The revised version is adopted in this study (Lyman & Georgakis, 1995). It contains 12 manipulated variables (XMV(1-12)) and 41 measurement variables, which consist of 22 process variables (XMEAS(1-22)) and 19 quality variables. In this paper, we use 11 manipulated variables (XMV(1-11)) and all 22 process variables. Five hundred normal samples collected at 3-min intervals are used to train the models. The hyperparameters of LTSFA are determined as l = 3 and k = 19. The dynamic SFA provided in the reference (Shang et al., 2015) is built to compare the monitoring results; its optimal hyperparameters are selected as 2 and 0.55. The monitoring results of LTSFA can give more detailed information on the process variations. For instance, Figure 5 shows that the variation of IDV(4) mainly occurs in the SF subspace and the residual subspace, while the static principal component subspace is less affected. Figure 6 shows that the anomaly caused by the high-frequency oscillation of IDV(14) can also be monitored in the SF subspace immediately. Furthermore, LTSFA only uses 19 SFs, while the dynamic SFA uses 46. Thus, the dynamic SFA may not be able to perform dimension reduction, especially in the case of large delays.
This paper presented a new model called LTSFA to overcome the Markov assumption of the original SFA and improve the long-term dynamics modelling ability. The objective of LTSFA is established based on a high-order regressive model, and an iterative algorithm is developed to solve it. In addition to the better dynamic modelling capability, LTSFA also gives an explicit expression of the long-term temporal dependency of the latent states. A process monitoring strategy is developed using this approach and is tested on a simulated example and the Tennessee Eastman process. It is shown that the proposed method can better extract the system dynamics and give more detailed process variations using fewer slow features than the standard SFA and the dynamic SFA methods. Further work will consider extending the results to nonlinear systems.
Figure 5: Monitoring results for IDV(4) using LTSFA (left) and the dynamic SFA (right)
Figure 6: Monitoring results for IDV(14)
Table 1: The algorithm for LTSFA | 2021-10-21T15:54:05.448Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "8bf9bdc162e4ae4f5f16244b85e09ee54627a2d9",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.ifacol.2021.08.278",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3913a269a83bc6de0d62e6ccefb788013ca1704f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
252578434 | pes2o/s2orc | v3-fos-license | Developing a DNA logic gate nanosensing platform for the detection of acetamiprid
This paper reports a novel fluorescence and colorimetric dual-signal-output DNA aptamer based sensor for the detection of acetamiprid residue. Acetamiprid is a new systemic broad-spectrum insecticide with high insecticidal efficiency that is widely used worldwide, but there is a risk of adverse neurological reactions in humans and animals. The dual-mode output principle designed in this paper, consisting of a fluorescence signal and colorimetric signal, is based on the relevant reaction of the special domain of a G-quadruplex, bidding farewell to a classical single-signal output, with a target-recognition cycle used to complete signal amplification through a hybridization chain reaction. Upgraded detection sensitivity and the qualitative and semi-quantitative detection of acetamiprid are achieved based on the fluorescence signal output and visual discrimination observations during colorimetric experiments. This model was applied to the determination of acetamiprid residue in fruits and vegetables. The dual-detection platform further reduced systematic error, with a detection limit of 27.7 pM. When applied in a comparative detection study using three different pesticides, the system shows excellent discrimination specificity and it performs well in actual sample detection and has a fast response time. Designing DNA logic gates that operate in the presence of targets and molecular-switch-based detection platforms also involves the intersection of biology and computational modeling, providing new ideas for biological platforms.
Introduction
Acetamiprid (Ace) is a representative neonicotinoid insecticide. It is often used to control herbivorous pests affecting fruits and vegetables, such as garlic, celery, tea, and strawberries. The control effect is more than 90%, it is compatible with the environment, and the shelf life is about 20 days.1 Acetamiprid has biologically toxic effects on the human body;2-4 it can enter food and human tissue during daily routines, stimulate neurotoxicity mediated by acetylcholine receptors, penetrate human peripheral blood lymphocytes, and cause DNA damage.5,6 Acetamiprid acts as an agonist of nicotinic acetylcholine receptors on neuronal postsynaptic membranes and inhibits normal conduction in the central nervous system.7,8 In addition to acute toxicity, acetamiprid has potential long-term and chronic toxic effects, including developmental neurotoxicity, genotoxicity, and other toxicities.9 Therefore, there is an urgent need to design an efficient, fast, and inexpensive detection method. For acetamiprid, common official methods for its determination are high-performance liquid chromatography (HPLC),10 liquid chromatography-mass spectrometry (LC-MS),11,12 HPLC-UV,13 and gas chromatography-mass spectrometry (GC-MS).14,15 It is well known that although these methods can achieve high sensitivities and low detection limits, they all have limitations, such as cumbersome pretreatment and time-consuming sample preparation, long detection times, and the need for expensive instruments, specialized equipment, and experienced laboratory personnel. In addition, electrochemical immunoassays and enzyme-linked immunosorbent assays (ELISA)16 rely on antibodies that have short shelf lives, are susceptible to interference, and are very expensive and time-consuming to produce, severely limiting their daily application.17,18 Therefore, there is an urgent need to develop a reliable, effective, and rapid method for the detection of acetamiprid in the environment and agricultural products19 to ensure food safety for human health. To improve on the above-mentioned conventional assays, we constructed a biosensor to achieve assay optimization, substantially reduce assay costs, and improve the ease of operation. The biosensor utilizes the high affinity between the target and an aptamer; the systematic evolution of ligands by exponential enrichment (SELEX) method20 screens ligands with high affinity and specificity from a random oligonucleotide library synthesized in vitro. The aptamer for the target pesticide acetamiprid in this study was selected using the SELEX method.21,22 Via a hybridization chain reaction (HCR),23 signal amplification of the biosensor is achieved; this is a process that does not require the involvement of proteases or DNA enzymes, as only a strand-hybridization process occurs. The reaction cannot occur at room temperature due to short-loop interference, and it requires primers to activate a stable strand for hybridization. In a HCR, once the target DNA is inserted, two synthetic DNA hairpins coexist in solution and hybridize into continuous linear DNA. HCR has distinct advantages, such as high amplification rates,24 controlled kinetics, and non-enzymatic operation.25 An enzyme-free labeling technique is preferred in this sensor because it avoids the expensive and complex process of fluorescent labeling and the side-reactions generated during labeling.
We used an enzyme-free labeling technique in our study, and special DNA structures (G-quadruplexes) were introduced using the HCR principle.26,27 A G-quadruplex is a specialized DNA structure consisting of a sequence of guanine-rich nucleic acids that has received significant attention due to growing evidence for its role in important biological processes and as a therapeutic target.28-31 A G-quadruplex is formed via the repetitive folding of a single polynucleotide molecule or of two or four molecules. The structure consists of stacked G-quartets, which are coplanar arrays of four guanine bases each.27 Metalloporphyrins are widely found in nature, and they are used by organisms as cofactors for various enzymes and other specific proteins. Metalloporphyrins are versatile and can be involved in oxygen transfer, electron transfer, and various redox chemical reactions, such as those related to catalase, peroxidase, and monooxygenase. N-Methylmesoporphyrin IX (NMM) is a water-soluble asymmetric porphyrin with excellent optical properties that is sensitive to G-quadruplexes; it has been studied in relation to their structural interactions,32-34 and NMM is particularly selective for G-quadruplexes compared to double-stranded DNA, so it can be used for fluorescence signal output in the presence of the appropriate ions, with an exponentially increased output signal.35,36 It has a wide range of applications in biology and chemistry and, therefore, NMM is known as a ligand for G-quadruplexes. The detection of the fluorescence signal in this study was done by exciting the sensor system in the presence of the target to produce G-quadruplexes, which leads to a fluorescence signal change upon interaction with NMM.
Colorimetric aptamer sensors are favored by researchers due to the remarkable advantages of easy operation, low cost, and easy observation with the naked eye.37 Aptamers can modulate the catalytic performance of DNAzymes, thereby allowing the visual detection of pesticide residues.38 3,3′,5,5′-Tetramethylbenzidine (TMB) is a substrate of a DNA enzyme catalytic system consisting of a guanine tetrameric DNA molecule and hemin,39-41 involving a class of catalytically active nucleic acids, a commonly studied deoxyribozyme. The deoxyribozyme consists of a guanine (G) quadruplex DNA sequence bound to hemin. Hemin itself is a potent cofactor with only low peroxidase-like activity on its own, and G-quadruplexes can bind to hemin and greatly enhance this activity.42 During the reaction with hydrogen peroxide catalysed by the G-quadruplex-hemin complex, TMB acts as a hydrogen donor, as it does with a protein peroxidase such as horseradish peroxidase, and the hydrogen peroxide is reduced to water. The oxidation of TMB gives a blue color to the solution due to the formation of 3,3′,5,5′-tetramethylbenzidine diimine.43 The design of DNA logic gates has allowed the outstanding use of DNA Boolean logic gates in the field of biosensors.44 This has shown great potential in the area of biosensing because logic analysis can be performed on multiple targets at the same time. The pattern change of a sensor biosignal is very similar to the input and output of a logic-based system.45 In this paper, we design "AND" gates and "OR" gates based on the input and output of different detection signals when the target is input into the logic system. DNA logic gates are promising multi-input analysis tools, laying a foundation for intelligent diagnosis, and new ideas will be provided for the development of traditional logic gates.
An enzyme-free, label-free biosensor was designed to achieve the detection of acetamiprid residue, using the fluorescent signal of G-quadruplexes/NMM, which was further amplified using a HCR. More importantly, we performed a series of computer simulations prior to the bioassay to simplify the subsequent experimental steps and eliminate some negative effects. In the presence of acetamiprid, the two designed DNA hairpins are triggered by the reaction of the trigger strand S, which sequentially opens H1 and H2 so that they self-assemble into an embedded G-quadruplex, thus significantly enhancing the fluorescence signal in the presence of NMM; colorimetric analysis can also be accomplished based on the reaction of the G-quadruplex with TMB, leading to color development, and both approaches give a signal output after the experiments are completed. Our method is sensitive, rapid, and selective, and it has practical application in detection processes, where it may open new avenues for the application of biosensors in food control and quality testing. By analyzing the signals of the biological gates, basic logical "OR" and "AND" gates are constructed to convert the input and output into logical signals, and the conversion of biological signals to digital information is realized.
Instrumentation
The fluorescence spectra of NMM were measured using a fluorescence scanning spectrometer with an excitation wavelength of 399 nm and an emission wavelength of 610 nm on a Bio-Tek H1 multifunctional microplate reader (Berton Instruments, Inc., USA). The TMB absorbance was measured at 650 nm using an OD scanning spectrometer on a BioTek FX multi-function microplate detector (BioTek, USA).
Experimental operation detection
Before experiments, a diluted solution containing H1 (1 mM) and H2 (1 mM) was heated at 95 °C for 10 min for a PCR reaction, and then slowly cooled to room temperature to form a hairpin-like structure. Also, Apt (1 mM) and S (1 mM) were mixed to form a double-stranded structure. Then, different concentrations of acetamiprid and Apt-S (100 nM) buffer were mixed and incubated at 37 °C for 30 min, and then H1 (350 nM) and H2 (350 nM) were added to the above reaction solution for self-assembly, with incubation at 37 °C for 90 min. After the post-addition process was completed, one half of the solution was added to NMM (1.5 mM) and incubated at 25 °C for a further 20 min, and the other half was added to hemin (10 mL) and further incubated at 25 °C for 30 min. Finally, 50 mL of the product was added to 450 mL of TMB/H2O2 and mixed for 15 min. The two different solutions were transferred to two 96-well plates, and the fluorescence intensities were detected and recorded using the above-mentioned instrument.
Principle of the detection mechanism
The biosensing detection response is based first on signal recognition in the system. When the target acetamiprid is absent from the system, the single strand of the aptamer is paired with the complementary bases of the trigger strand, so the trigger strand cannot be released into the reaction system (Fig. 1) and there are no follow-up effects. However, the appearance of the target molecule acetamiprid leads to spatial coupling with the special structure of the aptamer; a single strand is pulled out from the double-stranded structure in the form of the trigger chain S (green), and the trigger chain is released into the reaction system. When the trigger strand S enters the H1/H2 cycle, it first searches for a foothold (green segment) for complementary base pairing at the 5′ end of the hairpin H1, and it gradually displaces the "stem" of the initial H1 hairpin during the pairing process. In this process, the original 3′-end foothold of the H1 hairpin is opened and freely exposed (orange fragment), which removes the original H1 "signal", and the specially designed end-segment sequence then opens the 5′-end segment of the H2 chain, completing the dynamic opening of the two hairpins. When the H1-H2 double strand is formed, the originally partially paired trigger chain S is displaced, so that the S strand returns to the reaction system to stimulate a new round of hairpin opening and closing. Since guanine-rich bases were placed by design in the "stem" of the hairpins, H1 is not activated alone in the reaction system, and G-quadruplexes cannot be formed when only H2 is turned on; they form only when both hairpins are opened. At the same time, the base structures at the ends of the two strands can form a double-stranded structure with G-quadruplexes (red) at both ends. Two types of experimental data, fluorometric and absorbance, were measured using a multi-function microplate detector. The fluorescence data from the G-quadruplexes in the experimental reactions were measured at excitation and emission wavelengths of 399 and 610 nm and used for analysis; the absorbance of TMB/H2O2 oxidised with hemin was measured with an OD scanning spectrometer at 650 nm, and a visual qualitative judgement can be made based on the colour reaction together with the absorbance data (Table 1).
Sequence design and analysis
All sequences used in the experiments are designed, and the designed hairpin DNA structures are preliminarily predicted via NUPACK software simulations based on thermodynamic and concentration analysis. As can be seen from the data in Figs. 2 and 3, we used the hairpin structures and reaction conditions that may exist in the solution system to simulate concentrations and thermodynamics for a preliminary feasibility analysis. Both are simulated in an environment at 37 °C. The free energy of the secondary structure of the hairpin H1 is −10.58 kcal mol−1 and the free energy of the secondary structure of the hairpin H2 is −18.69 kcal mol−1. The hairpin DNA structures are relatively stable at 37 °C, but as the design relies on a stability difference to enforce the reaction sequence, we set the stability of H2 to be greater than that of H1, because H1, as an intermediate product, needs to combine with the shorter trigger chain S and open, while H2 needs to be activated only after H1 is fully opened. If it is not stable enough, the hairpin may open out of sequence, leading to the possibility of increased background fluorescence.
When H1 and H2 (Fig. 4) are placed in the simulated environment together, the H1-H2 secondary structure has a free energy of −43.58 kcal mol⁻¹, indicating very stable duplex binding; this favours displacement of the trigger strand S from the duplex after it has triggered the reaction, allowing S to re-enter and continue the cycle. These computer simulation results support the feasibility of the method.
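To make the stability argument concrete, the sketch below (not part of the original study) converts the reported secondary-structure free energies into two-state folding equilibrium constants at 37 °C and estimates the fraction of transiently open hairpins. The two-state assumption and the printed values are illustrative only and do not reproduce the full NUPACK concentration simulation.

```python
import math

R = 1.987e-3      # gas constant, kcal mol^-1 K^-1
T = 310.15        # 37 degrees C in kelvin

# Secondary-structure free energies reported from the NUPACK simulations
dG = {"H1": -10.58, "H2": -18.69}   # kcal mol^-1 (folded relative to unfolded)

for name, dg in dG.items():
    K_fold = math.exp(-dg / (R * T))        # two-state folding equilibrium constant
    frac_open = 1.0 / (1.0 + K_fold)        # fraction of hairpins transiently unfolded
    print(f"{name}: K_fold = {K_fold:.3e}, open fraction ~ {frac_open:.2e}")

# H2 (more negative dG) has a far smaller open fraction than H1, consistent with the
# design requirement that H2 stays closed until H1 has been opened by the trigger strand S.
```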
DNA logic gate construction
A biologic gate is an intelligent probe capable of responding to biological conditions with behaviour similar to that of a computer logic gate. The input information can relate to physical or biological characteristics, such as the response to laser or ultrasonic illumination or structural changes caused by changes in biological properties. The logic-gate output can be classified as the release of cargo (for example, release of a sequestered or inactive drug), a measurable change in state, a fluorescent signal, or nanoparticle aggregation. Because biological conditions involve many parameters, and therefore many possible inputs and desired outputs, any logic-gate development must first identify the target conditions or target information to be evaluated by logic manipulation. Therefore, although biologic gates perform logic operations at the molecular level similar to those of a computer, the input and output signals cannot simply be defined as conversions of electrical signals as in computer logic; the designer must specify in advance the type of input and the Boolean value ascribed to the output signal. In this paper, fluorescence or absorbance signals are used as outputs to construct the biologic gates. In binary logic operations the inputs are encoded as 1 and 0, assigned to the presence and absence of the target, respectively; on the output side, 1 denotes a high signal and 0 a low signal. [Fig. 2: the simulated structure diagram (left) and secondary-structure thermodynamic free-energy plot (right) of hairpin probe H1.]
In the design presented in this paper, the universal platform is based both on the fluorescence arising from binding of G-quadruplexes to NMM and on a colorimetric interaction involving G-quadruplexes and TMB/H2O2. To detect acetamiprid via a dual "AND" and "OR" fluorescence/colorimetric logic platform, the design uses the presence of the specific target, the DNA probes, and NMM/TMB as input signals to the logic gates and the detection results as output signals. The principle of the constructed dual "AND" biologic gate is shown in Fig. 5. The input signal of the logic platform is divided into three components: acetamiprid; Apt-S together with H1/H2; and NMM/hemin. The output signal is based on the fluorescence intensity or the TMB colorimetric output, with "0"/"1" as the signal result. The specifically designed aptamer is the basic building element of the logic-gate platform, and the logic output depends on its presence. When acetamiprid is missing from the system (input 1 is "0"), there is no obvious fluorescence signal. When Apt-S, H1, or H2 is missing (input 2 is "0"), or when NMM/hemin is missing (input 3 is "0"), the signal output is "0". Only when acetamiprid, Apt-S, H1, H2, and NMM/hemin enter the logic platform at the same time (inputs 1, 2, and 3 are all "1") is a significant fluorescence/colorimetric signal expressed, and the output is "1".
The goal of fluorescence/colorimetric detection of acetamiprid is achieved through the "OR" logic platform, which is designed around the specific signal molecule; the "OR" gate principle is shown in Fig. 6. The input signal of the logic platform is divided into two components, NMM and hemin; the output is based on the fluorescence intensity/TMB colorimetric response, with "0" and "1" as possible signal results. The specifically designed aptamer is the basic building block of the logic-gate platform, and the output of the logic structure is based on its presence. When hemin is missing from the system but NMM is present (input 1 is "1", input 2 is "0"), a distinct fluorescence signal appears and the output fluorescence signal is "1". When NMM is missing but hemin is present (input 1 is "0", input 2 is "1"), the output colorimetric signal is "1". When NMM and hemin are both missing (inputs 1 and 2 are both "0"), the output of the dual-signal platform is "0". When both NMM and hemin enter the logic platform (inputs 1 and 2 are both "1"), significant fluorescence and colorimetric signals are expressed, and the output is "1".
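The Boolean behaviour of the two gates can be summarised in a few lines of code; the sketch below simply enumerates the truth tables described above, with function names and the input encoding chosen for illustration rather than taken from the paper.

```python
def and_gate(acetamiprid: bool, dna_probes: bool, reporter: bool) -> int:
    """Three-input AND gate: output 1 only when the target, the Apt-S/H1/H2 probe set,
    and the NMM/hemin reporter are all present."""
    return int(acetamiprid and dna_probes and reporter)

def or_gate(nmm: bool, hemin: bool) -> int:
    """OR gate on the readout channel: fluorescence (NMM) or colorimetry (hemin)
    gives a high output; both absent gives 0."""
    return int(nmm or hemin)

# Enumerate the truth tables
for a in (0, 1):
    for p in (0, 1):
        for r in (0, 1):
            print(f"AND inputs ({a},{p},{r}) -> {and_gate(a, p, r)}")
for n in (0, 1):
    for h in (0, 1):
        print(f"OR inputs ({n},{h}) -> {or_gate(n, h)}")
```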
Biological experiments to verify its feasibility
Based on fluorescence intensity scans at 610 nm, the distinguishing characteristics of the five data sets obtained are readily apparent (Fig. 7). When only hairpin H1 or H2 is present in the reaction system (Fig. 7, purple and green curves, respectively), the fluorescence output is low, indicating that the hairpin structures are stable and that there is no obvious spontaneous formation of G-quadruplexes. When the two hairpins H1 and H2 and the Apt-S double-stranded structure are all present, the fluorescence intensity shows a small increase; when H1 and H2 coexist, weak interactions inevitably generate a small amount of hybridized H1/H2 double-stranded structure carrying G-quadruplexes at both ends, which may reflect the stability of the hairpins at this early stage and the reaction conditions. In simulations such events are rare and do not greatly affect the feasibility of the experiment. The principle of the Apt-S probe designed in this experiment is that, in the presence of the target acetamiprid, spatial coupling between the target and the special structure of the aptamer releases the trigger strand S, which then completes the subsequent H1/H2 cycle. In one experiment, the trigger strand S was introduced directly into the reaction system, allowing S to carry out the process described above without the spatial-coupling reaction between the target and the Apt-S duplex. The resulting fluorescence intensity signal is shown as the pink curve in Fig. 7; the strong fluorescence clearly indicates that the trigger strand S can complete the experimental path and accurately carry out the two-step opening of the hairpins to form a G-quadruplex structure, so that the catalytic hairpin self-assembly process is achieved. When the target acetamiprid enters the detection system, the trigger strand S should be released through spatial coupling between the target and the aptamer. The experiments show that the trigger strand S is indeed released through target recognition, and the fluorescence intensity data indicate that the reaction is feasible for detection of the target acetamiprid.
In addition to the fluorescence intensity output, we also used the chromogenic reaction produced by binding of the G-quadruplex and hemin as a qualitative, naked-eye method for visualizing acetamiprid (Fig. 8). Because the DNAzyme activity in the designed test tubes differs with concentration, the colors of the samples differ, and the presence of the target acetamiprid can be visualized (blue color). We designed 7 experimental test samples, labeled 1-7. No. 1 was an experimental control; No. 2 was the Apt-S double-stranded structure; No. 3 contained Apt-S and the target acetamiprid; No. 4 contained hairpins H1 and H2; No. 5 contained the Apt-S double-stranded structure and hairpins H1 and H2; No. 6 contained the Apt-S double-stranded structure, hairpins H1 and H2, and acetamiprid (50 nM); and No. 7 contained the Apt-S double-stranded structure, hairpins H1 and H2, and acetamiprid (100 nM). The absorbance data indicated the feasibility of this method for the detection of acetamiprid.
Condition optimization
The reaction was first optimized with respect to the K+ concentration, since K+ ions occupy the central channel of the G-quadruplex and thereby provide the ionic basis for NMM staining in the experiments; as shown in Fig. 9(A), the fluorescence output tends to increase as the ion concentration is raised. This assessment is based on the F − F0 fluorescence response data, with the SD obtained from three repeated experiments. Finally, a K+ concentration of 25 mM was used for experimental detection.
Subsequently, the concentrations of the hairpins (H1, H2) were optimized, as shown in Fig. 9(B). The hairpins are the core of the cyclic amplification model for biosensing. In the cascade reaction, a concentration that is too low hinders the opening and assembly of the double-hairpin structure, whereas a concentration that is too high introduces a small probability of spontaneous, weak opening of the hairpins, which raises the background noise. Therefore, 350 nM was ultimately selected as the optimum concentration through experimental analysis.
In addition, the hairpin reaction time is another parameter of the cyclic amplification that affects the self-assembly efficiency, as shown in Fig. 9(C). When the time is too short, the trigger strand S cannot be fully released, because target recognition and the double-strand displacement reactions remain incomplete, so triggering of hairpins H1 and H2 by the trigger strand S cannot be ensured; when the time is too long, the ratio F/F0 shows that the trigger efficiency declines while the time cost increases, which is not conducive to overall sensor performance. Therefore, a hairpin reaction time of 90 min was considered the optimal hairpin self-assembly time for the biosensor detection experiments.
The concentration of NMM and its reaction time with the G-quadruplex structures have a great influence on the fluorescence intensity. In these experiments, as shown in Fig. 9(D) and based on F − F0 data analysis, an NMM concentration of 0.75 mM (1.5 mL) gave the highest fluorescence intensity. Too low a concentration of NMM cannot report sufficient fluorescence from the G-quadruplexes generated during the reaction, whereas the higher the NMM concentration, the higher the background fluorescence signal. The NMM reaction time was also examined by time-gradient detection; its effect was slight, so 15 min was ultimately adopted as the reaction time for NMM.
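As a minimal illustration of the single-factor optimisation logic (selecting the condition that maximises the background-corrected signal F − F0), the following sketch uses invented response values; only the 25 mM K+ optimum reported above is taken from the text.

```python
import numpy as np

# Hypothetical single-factor screen: candidate K+ concentrations (mM) and the
# background-corrected responses F - F0 (mean of three replicates, illustrative only).
k_conc = np.array([5, 10, 15, 20, 25])
f_minus_f0 = np.array([120.0, 260.0, 410.0, 520.0, 580.0])

best = k_conc[np.argmax(f_minus_f0)]
print(f"Optimal K+ concentration by the F - F0 criterion: {best} mM")
```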
Sensitivity and specificity
Based on the optimal conditions obtained from the single-factor experiments above, we evaluated the sensitivity of the sensor by varying the concentration of acetamiprid and further analyzed the detection performance. As shown in Fig. 10(A), from the purple curve to the red curve, the concentrations of the acetamiprid samples to be tested were set to 0 nM, 5 nM, 10 nM, 15 nM, 20 nM, 25 nM, 30 nM, 50 nM, and 100 nM; increasing the concentration of acetamiprid resulted in a gradual increase in the fluorescence signal. On further observation, the fluorescence response of NMM at 610 nm was found to be linearly related to the logarithm of the acetamiprid concentration between 0 nM and 30 nM (Fig. 10(B)). The linear regression equation is y = 331.31x + 3543.1 (x and y are the acetamiprid concentration and fluorescence intensity, respectively), and the correlation coefficient R² is 0.9858. In addition, the acetamiprid limit of detection (LOD), calculated according to the formula 3s/S (where s is the standard deviation of the control solution and S is the slope of the linear regression equation), was 27.7 pM. When the acetamiprid concentration of the solution being tested falls outside the linear range (i.e., 27.7 pM to 30 nM), it is sufficient to dilute the actual sample with buffer into a concentration range that satisfies the linearity conditions, and the acetamiprid concentration can then be estimated quantitatively from the dilution factor. SD was calculated from three replicate experiments.
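The calibration arithmetic can be reproduced directly from the reported regression. In the sketch below, the blank standard deviation (3.06) is a back-calculated, illustrative value chosen so that 3s/S returns roughly the reported 27.7 pM LOD, and the signal used in the dilution example is hypothetical.

```python
# Calibration reported in the text: y = 331.31 x + 3543.1 (x in nM, y in fluorescence units)
slope, intercept = 331.31, 3543.1

def lod_3sigma(sd_blank: float, slope: float) -> float:
    """Limit of detection (same concentration units as the calibration x-axis)."""
    return 3.0 * sd_blank / slope

def concentration_from_signal(y: float, dilution_factor: float = 1.0) -> float:
    """Back-calculate acetamiprid concentration; multiply by the dilution factor if the
    sample was diluted into the linear range before measurement."""
    return (y - intercept) / slope * dilution_factor

# Illustrative blank standard deviation (not a value reported in the paper)
print(f"LOD = {lod_3sigma(3.06, slope):.4f} nM")            # ~0.028 nM, i.e. ~28 pM
print(f"C   = {concentration_from_signal(8000.0, 10):.1f} nM")
```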
The visual colorimetric method was then used to determine the sensitivity of the colorimetric sensor; different concentrations of acetamiprid were added, and the color change and detection performance of the biosensor were further analyzed. As shown in Fig. 11, from the purple curve to the red curve, the acetamiprid concentrations of the samples to be tested were 0 nM, 0.1 nM, 0.5 nM, 5 nM, 25 nM, 50 nM, 250 nM, and 500 nM. Increasing the acetamiprid concentration enhanced the absorbance signal. Color-gradient analysis revealed that the color contrast of the biosensor became significant as the concentration of acetamiprid increased, indicating that colorimetric visual detection could be achieved with the biosensor.
The key to visual absorbance-based detection lies in naked-eye identifiability, so a larger concentration range and gradient were chosen when selecting the test sample concentrations. This facilitates the qualitative and semi-quantitative rapid detection of acetamiprid, which can be used as an auxiliary means alongside accurate fluorescence detection. Fig. 12 shows the color changes of the chromogenic TMB reaction in response to the different selected concentrations; the target concentration can be roughly distinguished within this range based on the color change.
To examine the effect of the color-vision bias of the naked human eye on reading the TMB colorimetric response at different concentrations, we selected five standard concentrations of acetamiprid samples for naked-eye comparison, as shown in Fig. 13 (panel E corresponds to 0 nM). Observers could divide the acetamiprid samples into five color gradients and judge by eye which concentration a sample solution is closest to, without absorbance measurement on a microplate reader; this allows the semi-quantitative detection of acetamiprid by visualization. This method does not require an instrument to distinguish the different concentration gradients, it is easy and fast to carry out, and it can be used as an auxiliary detection method.
To verify the specificity of the established sensor for acetamiprid (Ace), the sensor was also tested under the same conditions in the presence of other comparable pesticides, chlorpyrifos (Cla) and imidacloprid (Imi). Imidacloprid and chlorpyrifos, which are structurally comparable to acetamiprid (Fig. 14), may cause severe interference. 46 We prepared five sets of samples for the assays. As shown in Fig. 15, the concentrations of Cla (2) and Imi (3) in the experiments were 100 nM and the concentration of acetamiprid (1) was 10 nM. It is evident from the fluorescence measurements that, even when the other insecticides were present at concentrations 10 times higher than that of acetamiprid, only small changes in fluorescence intensity were observed, and the intensities were not substantially different from the blank control group (5). The presence of acetamiprid induced a significant increase in fluorescence intensity. In addition, the fluorescence signal from a mixture of acetamiprid and the other insecticides (4) was comparable to that of the acetamiprid group. All of the above results imply that the sensing method has substantial specificity for acetamiprid detection.
Application in practical samples
To assess the applicability of the sensor to real samples, we selected crops commonly exposed to acetamiprid; cucumber and cabbage were used to test the recovery rate of the biosensor platform, as shown in Table 2. The recovery is the ratio of the result obtained after adding a known amount of standard to a sample matrix and analyzing it through the sample-processing steps to the amount of standard added; recovery is an indicator of the stability of a sensor. In this work, samples spiked with known concentrations of acetamiprid were subjected to recovery testing and, to ensure experimental stability, the final recovery rate was calculated as the average of three measurements. Both vegetables were purchased from local farmers' markets and were not otherwise processed. 1 g of cucumber or cabbage was sonicated in 10 mL of buffer in an ultrasonic disrupter for 5 min, and the supernatant was collected after centrifugation for 10 min. During detection, 10 nM or 50 nM acetamiprid was added to the cucumber and cabbage samples and the results were measured. The recovery rates were between 98.7% and 103.6%, and the RSD (relative standard deviation) values were between 0.89% and 3.2%. These results show that the biosensor has good accuracy in sample detection.
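A short sketch of the recovery and RSD calculations used for the spiked-sample tests is given below; the triplicate readings are invented for illustration and are not the values behind Table 2.

```python
import statistics

def recovery_and_rsd(measured: list[float], spiked: float) -> tuple[float, float]:
    """Mean recovery (%) and relative standard deviation (%) for a spiked sample
    measured in replicate."""
    mean = statistics.mean(measured)
    recovery = 100.0 * mean / spiked
    rsd = 100.0 * statistics.stdev(measured) / mean
    return recovery, rsd

# Illustrative triplicate readings for a cucumber extract spiked with 10 nM acetamiprid
rec, rsd = recovery_and_rsd([9.9, 10.2, 10.1], spiked=10.0)
print(f"Recovery = {rec:.1f}%, RSD = {rsd:.2f}%")
```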
This method was compared with common acetamiprid detection methods published in the literature, as shown in Table 3. For comparison with the biosensor platform designed in this paper, differences in detection limits were analyzed and the types of samples detected were compared. In this study, owing to the strand-displacement cyclic amplification design based on a double-hairpin self-assembly model, the detection limit for the target is reduced about 6-fold and the detection range is wider than those of previously reported methods without cyclic amplification. The method therefore recognizes trace levels of acetamiprid more readily and has certain advantages over other traditional detection methods. The test samples used here are among the most common fruits and vegetables, which are also the most likely sources of acetamiprid residues absorbed by the human body.
Conclusions
In this paper, we designed a novel sensor for the neonicotinoid insecticide acetamiprid, which features dual fluorescence and colorimetric detection signal outputs and is based on a dual-hairpin cascade amplification reaction. This detection method differs from single-signal detection in that naked-eye and instrument-based detection complement each other. The sensitivity of the aptamer biosensor, designed on the basis of G-quadruplexes and a hybrid chain reaction, was studied under the optimal test conditions; the linear regression equation can be expressed as y = 331.31x + 3543.1 (x and y are the Ace concentration and fluorescence intensity, respectively) with a correlation coefficient R² of 0.9858. In addition, according to the formula 3s/S (where s is the standard deviation of the control solution and S is the slope of the linear regression equation), the limit of detection (LOD) for acetamiprid was calculated to be 27.7 pM. Furthermore, the colorimetric absorbance output enables qualitative target determination without depending on instruments. This sensing platform is a low-cost, flexible platform that can satisfy the demands of future practical applications. Genomics technology could be utilized to amplify and increase the detection signal, and the platform could be used to detect other targets by modifying the aptamer-target pair. The sophisticated design of this biosensor means it has potential uses in chemistry, biomedicine, and environmental monitoring.
Conflicts of interest
There are no conflicts to declare. | 2022-09-29T15:17:34.770Z | 2022-09-22T00:00:00.000 | {
"year": 2022,
"sha1": "31c5de5dba74e59c1374056407bb54b9dfd42327",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/ra/d2ra04794b",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "160c4d14327ef3a36311c086325b367539b1905c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9264838 | pes2o/s2orc | v3-fos-license | Qualitative and Quantitative Performance of 18F-FDG-PET/MRI versus 18F-FDG-PET/CT in Patients with Head and Neck Cancer
BACKGROUND AND PURPOSE: MR imaging and PET/CT are integrated in the work-up of head and neck cancer patients. The hybrid imaging technology 18F-FDG-PET/MR imaging combining morphological and functional information might be attractive in this patient population. The aim of the study was to compare whole-body 18F-FDG-PET/MR imaging and 18F-FDG-PET/CT in patients with head and neck cancer, both qualitatively in terms of lymph node and distant metastases detection and quantitatively in terms of standardized uptake values measured in 18F-FDG-avid lesions. MATERIALS AND METHODS: Fourteen patients with head and neck cancer underwent both whole-body PET/CT and PET/MR imaging after a single injection of 18F-FDG. Two groups of readers counted the number of lesions on PET/CT and PET/MR imaging scans. A consensus reading was performed in those cases in which the groups disagreed. Quantitative standardized uptake value measurements were performed by placing spheric ROIs over the lesions in 3 different planes. Weighted and unweighted κ statistics, correlation analysis, and the Wilcoxon signed rank test were used for statistical analysis. RESULTS: κ statistics for the number of head and neck lesions counted (pooled across regions) revealed interreader agreement between groups 1 and 2 of 0.47 and 0.56, respectively. Intrareader agreement was 0.67 and 0.63. The consensus reading provided an intrareader agreement of 0.63. For the presence or absence of metastasis, interreader agreement was 0.85 and 0.70. The consensus reading provided an intrareader agreement of 0.72. The correlations between the maximum standardized uptake value in 18F-FDG-PET/MR imaging and 18F-FDG-PET/CT for primary tumors and lymph node and metastatic lesions were very high (Spearman r = 1.00, 0.93, and 0.92, respectively). CONCLUSIONS: In patients with head and neck cancer, 18F-FDG-PET/MR imaging and 18F-FDG-PET/CT provide comparable results in the detection of lymph node and distant metastases. Standardized uptake values derived from 18F-FDG-PET/MR imaging can be used reliably in this patient population.
protocols are established, which include chemotherapy, radiation therapy, and surgical resection. 4 Accurate staging and re-staging are essential for selection of the appropriate treatment approach in individual patients.
Human whole-body combined or hybrid PET/MR imaging has been recently introduced in the clinical arena. The rationale for PET/MR imaging in head and neck cancer is based on the role of MR imaging and PET/CT in the work-up of this patient population. In head and neck cancer, MR imaging is an attractive technique due to its superior soft-tissue contrast and lower occurrence of metallic dental implant artifacts. 5 PET/CT, however, is capable of depicting regional adenopathy, small tumors, and distant metastases in a whole-body approach. PET/CT is an established technique for staging, re-staging, and treatment-response assessment in head and neck cancer. 6,7 PET/MR imaging, however, might also be attractive for head and neck cancer imaging because it combines the superior soft-tissue and functional information of MR imaging with the ability of PET to deliver information about metabolism on a cellular level. 8,9 Uptake of PET tracer can be visualized and quantified by using standardized uptake values (SUVs). Early comparisons of SUVs between PET/MR imaging and PET/CT from our group and others in healthy tissue of an oncology patient population have shown good agreement. 10,11 The logical next step is to analyze correlations of SUV in a variety of pathologic conditions in order to use the values derived from PET/MR imaging data reliably in patients for quantification of studies.
The purpose of this prospective study was to systematically analyze lymph node and distant metastases detection qualitatively on whole-body 18F-FDG-PET/MR imaging versus 18F-FDG-PET/CT in patients with head and neck cancer. A further aim of this study was to compare SUVs of 18F-FDG-avid lesions as a quantitative measure of tracer uptake in whole-body 18F-FDG-PET/MR imaging versus 18F-FDG-PET/CT.
Patient Population
The Health Insurance Portability and Accountability Act-compliant study was approved by the local ethics committee. All patients gave written informed consent before enrollment in the study. Fourteen patients with head and neck cancer (13 men; mean age, 54.7 ± 8.2 years) were included in the study. The baseline characteristics of the patients are listed in Table 1. The patients were referred for a clinically indicated 18F-FDG-PET/CT and then underwent 18F-FDG-PET/MR imaging. Two of the 14 patients underwent PET/MR imaging first followed by PET/CT. PET/CT was performed 63 ± 6 minutes and PET/MR imaging was conducted 100 ± 34 minutes after 18F-FDG injection. The mean injected dose of 18F-FDG was 11.7 ± 1.4 mCi.
Only patients with head and neck cancer 18 years or older who were referred by their physicians for clinical PET/CT were eligible for the study. All consecutive patients who gave written informed consent were included in the analysis. Exclusion criteria were as follows: 1) patients who were not able to give informed consent (cognitive impairment), 2) pregnant women, 3) implanted metallic or electronic devices, 4) hip or other joint replacements, and 5) a history of kidney disease, with high creatinine levels or a low glomerular filtration rate.
PET/CT Studies
The PET/CT studies were performed, as previously described, according to a standard clinical protocol. 12 In short, a large-bore PET/CT with time-of-flight technology was used (Gemini TF PET/CT; Philips Healthcare, Best, the Netherlands). PET/CT images were acquired in 9-10 bed positions, with 90-120 seconds per bed position. For attenuation-correction and anatomic localization purposes, a low-dose CT protocol without contrast administration was acquired (parameters: 120 kV; 100 mAs; dose modulation; pitch 0.813; slice thickness 5 mm). The overall imaging time for PET/CT was 17 ± 2 minutes. 18F-FDG-PET/MR imaging was performed on a combined current-generation time-of-flight PET and a 3T MR imaging system (Ingenuity TF PET/MR; Philips Healthcare). PET/MR images were acquired in 9-11 bed positions, with 120-150 seconds per bed position to compensate for radiotracer decay in those cases in which PET/MR imaging was performed after PET/CT. The MR imaging component was performed with an integrated radiofrequency coil and a multistation protocol. In the protocol, the slab size was 6 cm and the maximum FOV was 46 cm. For attenuation correction, a whole-body free-breathing 3D T1-weighted spoiled gradient-echo sequence (sequence parameters: TE, 2.3 ms; TR, 4 ms; 10° flip angle) was acquired. The automatic attenuation-correction procedure was performed according to a previously published method, leading to a 3-segment model with differentiation of air, lung, and soft tissue. 13 The overall time for PET/MR imaging was 29 ± 31 minutes.
Data Analysis
Qualitative reading and quantitative measurements were performed with commercially available software (MIM, Version 5.2; MIM Software, Cleveland, Ohio).
For the qualitative reading, a 2-step approach was used. Initially, 2 independent groups of readers (1 radiologist and 1 nuclear medicine physician per group) read the 28 examinations (14 PET/CT and 14 PET/MR imaging) in a blinded and randomized fashion. After an initial assessment on interreader agreement, a consensus reading was performed in those cases in which the 2 groups disagreed. This second step was also performed in a randomized and blinded fashion and included 18 examinations, corresponding to 9 patients. Clinical follow-up based on patient charts confirmed the presence or absence of head and neck cancer. Quantitative analysis of the detected lesions was performed by an experienced board-certified nuclear medicine physician (J.L.V.-C.) through the placement of spheric ROIs over the lesions in 3 different planes (axial, sagittal, and coronal). Maximum SUVs of the lesions, both in PET/CT and PET/MR imaging, were recorded.
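As an illustration of how a maximum SUV might be extracted from a spherical ROI on a reconstructed SUV volume, the following sketch (not the MIM software workflow used in the study) applies a voxel-based sphere mask to a toy 3-D array; the array size, the isotropic-voxel assumption, and the example uptake value are all hypothetical.

```python
import numpy as np

def suv_max_in_sphere(suv: np.ndarray, center_vox, radius_vox: float) -> float:
    """Maximum SUV inside a spherical ROI placed on a 3-D SUV volume.
    `center_vox` is the (z, y, x) voxel index of the sphere centre and
    `radius_vox` the radius in voxels (isotropic voxels assumed)."""
    zz, yy, xx = np.indices(suv.shape)
    cz, cy, cx = center_vox
    mask = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_vox ** 2
    return float(suv[mask].max())

# Toy example: a 20^3 volume of background uptake with one hot voxel near the ROI centre
volume = np.random.uniform(0.5, 1.5, size=(20, 20, 20))
volume[10, 10, 10] = 8.4           # simulated lesion uptake
print(suv_max_in_sphere(volume, center_vox=(10, 10, 10), radius_vox=3))
```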
Statistical Analysis
Unless otherwise specified, continuous data are reported as mean ± SD. For the first reading, inter- and intrareader agreement was assessed for the number of lesions by using weighted κ statistics. For the first reading, inter- and intrareader agreement was also assessed for the presence or absence of distant metastases (metastases yes or no) by using unweighted κ statistics. Interreader agreement for PET/CT and PET/MR imaging was evaluated between groups 1 and 2. Intrareader agreements for groups 1 and 2 were evaluated between PET/MR imaging and PET/CT. For the second reading, intrareader agreement for the number of lesions was assessed between PET/MR imaging and PET/CT by using weighted κ statistics. In this reading, the number of lesions ranged from 0 to 9, and they were categorized as 0, 1, 2, 3, 4, 5 or more. For the second reading, intrareader agreement for the presence or absence of distant metastases (metastases yes or no) was evaluated as well between PET/MR imaging and PET/CT. For the assessment of maximum standardized uptake value (SUVmax), a clinical (primary tumor, lymph node, and distant metastases) and a localization (for distant metastases: bone, liver, and lung lesions) based assessment were performed. Spearman correlation coefficients between SUVmax in PET/MR imaging and PET/CT were calculated. Differences between SUVmax in PET/MR imaging and PET/CT were tested by using the Wilcoxon signed rank test. Interpretation of the Spearman correlation coefficients was performed per established standards as previously described 10 : a value ≤0.35 represented a weak correlation; a value between 0.36 and 0.67 represented a moderate correlation; a value between 0.68 and 1.0 represented a high correlation, where a value of ≥0.90 represented a very high correlation. For differences in the Wilcoxon signed rank test, P < .05 was considered significant.
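For readers who wish to reproduce this style of analysis, the sketch below shows how a Spearman correlation, a Wilcoxon signed rank test, and a weighted κ could be computed with SciPy and scikit-learn; the paired SUVmax values and lesion counts are invented, and the linear κ weighting is an assumption rather than the weighting scheme specified in the paper.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon
from sklearn.metrics import cohen_kappa_score

# Illustrative paired SUVmax values (not the study data)
suv_petct = np.array([4.1, 6.8, 3.2, 9.5, 5.7, 7.3])
suv_petmr = np.array([4.6, 7.4, 3.5, 10.2, 6.1, 8.0])

rho, p_rho = spearmanr(suv_petct, suv_petmr)
stat, p_wil = wilcoxon(suv_petct, suv_petmr)
print(f"Spearman r = {rho:.2f} (P = {p_rho:.3f}); Wilcoxon P = {p_wil:.3f}")

# Weighted kappa for ordinal lesion counts (categories 0, 1, 2, 3, 4, >=5) between two readings
counts_reading1 = [0, 2, 1, 5, 3, 0, 4]
counts_reading2 = [0, 2, 2, 5, 3, 1, 4]
kappa = cohen_kappa_score(counts_reading1, counts_reading2, weights="linear")
print(f"Weighted kappa = {kappa:.2f}")
```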
Agreement on the Presence or Absence of Metastases. For the first reading, interreader agreement between groups 1 and 2 was almost perfect for PET/MR imaging and substantial for PET/CT: unweighted κ values were 0.85 (95% CI, 0.56-1.00) and 0.70 (95% CI, 0.33-1.00) for PET/MR imaging and PET/CT, respectively. For the first reading, intrareader agreement between PET/MR imaging and PET/CT was substantial for group 1 and almost perfect for group 2: unweighted κ values were 0.69 (95% CI, 0.30-1.00) and 0.84 (95% CI, 0.55-1.00) for groups 1 and 2, respectively.
For the second reading, intrareader agreement between PET/MR imaging and PET/CT was substantial: the unweighted κ value was 0.72 (95% CI, 0.38-1.00).
Quantitative Analysis
The statistical analysis of the clinical-based assessment of the SUVmax values is shown in Table 2. The scatterplot for visualization of the SUVmax correlation of 18F-FDG-avid lymph node lesions is shown in Fig 2. Because there were only 4 primary tumors, a scatterplot for the primary tumors was not created. Very high correlations between SUVmax in PET/MR imaging and PET/CT could be found for primary tumors and lymph node and metastatic lesions (Spearman r = 1.00, 0.93, and 0.92, respectively). The absolute values were higher in PET/MR imaging compared with PET/CT for primary tumor and lymph node and metastatic lesions. This increase in SUVmax for PET/MR imaging versus PET/CT was statistically significant for lymph node and metastatic lesions (P < .0001 for both), whereas the difference did not reach the significance level for the primary tumor lesions (P = .875). The SUVmax values are strongly based on the attenuation-correction maps, and the attenuation-correction maps are dependent on the anatomic location. The primary tumor and the lymph node lesions are both in the head and neck region. Because the metastases are in different anatomic areas, however, a location-based SUVmax analysis was performed, and the results are demonstrated in Table 3. The scatterplots for visualization of SUVmax correlations of 18F-FDG-avid metastatic lesions in the lung, liver, and bone are shown in Fig 2. High correlations between SUVmax in PET/MR imaging and PET/CT could be found for bone and liver lesions (Spearman r = 0.89 and 0.88, respectively), whereas a very high correlation was found for lung lesions (Spearman r = 0.93). The absolute SUVmax values were higher in PET/MR imaging compared with PET/CT for bone, liver, and lung lesions. These increases in SUVmax for PET/MR imaging versus PET/CT were statistically significant for bone and lung lesions (P = .034 and P = .0006, respectively), whereas the difference did not reach the significance level for the liver lesions (P = .098).
DISCUSSION
With a dedicated reading session, including radiology and nuclear medicine physicians, detection of lymph node and distant metastases by using whole-body 18 F-FDG-PET/MR imaging in patients with head and neck cancer was equal to PET/CT. Regarding the lesion-based analysis, the intrareader agreement was substantial for the first and second readings, and the inter-reader agreement (first reading) was moderate. Regarding the analysis of the presence or absence of metastatic disease, the intrareader agreement for the first and second readings was substantial to almost perfect, and the interreader agreement (first reading) was almost perfect. Very high correlations between SUV max from 18 F-FDG-avid lesions in PET/MR imaging versus PET/CT were found. In the quantitative analysis part, we evaluated SUV correlation per type of lesion (primary tumor or lymph node or distant metastases) and per anatomic localization. As mentioned previously, it is important to conduct a localization-based analysis due to the crucial role of attenuation correction and its dependency on the anatomic region. In both analyses, very high correlations were demonstrated, thus allowing the reliable use of the values in patients with head and neck cancer for quantification of tracer uptake and lesion assessment.
Furthermore, the high number of distant metastases detected demonstrates the crucial role of a whole-body approach for staging and re-staging in this patient population.
PET/MR imaging is promising for head and neck cancer imaging because these tumors frequently require both PET/CT and MR imaging during patient work-up. Evidence from this study and from recently published studies reveals head and neck cancer as one of the future indications for PET/MR imaging. 15 In one of the initial human feasibility studies with 8 patients with head and neck cancer, the simultaneous PET/MR imaging brain prototype was used after standard-of-care PET/CT imaging. 16 For attenuation correction, a previously published combined atlas registration and pattern-recognition approach was applied. 17 The authors reported an excellent image quality with minor streak artifacts in the PET datasets of PET/MR imaging not affecting the assessment of the malignancy. When comparing the tracer uptake of the tumor in PET/MR imaging versus PET/CT, a higher 18F-FDG uptake was detected in PET/MR imaging. A high correlation coefficient was found for the mean and maximum metabolic ratios of the tumors between both imaging technologies. 16 Our study differs from this early PET/MR imaging feasibility investigation using the brain prototype. This prototype had a limited craniocaudal FOV of 19 cm, hence enabling visualization only to the level II neck lymph node regions. 16 We used a whole-body approach, which is of clinical relevance in patients with head and neck cancer. According to a screening study in preoperative patients with head and neck cancer, 17% had distant metastatic disease. 18 A whole-body PET/MR imaging approach also helps in visualizing second primary malignancies, which is important in this patient population. 19,20 A recently published feasibility study using 18F-FDG-PET/MR imaging for initial staging of 20 patients having head and neck squamous cell carcinoma confirmed the possibility of applying the hybrid imaging technology to this patient population without compromising image quality. 21 18F-FDG-PET/MR imaging was compared, in this study, with a traditional 18F-FDG-PET scanner. Malignancies could be detected with PET/MR imaging, PET alone, and MR imaging alone in 17, 16, and 14 of the 20 cases investigated. Significantly more 18F-FDG-avid lymph nodes were detected in PET/MR imaging versus PET. Furthermore, the SUVs in tumor and the cerebellum were significantly higher when comparing PET/MR imaging versus stand-alone PET. 21 This observation is in accordance with our study, in which the absolute SUVmax values were higher in PET/MR imaging versus PET/CT as well. In comparison with their study, the current study compared whole-body 18F-FDG-PET/MR imaging versus 18F-FDG-PET/CT instead of a stand-alone PET. We included, in this study, the 2 most common histopathologic types of head and neck cancer, namely squamous cell carcinoma and adenocarcinoma.
Absolute SUVmax values were higher in PET/MR imaging compared with PET/CT. The higher SUVmax reached statistical significance in the lymph node and metastatic lesions. In primary tumors, the SUV difference was minimal, not reaching statistical significance, but this can be explained by the small number of primary lesions (n = 4) and thus low statistical power. When analyzing the metastatic lesions per region, again this study found higher SUVmax in PET/MR imaging versus PET/CT, reaching statistical significance for bone and lung but not liver lesions. We explain the results by the measurement of PET/MR imaging after PET/CT in most patients (12 of 14) leading to an increased tracer uptake with time. However, different attenuation-correction techniques may also contribute to these findings. With regard to the SUV analysis, it is crucial to do correlative studies between PET/MR imaging and PET/CT. In the previously published study of our group comparing SUVs from 18F-FDG-PET/MR imaging with 18F-FDG-PET/CT in healthy tissue, high correlations for SUVmax and SUVmean were found. 10 In certain tissues, the absolute SUVmax and SUVmean differed significantly, but when a good correlation was shown between PET/MR imaging and PET/CT as the standard method, the values from PET/MR imaging could be used reliably in the clinical settings for follow-up comparisons. Attenuation-correction solutions based on MR imaging data are one of the major challenges for PET/MR imaging. 22 Attenuation correction has an impact on both SUVmax and SUVmean.
According to our experience with PET/MR imaging, workflow and associated time constraints frequently pose a challenge in PET/MR imaging. In this study, the PET/CT examination lasted 17 ± 2 minutes and the PET/MR imaging study took 29 ± 31 minutes. This study and others demonstrate that the scanning time in PET/MR imaging is slightly longer compared with PET/CT. 21 This may change in the future when a full diagnostic MR imaging protocol is integrated into the PET/MR imaging workflow.
The study has limitations. First, only 14 patients with head and neck cancer were enrolled in this prospective study. Second, we did not have a uniform diagnostic MR imaging protocol. Besides the whole-body T1-weighted sequence for attenuation correction, we acquired, in 12 of the 14 patients, extra MR images, but the protocol was not unified. Third, the first 12 of the 14 patients with head and neck cancer underwent PET/CT, followed by PET/MR imaging. Only the last 2 patients had PET/MR imaging first. A randomization approach during the entire study would have been better from a scientific perspective, but when we started the clinical PET/MR imaging program, we decided, due to ethical reasons, to give the clinically referred PET/CT examination priority, followed by the PET/MR imaging study.
CONCLUSIONS
The detection of adenopathy and distant metastatic disease is reliable with whole-body 18F-FDG-PET/MR imaging compared with state-of-the-art 18F-FDG-PET/CT imaging in head and neck cancer. This is reflected by the moderate-to-substantial inter- and intrareader agreements with regard to head and neck lesion detection (pooled across regions) and by the substantial to almost perfect inter- and intrareader agreements with regard to detection of the presence or absence of metastatic disease. Very high correlations between SUVs in PET/MR imaging versus PET/CT of 18F-FDG-avid lesions were found. | 2018-04-03T00:05:57.255Z | 2014-10-01T00:00:00.000 | {
"year": 2014,
"sha1": "d6fafd15a4ee76cbc69ff40751db76082b28e931",
"oa_license": "CCBY",
"oa_url": "http://www.ajnr.org/content/ajnr/35/10/1970.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "4e1492ca965655d029f6e14ea90ae3a4290f1925",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3706632 | pes2o/s2orc | v3-fos-license | Postnatal development of the small intestinal mucosa drives age-dependent, regio-selective susceptibility to Escherichia coli K1 infection
The strong age dependency of neonatal systemic infection with Escherichia coli K1 can be replicated in the neonatal rat. Gastrointestinal (GI) colonization of two-day-old (P2) rats leads to invasion of the blood within 48 h of initiation of colonization; pups become progressively less susceptible to infection over the P2-P9 period. We show that, in animals colonized at P2 but not at P9, E. coli K1 bacteria gain access to the enterocyte surface in the mid-region of the small intestine and translocate through the epithelial cell monolayer by an intracellular pathway to the submucosa. In this region of the GI tract, the protective mucus barrier is poorly developed but matures to full thickness over P2-P9, coincident with the development of resistance to invasion. At P9, E. coli K1 bacteria are physically separated from villi by the mucus layer and their numbers controlled by mucus-embedded antimicrobial peptides, preventing invasion of host tissues.
In the small intestine, mucus forms a single porous layer and contains an array of antimicrobial peptides secreted by crypt Paneth and epithelial cells that further enhance its defensive capacity. Colonic mucus is thicker and consists of an inner layer impenetrable to bacteria and an outer layer in which commensal bacteria reside 14 .
Although genes encoding mucus glycoproteins are expressed in embryonic and foetal intestine 15 , GI tissues undergo significant postpartum development. In the neonatal rat, substantial changes in developmental gene expression occur between two (P2) and nine (P9) days postpartum 16 , including upregulation of genes encoding antimicrobial α-defensin peptides: following E. coli K1 GI colonization, α-defensin genes defa24 and defa-rs1 were found to be up-regulated in P9, but not P2, pups. Conversely, developmental expression of Tff2 was massively dysregulated in P2 but not P9 rats and was accompanied by a decrease in the amount of colonic Muc2 16 . These changes will compromise innate GI defenses in susceptible P2 animals but enhance protection in older resistant pups and contribute to the age-related susceptibilities to E. coli K1 infection.
In this study, we investigated the impact of postnatal mucus development on systemic infection in the neonatal rat following E. coli K1 GI tract colonization. Colonizing bacteria formed an intimate association with the epithelium in the mid-section of the P2 small intestine; they translocated by a transcellular route to the blood via the mesenteric lymph nodes in P2, but not P9, animals. There was a strong correlation between susceptibility to systemic infection, the thickness of the mucus layer, and the capacity of bacteria to associate with the epithelial surface. In vivo ablation of Paneth cells or ex vivo disruption of the mucus layer allowed E. coli K1 to invade normally resistant P9 small intestinal tissue, demonstrating cooperation between mucus and antimicrobial secretions in the prevention of E. coli K1 invasive infection.
Dynamics of colonization and infection of neonatal rats by E. coli A192PP. Oral administration of
E. coli K1 strain A192PP to P2 rat pups resulted in GI tract colonization within 24-48 h (Fig. 1a); all animals subsequently developed E. coli K1 bacteraemia and succumbed to infection within one week. The incidence of bacteraemia and mortality were significantly reduced in P2 rats receiving an oral dose of E. coli K1-specific phage K1E (10⁹ PFU) at 12 or 36 but not 50 h after initiation of colonization (Fig. 1a), indicating that invasion of the blood circulation occurs within the first two days of colonization. We previously determined that oral administration of ~10⁶ CFU E. coli A192PP to P2, P5 and P9 pups resulted in a stable GI tract E. coli K1 population of 10⁷-10⁸ bacteria/g GI tissue 16 . There were no significant differences between the age groups at any sampling point over a 120 h period, but there were relatively more E. coli K1 in the colon of P9 compared to P2 pups. [Fig. 1 legend: (a) Survival, colonization and bacteraemic status of neonatal rats dosed orally with 4-8 × 10⁶ CFU E. coli A192PP at P2; colonized animals were fed vehicle buffer (no phage) or 10⁹ PFU K1E bacteriophage 12, 36 or 50 h after receiving colonizing bacteria; n = 12 per group, data from at least two independent experiments. (b) Colonization of 2 cm segments of proximal (PSI), mid- (MSI) and distal (DSI) small intestine and colon excised from P2 and P9 pups 24 and 48 h after oral dosing with E. coli A192PP, determined by culture-dependent enumeration of E. coli K1 bacteria; n = 4-12 animals; line is median, *P < 0.05. (c) Total bacterial load in small intestinal and colonic tissue segments determined by qPCR of 16S rRNA genes; median ± range, *P < 0.05 (Tukey's multiple comparison).]
To examine the spatial distribution in more detail, segments (2 cm) from the proximal (PSI), mid-(MSI) and distal small intestine (DSI) as well as sections of the colon were excised as described earlier 17 . The E. coli K1 content of segments from colonized pups was established 24 and 48 h after colonization by viable counting (Fig. 1b) and confirmed by qPCR (E. coli K1-specific neuS gene; Fig. S1). Significantly more E. coli K1 were found in the PSI and MSI from P2 compared to P9 animals 24 h after oral administration of A192PP, although these differences were not evident 24 h later ( Fig. 1b and S1). Quantification of the total bacterial load by 16S qPCR in non-colonized neonates demonstrated that the P9 colon contained significantly more bacteria than corresponding tissues from P2 pups. Otherwise, there were no significant differences in the distribution of bacteria along the GI tract (Fig. 1c), indicating that differences in K1 content between P2 and P9 intestinal sections ( Fig. 1b and S1) were not due to age-dependent differences in overall bacterial load. Impact of colonization by E. coli A192PP on GI tract integrity. The mammalian neonatal gut is more permeable to ions, small molecules and macromolecules than the adult gut, a phenomenon that is largely attributable to macropinocytosis of luminal content by neonatal enterocytes 18,19 . Pathogens may utilise macropinocytosis as a mechanism to invade host cells; thus, age-dependent alterations in intestinal permeability may play a role in the age-dependency of E. coli K1 infection. In order to assess differences in permeability over the 48 h infection window we fed non-colonized neonates small molecule (FITC, 389 Da), polymeric (FITC-dextran, 4 kDa), and particulate (1 µm diameter beads) fluorescent probes 24 h (Fig. 2a) and 48 h (Fig. 2b) after P2 and P9. Quantification of serum fluorescence 1, 2 and 4 h after oral administration of the probes showed that probe uptake was more rapid in the P2 neonates, but that the P9 gut was also permeable to the probes. This indicated that the development of resistance to E. coli K1 infection over the P2-P9 period was unlikely to be due to developmental alterations in intestinal integrity.
Comparison of serum fluorescence between non-colonized and E. coli K1-colonized neonates provided no evidence that colonization induced any increase in intestinal permeability (Fig. 2a,b), indicating that E. coli K1 colonization did not cause epithelial cell damage. Permeability to the fluorescent microspheres was at least two to three orders of magnitude lower than for the soluble probes. There was a small but significant decrease in GI permeability 24 h after E. coli K1 colonization (Fig. 2a) that was not evident at 48 h, at which point any uptake of the particulate probe was below the level of detection. Lack of damage to the epithelium was supported by examination of H&E-stained PSI, MSI and DSI tissue sections, which revealed a normal neonatal intestinal architecture, including highly vacuolar DSI enterocytes, with no evidence of tissue damage or inflammation (Fig. 2c). Thus, pathogen-induced damage to the epithelial barrier did not play a role in systemic E. coli K1 infection. Intracellular E. coli K1 are found exclusively in mid-small intestine. E. coli K1 bacteria transit from the GI tract to the blood with low frequency after attaining a threshold level in the gut 6 . We examined temporal and spatial relationships between colonizing E. coli A192PP and cell populations in the small intestine and colon 24 and 48 h (Fig. 3a) after initiation of colonization. Sections were fixed to preserve the integrity of the mucus layer and colonizing bacteria were visualized with O18 polyclonal antibody. No bacteria reacted with the antibody in GI tissues from non-colonized P2 and P9 neonates, indicating that bacteria expressing the O18 antigen were not part of the microbiota of these pups. Large quantities of O18 antigen, representing E. coli A192PP, were found in the colon 24, 48 and 60 h after colonization of P2 and P9 pups. At these time points, O18-positive bacteria were maintained at a distance from the apical surface of the epithelium in all regions of the small intestine of animals colonized at P9. In contrast, large numbers of O18-positive bacteria were seen in close proximity to the gut epithelium in the MSI at 24, 48 ( Fig. 3a) and 60 h (not shown) after colonization at P2. External surfaces of the villi in the MSI of animals administered E. coli A192PP at P2 were colonized with E. coli K1 (Figs 3b and 4a). At 24 and 48 h bacteria were located within intracellular vesicles of the villi (Fig. 3c) and progressed through the epithelial barrier by the transcellular route between 24 and 60 h after colonization (Fig. 4b,c). A high proportion of O18 LPS stain was associated with rod-shaped elements, but a significant fraction appeared with a diffuse staining pattern (Fig. 3c). FISH staining confirmed the presence of intact intracellular bacteria (Fig. 4c). Rod-shaped bacteria in contact with the epithelial surface were frequently observed by scanning electron microscopy of the mid small intestine of P2 but not P9 pups (Fig. 4a). Intact bacteria could also be visualized within intracellular compartments of enterocytes in this region of the GI tract using transmission electron microscopy (Fig. 4b). This data indicates that the MSI region provides a transcellular portal of entry into the blood circulation and this conclusion is supported by the presence of significantly greater numbers of E. coli K1 bacteria in mesenteric lymphatic tissue from P2 compared to P9 colonized rat pups (Fig. 4d).
Paneth cell ablation and susceptibility to E. coli K1 infection. Few Paneth cells are present in the small intestine at birth but numbers increase substantially with age 20 . Bacteria induce these cells to secrete substantial amounts of α-defensins 21 , which serve as characteristic markers of Paneth cell activity. Paneth cells are rich in zinc 22 and bind diphenylthiocarbazone (DTZ), a cytotoxic compound with high affinity for zinc that selectively ablates Paneth cells in neonatal rats 23 . We used DTZ-mediated Paneth cell ablation in Sprague Dawley pups (Supplementary files) to determine the contribution of mucus-embedded antimicrobial α-defensins in protection against E. coli K1 systemic infection. Pups received 40-50 mg/kg DTZ at P8 and a second dose at P9 followed by oral challenge with E. coli A192PP 6 h later (Fig. 5a); maximal Paneth cell destruction occurs around 6 h after DTZ administration 24 . As Paneth cells begin to regenerate 24 h after DTZ administration 24 , pups were dosed with DTZ every 24 h until the scheduled end point. Paneth cell ablation in the small intestine was monitored by determination of defa-rs1 and defa24 expression with qRT-PCR. Levels of both α-defensins were suppressed by approximately 40% at 6 h after the initial DTZ dose. Suppression of defa-rs1 and defa24 rose to 60-70% at 20 h and these levels were maintained for at least three days after initiation of E. coli A192PP colonization of DTZ-treated P9 pups (Fig. 5b). Administration of DTZ significantly decreased the number of Paneth cells located in MSI crypts over the P8-P11 period (Fig. 5c), as determined by confocal microscopy following staining of small intestinal tissue sections with antiserum targeting the Paneth and goblet cell-specific Muc2 precursor form (Fig. 5d). Stained tissue sections revealed that DTZ treatment did not affect the number of goblet cells or the mucus layer (Fig. S2a-c). H&E stained sections showed that DTZ treatment did not cause any inflammation in the PSI or MSI, although transient epithelial shedding and inflammation was observed in DSI sections (Fig. S2d). [Fig. 2 legend: Colonization with E. coli K1 does not alter the integrity of the intestinal epithelium. (a, b) Age dependency of intestinal permeability in P2 and P9 pups 24 h and 48 h after E. coli A192PP colonization or sham colonization (broth vehicle alone); plasma fluorescence, expressed as log10 relative fluorescence units (RFU), was determined at the indicated time points after oral administration of the probes. (c) Micrographs of H&E stained PSI, MSI and DSI tissue obtained from neonates 48 h after sham (non-colonized) or A192PP (K1-colonized) feeding at P2. In all experiments, n = 6-8 animals; median ± range; n.s., not significant; *P < 0.05, **P < 0.01 (Student's t); # indicates significant difference between sham-colonized and E. coli A192PP-colonized pups.]
DTZ administration had no impact on the spatial relationship between colonizing E. coli K1 and the epithelial surface in the PSI or DSI (Fig. 5e). In contrast, DTZ exposure enabled colonizing bacteria to gain access to the enterocyte surface in the MSI (Fig. 5e,f). In similar fashion to E. coli A192PP-colonized P2 animals, bacteria were located between the villi of the MSI 48 h after initiation of colonization and were detected within intracellular vesicles of MSI enterocytes. E. coli K1 was observed in the lamina propria, indicating bacterial translocation across the epithelial barrier in this region of the GI tract (Fig. 5f). Survival of pups in DTZ and control groups was high (83.4% and 100% respectively) over the seven day observation period but in contrast to the control group 75% of animals receiving DTZ developed E. coli K1 bacteraemia (Fig. 5g,h).
Maturation of the neonatal mucus barrier coincides with resistance to E. coli K1 infection. All
P2 pups succumb to invasive infection with E. coli A192PP following GI colonization (Figs 1a and 5g) but become progressively more resistant over the P2-P9 period 16 . Although we have shown that disruption of Paneth cell maturation allows E. coli K1 to invade the P9 MSI, secreted antibacterial peptides are primarily localized within the mucus of the small intestine 25,26 . Little is known of early postnatal development of the mucus barrier or of goblet cells, the primary secretors of Muc2 27 . Fixation of MSI tissue in Methacarn for preservation of mucus followed by immunostaining revealed a substantial increase in secreted Muc2 between P2 and P9 (Fig. 6a), a finding supported by detection of mucus-like extracellular material in P9 but not P2 MSI tissue imaged by SEM (Fig. 6b). We therefore hypothesized that the protective small intestinal mucus layer was absent in E. coli K1-susceptible P2 neonates, but present in resistant P9 neonates. To address this, the thickness of the MSI mucus layer over the P2-P11 period was measured using an ex vivo explant system 28 . No mucus was detected in P2-P3 MSI. However, mucus began to fill the inter-villi spaces at P4 and increased progressively in thickness over P4-P9 until it reached the tips of the villi (Fig. 6c), demonstrating that small intestinal mucus layer formation occurred in the post-natal period and was incremental in nature. Importantly, the timeframe of mucus layer formation almost perfectly matched the period over which neonatal rats become resistant to E. coli K1 infection. Plotting survival of neonatal rats orally dosed with E. coli K1 at different ages from P2-P9 against villus exposure (the proportion of the villus not covered by the mucus layer) resulted in a highly significant negative linear correlation (Fig. 6d). The small intestinal mucus is normally penetrable to bacteria or beads the size of bacteria, unlike colonic mucus which acts as an impenetrable physical barrier to bacteria. To test the penetrability of the neonatal small intestinal mucus, ex vivo MSI explants were overlaid with 1 µm fluorescent beads and imaged by confocal microscopy (Fig. 6e,f). Analysis of the spatial localization of beads between the villi showed a similar distribution pattern in explants from P2 and P9 animals, indicating that the neonatal small intestinal mucus remained penetrable to bacteria-sized beads. To shed light on the basis of the regio-selectivity of translocation across the GI epithelium, the development of the PSI and DSI mucus layers over the P2-P9 period was examined using the same ex vivo approach. Formation of the PSI mucus layer was gradual, with villus tips remaining exposed at P9 (Fig. 7a). In contrast, development of the DSI mucus layer was extremely rapid, with the villi covered in mucus by P3 (Fig. 7b). Plotting villus exposure in the PSI and DSI against susceptibility to E. coli K1 infection showed no correlation in the DSI, but a significant negative correlation in the PSI (Fig. 7c). This data suggested that a differential mucus layer maturation rate could explain the inability of E. coli K1 to invade the neonatal DSI tissue, but could not account for the lack of PSI invasion. Translocation of colonizing E. coli K1 from gut lumen to blood circulation is very inefficient, with systemic infection often due to a single bacterial cell that has traversed the gut epithelium and escaped capture by the mesenteric lymphatics 6 . There is likely to be a threshold level of E. 
coli K1 colonizers in the small intestine required to facilitate translocation and infection 6,29 . Determination of the relative numbers of E. coli A192PP in the MSI and PSI of individual animals colonized at P2 revealed a median three- to fourfold difference in the MSI:PSI ratio at both 24 and 48 h after bacterial dosing (Fig. 7d), strongly suggesting that whilst the mucus layer in the PSI is not developed sufficiently to prevent E. coli K1 from gaining access to the epithelial surface, colonizing bacteria do not attain the threshold level necessary to effect epithelial translocation.
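The relationship between mucus coverage and disease outcome summarized in Fig. 6d amounts to a linear regression of survival on villus exposure, where exposure is the proportion of villus height not covered by mucus. The short Python sketch below illustrates that calculation; the villus heights, mucus thicknesses and survival proportions are hypothetical placeholders and do not reproduce the measured data.

```python
# Minimal sketch (hypothetical P2-P9 values): villus exposure vs. survival.
import numpy as np
from scipy import stats

villus_height_um = np.array([180., 200., 220., 240., 260., 280., 300., 320.])
mucus_height_um = np.array([0., 10., 60., 110., 160., 210., 260., 310.])
survival = np.array([0.0, 0.05, 0.2, 0.4, 0.6, 0.8, 0.9, 1.0])

# Villus exposure: proportion of the villus not covered by the mucus layer
exposure = np.clip((villus_height_um - mucus_height_um) / villus_height_um, 0.0, 1.0)

# Negative linear relationship between exposure and survival
fit = stats.linregress(exposure, survival)
print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")
```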
E. coli K1 invasion occurs following removal of mucus.
Mice deficient in Muc2 lack a mucus layer and bacteria are in contact with the intestinal epithelium in the colon 14 and in the small intestine, where bacteria can be found between the villi and near the crypt openings (Birchenough, unpublished). As Muc2 −/− rats are currently not available, the role of mucus in enabling resistance to E. coli K1 infection cannot be directly ascertained. However, by manual disruption of the mucus layer in isolated tissues we were able to determine the effect of mucus removal on the capacity of E. coli K1 to invade intestinal tissue ex vivo. Chamber-mounted P2 and P9 MSI tissues with intact or disrupted mucus (Fig. 8a) were exposed to E. coli A192PP or non-invasive E. coli K12 for 1 h. Intracellular bacteria were then quantified by gentamicin protection (Fig. 8b); no resident gentamicin resistant or intracellular bacteria were present in non-disrupted tissue samples that had not been exposed to these bacteria. No intracellular bacteria were found in tissues with disrupted mucus that were exposed to E. coli K12. In contrast, intracellular bacteria were cultured from tissue with intact mucus exposed to E. coli K1 and significantly more bacteria were found in P2 compared to P9 explants. Critically, disruption of the P9 mucus layer allowed E. coli K1 to invade P9 tissue to the same extent as P2 tissue.
Soluble mucus proteins (SMP) were extracted from the P9 explants and examined for bacteriolytic activity by incubation with E. coli K1 in the presence of the membrane-impermeable vital dye Sytox Green (Fig. 8c). A concentration-dependent increase in Sytox Green fluorescence was observed in cells treated with SMP but not BSA, confirming the presence of bacteriolytic factors in the SMP. When the bacteria were stained with the cellular dye Syto40, a high proportion of Sytox Green positive cells were observed after exposure to SMP compared to BSA (Fig. 8d). These results support the contention that the mucus layer and the antimicrobial factors it contains are the primary factors preventing access to and invasion of the MSI epithelium by E. coli K1. Thus, postnatal development of the small intestinal mucus layer is likely to play a key role in the determination of age-dependent resistance to systemic E. coli K1 infection.
Discussion
A striking facet of human neonatal bacterial meningitis is the remarkably strong age dependency of these invasive infections. The two main etiologic agents, Streptococcus agalactiae and E. coli K1, are common components of the healthy adult microbiota but their vertical transmission to newborn infants carries the risk of severe systemic disease. That this age dependency can be readily replicated in the rat suggests that the capacity of these bacteria to cause neonatal disease is linked to anatomical, physiological or immunological features associated with the early life of the host.
In the rat, E. coli K1 colonization of the GI tract during the early neonatal period elicited changes in expression of genes involved in gut maturation. Downregulation of Tff2 mRNA expression, accompanied by a decrease in colonic Muc2 16 , contrasts with reports of up-regulation of mucus gene expression induced in mice by colonization with commensals such as Lactobacillus acidophilus 30 and raises the question as to whether E. coli K1 causes more generalized disruption of the epithelial barrier to bacterial invasion. We established that E. coli K1 colonization at P2 and P9 does not lead to decreased epithelial barrier function, although general intestinal permeability was significantly higher in P2 compared to P9 pups. In this context, it has been determined in rats 18 and in infants 31 that intestinal permeability is higher in the first days of life. Our results indicate that it is unlikely that particulates, including bacteria, gain access to the submucosa as a result of E. coli K1-induced changes in gut permeability and a more selective mechanism is likely involved in E. coli K1 translocation across the epithelium.
The bulk of GI bacteria, including colonizing E. coli A192PP 16 , are found in the colon and are separated from the colonic epithelial surface by a two-layered mucus system, where the stratified inner layer physically separates the bacteria from the epithelial cells 14 . The outer colonic mucus layer is formed from the inner layer by the action of proteases and in its expanded nature is penetrable by resident bacteria. The colon is not the site of translocation from gut to blood circulation of colonizing E. coli K1 in the susceptible rat as the bacteria are maintained at a safe distance from the epithelial layer (Fig. 3a).
In contrast, the small intestine is protected by a single layer of mucus that, unlike the inner colonic mucus layer, contains pores that allow the penetration of bacteria and bacteria-sized particles 12 . We found very few bacteria in close proximity to the epithelial cell surface in the PSI and DSI. In contrast, 24 h after initiation of colonization of P2 pups large numbers of O18-positive bacteria were observed in close proximity to the epithelial surface of the MSI, within intracellular enterocyte compartments by 48 h (Fig. 3a) and subsequently within the mesenteric lymphatic system (Fig. 3d). As neither O18 LPS nor intracellular Enterobacteriaceae were observed in the non-colonized MSI, we are confident that the intracellular bacteria in this region of the GI tract of colonized animals were E. coli A192PP cells. Presumably a small number 6 were able to circumvent the filter function of the lymph nodes to gain entry to the blood circulation. We did not investigate mechanisms of cellular uptake of E. coli A192PP but it is well established that macropinocytosis occurs in the GI tract of neonatal rats up to approximately 18 days postpartum 32,33 . This is supported by the numerous vacuolar enterocytes observed in the MSI and DSI and suggests that uptake of E. coli K1 proceeds through the macropinocytic pathway 34 . Indeed, some pathogens, including enterohemorrhagic E. coli 35 and Yersinia pseudotuberculosis 36 , are known to utilize macropinocytosis to cross the intestinal epithelium following colonization of the GI tract. Failure of E. coli A192PP to translocate at the PSI or DSI could be partly due to absence of such pathways at these locations.
We previously showed that P9, but not P2, pups respond to the threat from E. coli A192PP colonization by upregulating genes encoding Paneth cell-derived α-defensins 16 . Now we show that partial ablation of Paneth cells in resistant P9 pups enables the pathogen to make contact with the epithelial surface in the MSI as observed in susceptible P2 pups, and to cause bacteraemia in a significant proportion of animals (Fig. 4). Only a limited number of deaths occurred in DTZ-treated pups, indicating that although bacteraemia is lethal in naturally susceptible P2 pups, other factors determine survival in the older neonates. Partial Paneth cell ablation has also been shown to sensitize P4 neonatal rat pups to E. coli O18:K1:H7 strain Ec5 with significant increases in the number of E. coli in the GI tract, mean illness scores and deaths 23 . Examination of ileal sections from preterm infants with necrotizing enterocolitis revealed low numbers of Paneth cells compared to age-matched samples from infants with spontaneous intestinal perforations; replicating this Paneth cell loss with DTZ treatment of P14-P16 mice combined with exposure to Klebsiella pneumoniae induced a similar pathological state 37,38 . Such studies emphasize the parallels between experimental animal models and human disease in GI tract homeostasis.
Our investigation of mucus maturation in the MSI over the critical period P2-P11 (Fig. 6) provides a physiological basis for the susceptibility of neonates to infection and subsequent development of resistance to disease. MSI mucus displays significant antibacterial activity and it is likely that a proportion of the free LPS that we detected in GI sections represents debris from bacteria lysed by antibacterial peptides confined within the mucosal barrier. Villi in this region of the GI tract almost certainly make frequent contact with luminal bacteria, including colonizing E. coli K1, due to the inadequate protection provided by the thin, developing mucus layer. Newborns face a number of complex and sometimes conflicting immunological demands, including avoidance of excessive inflammatory responses to bacteria and their products, that may leave them at risk of infection 39 . An immature immune system means the neonatal host must rely to a large extent on innate immunity to repel would-be pathogens; our study demonstrates the importance of a mature antibacterial mucus barrier.
As with the MSI, villi in the PSI remain exposed as the mucus layer matures in this region of the small intestine but the burden of colonizing E. coli K1 cells in the PSI is significantly lower than in the MSI. As colonizers of the small intestine need to achieve a threshold level in order to make sufficient contact with epithelial surfaces prior to translocation, we hypothesise that the number of colonizing bacteria in the PSI is insufficient to effect epithelial traversal and translocation to the blood compartment. On the other hand, the mucus layer in the DSI matures early to protect the epithelium and ensures that no bacterial contact with the epithelial surface occurs. These observations would appear to provide a rationale for the regio-selective susceptibility to E. coli K1 invasion from the neonatal GI tract.
Penetration of small intestinal mucus by bacteria and other microparticles is poorly understood. Nutrient uptake is localized to this region of the intestine and a relatively porous mucosal layer may be functionally relevant. There is evidence that lack of close contact between bacteria and epithelial cells in adult animals is due to antibacterial peptides and proteins produced by Paneth cells (for example, ref. 12). The present study shows for the first time that partial Paneth cell ablation, leading to a lower concentration of antibacterial peptides, enables bacteria to gain access to the epithelium, even in the presence of an intact mucus layer. On the other hand, in the absence of mucus the antibacterial peptides will quickly diffuse into the lumen and have limited effects where most needed, close to the epithelial surface. The mucus provides the necessary diffusion barrier for the antibacterial peptides and also slows bacterial penetration. Thus, the antibacterial components and mucus act cooperatively to protect the small intestine. The poorly developed neonatal MSI mucus layer likely enables transcytosis and invasion by the E. coli K1 neuropathogen. Our proposed model for E. coli K1 and MSI interaction in the neonatal rat is shown in Fig. S3.
Methods
Bacterial enumeration by qPCR. Quantitative PCR was employed to quantify the bacterial constituents of the GI microbiota and to enumerate E. coli K1 in GI tissue segments. Total bacteria were estimated by amplification of 16S rDNA and quantification of E. coli K1 was based on amplification of K1-specific neuS 16 .
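Absolute quantification of this kind generally relies on a standard curve relating quantification cycle (Cq) values to input copy number. The snippet below is a generic illustration of that calculation in Python; the dilution series, Cq values and sample are invented, and the actual assay conditions are those of ref. 16.

```python
# Illustrative sketch: standard-curve quantification of qPCR data (hypothetical values).
import numpy as np

def fit_standard_curve(log10_copies, cq):
    """Fit Cq = slope * log10(copies) + intercept; also report amplification efficiency."""
    slope, intercept = np.polyfit(log10_copies, cq, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0   # a slope of about -3.32 corresponds to ~100%
    return slope, intercept, efficiency

def copies_from_cq(cq, slope, intercept):
    """Interpolate the copy number of an unknown sample from its Cq value."""
    return 10 ** ((cq - intercept) / slope)

standards = np.array([1e7, 1e6, 1e5, 1e4, 1e3])            # target-gene dilution series (copies)
cq_standards = np.array([14.1, 17.5, 20.9, 24.2, 27.6])    # measured Cq per dilution
slope, intercept, eff = fit_standard_curve(np.log10(standards), cq_standards)

sample_cq = 22.3
print(f"copies per reaction: {copies_from_cq(sample_cq, slope, intercept):.2e} "
      f"(efficiency ~{eff * 100:.0f}%)")
```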
Intestinal permeability. The GI tract permeability of rat pups was assessed using FITC, FITC-dextran (Sigma) and 1 μm fluorescent microspheres (FluoSpheres; Molecular Probes). For FITC and FITC-dextran, pups were fed 15 μl of probe solution (800 mg/ml in sterile PBS) at 24 and 48 h after colonization or sham-colonization (with MH broth vehicle alone) and culled 1, 2 or 4 h after administration. For FluoSpheres, pups were fed a 20 μl suspension of microspheres (containing 7.27 × 10⁸ beads) and animals culled up to 48 h later. Plasma fluorescence was determined against plasma from control rat pups, and values normalized for total blood volume for each pup.
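One plausible way to implement the blood-volume normalization mentioned above is to convert background-corrected plasma fluorescence into a per-pup total, using an assumed circulating blood volume per gram of body weight. The helper below is only a sketch of that arithmetic; the 70 µl/g figure, the function name and the example readings are assumptions, not values from this study.

```python
# Sketch of the permeability read-out: background-corrected plasma fluorescence
# scaled to an estimated total blood volume for each pup (assumptions flagged below).
def permeability_index(sample_rfu, control_rfu, body_weight_g,
                       blood_volume_ml_per_g=0.07):   # assumed ~70 µl blood per g body weight
    """Total fluorescent probe signal in the circulation of one pup (arbitrary units)."""
    corrected = max(sample_rfu - control_rfu, 0.0)           # subtract control plasma signal
    total_blood_ml = body_weight_g * blood_volume_ml_per_g   # estimated circulating volume
    return corrected * total_blood_ml

print(permeability_index(sample_rfu=5200, control_rfu=300, body_weight_g=8.5))
```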
Tissue collection and processing. Neonatal rats were killed by decapitation. GI tissues were segmented into stomach, mesentery and colon, and 2 cm segments from proximal, middle and distal small intestine excised without washing 17 . Tissues were placed in methanol-Carnoy's fixative (Methacarn) and maintained at room temperature for at least 3 h. For enumeration of E. coli K1 by viable counting and for extraction of DNA, tissues were placed in 2 ml ice-cold phosphate-buffered saline, homogenized and DNA extracted with QIAamp Stool DNA Mini Kit (Qiagen). For RNA extraction, tissues were transferred to RNAlater (Ambion), stored overnight at 4 °C and RNA extracted with RNeasy Midi Kit (Qiagen).
Histology, immunohistochemistry and microscopy. Paraffin-embedded sections were dewaxed and hydrated; some sections were stained with H&E. Muc2 was stained using polyclonal rabbit antiserum raised against either non-glycosylated apo-Muc2 protein (PH497) or mature glycosylated Muc2 protein (Muc2C3); Alexa Fluor 555/488-conjugated goat anti-rabbit IgG (ThermoFisher) was used as secondary antibody. FISH was performed using the Alexa Fluor 555-conjugated enterobacterial 16S probe 5′-CCC CCW CTT TGG TCT TGC-3′ 14 . Immunofluorescent and FISH sections were counterstained using Hoechst 33258 (Sigma). E. coli A192PP was detected in tissue sections using rabbit polyclonal antibody against O18 LPS surface antigen 8 . Stained sections were mounted in Prolong Anti-fade (Life Technologies). Images were acquired using an LSM 700 confocal microscope equipped with an oil immersion lens and 488 nm and 555 nm lasers (Zeiss). Micrographs of GI tissues were obtained using a JEOL JSM-5300 scanning electron microscope; tissues were processed as described 40 .
Selective ablation of Paneth cells. DTZ (10 mg/ml) was prepared in 25 mM Li₂CO₃ 23 . Neonatal rats received 40-50 mg/kg DTZ by i.p. injection. The minimum bactericidal concentration of DTZ for E. coli K1 exceeded 4 mg/ml. Paneth cell ablation was determined by measurement of reduction of defa-rs1 and defa24 expression with quantitative RT-PCR using RNA extracted from GI tissue segments 16 . Tissues were collected from the small intestine, fixed, sectioned and examined as described above. Administration of DTZ did not alter the appearance of sectioned GI tract tissues (Fig. S2D).
Ex-vivo characterization of the small intestinal mucus layer.
Mucus was characterized using a tissue explant system 28 . Small intestinal tissues were dissected, flushed with Krebs buffer and mounted in a 1.5 mm diameter horizontal perfusion chamber. The mucus thickness was quantified by visualizing the mucus surface using apical addition of 10 µm black polystyrene beads and the distance between the mucus surface and villus base measured using a micropipette viewed through a stereomicroscope. Villus height was quantified by measuring the distance between the villus tip and base. Mucus penetrability was characterized as described 41 .
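For orientation, the sketch below shows how mucus height and villus exposure (the quantity related to susceptibility elsewhere in the paper) can be derived from the repeated micropipette readings; the function name and the example measurements are illustrative only.

```python
# Illustrative sketch: per-explant mucus height, villus height and villus exposure.
import numpy as np

def explant_summary(mucus_surface_to_base_um, villus_tip_to_base_um):
    """Average repeated measurements and express villus exposure as a proportion."""
    mucus_height = np.mean(mucus_surface_to_base_um)    # mucus surface -> villus base
    villus_height = np.mean(villus_tip_to_base_um)      # villus tip -> villus base
    exposure = max(villus_height - mucus_height, 0.0) / villus_height
    return mucus_height, villus_height, exposure

# Hypothetical P4 explant: mucus fills the inter-villus spaces but the villus tips are exposed
print(explant_summary([60, 75, 70, 65], [240, 255, 250, 245]))
```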
Chamber-mounted tissue was stained with 15 µM Syto9 dye (ThermoFisher) for 15 min then overlaid with 1 µm crimson Fluosphere beads (ThermoFisher). Beads were allowed to sediment for 10 min and tissue and beads visualized by generating confocal z-stacks using an LSM 700 confocal microscope equipped with a water immersion objective lens and 488 nm and 640 nm lasers (Zeiss). Fluorescence intensity data for individual z-planes was extracted from z-stacks using Zen software (Zeiss) and was used to determine bead distribution in relation to villus tip and base.
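The per-plane intensity values exported from the z-stacks can be reduced to a simple penetrability summary, for example the intensity-weighted position of the bead signal along the villus axis. The sketch below shows one such reduction; the z positions, intensity profile and the 0.5 threshold are hypothetical and do not correspond to the acquisition settings used here.

```python
# Sketch: summarizing bead penetration from per-z-plane bead-channel intensities.
import numpy as np

def bead_distribution(z_um, bead_intensity, z_tip_um, z_base_um):
    """Return the intensity-weighted bead position (0 = villus tip, 1 = villus base)
    and the fraction of bead signal in the lower half of the inter-villus space."""
    z = np.asarray(z_um, dtype=float)
    w = np.asarray(bead_intensity, dtype=float)
    rel = (z - z_tip_um) / (z_base_um - z_tip_um)    # relative depth along the villus axis
    mask = (rel >= 0) & (rel <= 1)
    centre_of_mass = np.average(rel[mask], weights=w[mask])
    frac_lower_half = w[mask][rel[mask] > 0.5].sum() / w[mask].sum()
    return centre_of_mass, frac_lower_half

# Hypothetical z-stack in which beads sediment well below the villus tips (penetrable mucus)
z = np.arange(0, 201, 10)                  # µm
signal = np.exp(-((z - 120.0) ** 2) / 800.0)
print(bead_distribution(z, signal, z_tip_um=40.0, z_base_um=200.0))
```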
Ex-vivo bacterial tissue invasion assay. Small intestinal tissue was mounted in a chamber and the mucus surface visualized using polystyrene beads as described above. In some tissues the mucus layer was disrupted and removed using a micropipette. Mucus removal was confirmed by reapplication of polystyrene beads. E. coli A192PP and K12 strain W3110 (10⁶ CFU) were applied apically to tissue explants and incubated at 37 °C for 1 h. Explants were washed with Krebs buffer, transferred to Krebs buffer with 100 µg/ml gentamicin (Sigma) and incubated for 1 h at room temperature. Tissues were washed twice (4 °C; 10 min), homogenized, and viable bacteria determined.
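Viable counts from the gentamicin-protection step are usually back-calculated from dilution plating of the tissue homogenate. The short sketch below illustrates that arithmetic; the colony count, dilution and plated volume are invented, and the 2 ml homogenate volume is borrowed from the tissue collection procedure above as an assumption.

```python
# Sketch: intracellular CFU per explant from dilution plating of the homogenate.
def intracellular_cfu(colonies, dilution_factor, plated_volume_ml, homogenate_volume_ml=2.0):
    """CFU in the whole homogenate = colonies / volume plated * dilution * total volume."""
    cfu_per_ml = colonies / plated_volume_ml * dilution_factor
    return cfu_per_ml * homogenate_volume_ml

# e.g. 46 colonies from 0.1 ml of a 1:100 dilution of a 2 ml homogenate
print(f"{intracellular_cfu(46, 100, 0.1):.2e} CFU per explant")
```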
Bacteriolytic activity of soluble mucus protein extracts. Mucus was collected into 10 mM sodium phosphate (SP) buffer from chamber mounted small intestinal explants using a micropipette. Insoluble material was removed, supernatants containing SMP recovered, and protein quantified using BCA assay kit (Pierce). The capacity of SMP to permeabilize E. coli K1 cells was assessed by Sytox Green uptake. Mid-log phase E. coli A192PP cells were suspended in 10 mM SP buffer with 1 µM Sytox Green (ThermoFisher) and incubated in the dark for 10 min. Bacteria were diluted 1:20 into 10 mM SP buffer containing 1 mM EDTA and 2 µM Sytox Green and 100 µl bacteria distributed into wells of a 96-well black plate; 100 µl dilutions of SMP were added to each well and incubated for 5 h at 37 °C with shaking. Sytox Green fluorescence was normalized to wells containing bacteria without SMP or BSA. To visualize cells, suspensions were incubated with 10 µM Syto40 dye for 15 min and 20 µl transferred onto a microscope slide, air dried, mounted and examined using a LSM 700 confocal microscope with an oil immersion objective lens and 405 nm and 488 nm lasers (Zeiss). | 2018-03-06T14:59:03.032Z | 2017-03-06T00:00:00.000 | {
"year": 2017,
"sha1": "7d4f5809ffc0c7715b696a1081089b547c5338d3",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-00123-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5afdc2f5fc16dde8d32a6d6ce0f36132051f9ef8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
7776453 | pes2o/s2orc | v3-fos-license | The Predictive Effects of Early Pregnancy Lipid Profiles and Fasting Glucose on the Risk of Gestational Diabetes Mellitus Stratified by Body Mass Index
This study aimed at evaluating the predictive effects of early pregnancy lipid profiles and fasting glucose on the risk of gestational diabetes mellitus (GDM) in patients stratified by prepregnancy body mass index (p-BMI) and to determine the optimal cut-off values of each indicator for different p-BMI ranges. A retrospective system cluster sampling survey was conducted in Beijing during 2013 and a total of 5,265 singleton pregnancies without prepregnancy diabetes were included. The information for each participant was collected individually using questionnaires and medical records. Logistic regression analysis and receiver operator characteristics analysis were used in the analysis. Outcomes showed that potential markers for the prediction of GDM include early pregnancy lipid profiles (cholesterol, triacylglycerols, low-density lipoprotein cholesterol/high-density lipoprotein cholesterol ratios [LDL-C/HDL-C], and triglyceride to high-density lipoprotein cholesterol ratios [TG/HDL-C]) and fasting glucose, of which fasting glucose level was the most accurate indicator. Furthermore, the predictive effects and cut-off values for these factors varied according to p-BMI. Thus, p-BMI should be a consideration for the risk assessment of pregnant patients for GDM development.
Introduction
Gestational diabetes mellitus (GDM) is a common complication during pregnancy that occurs in 17.5% of Chinese pregnant women [1]. GDM leads to various adverse pregnancy outcomes not only during the perinatal phase but also in the long term [2,3]. For example, women with GDM display an elevated risk for shoulder dystocia and the need for a caesarean section at birth, whereas the foetuses of women with GDM are prone to excessive intrauterine growth and neonatal hypoglycaemia. Furthermore, both women with GDM and their offspring experience a greater risk of obesity, type 2 diabetes, and cardiovascular disorders in the future.
Like type 2 diabetes, peripheral insulin resistance, combined with inadequate insulin secretion, plays the primary role in the pathophysiology of GDM. However, the exact aetiology of such insulin dysfunction is still not clearly understood. Investigators have indicated that lipid abnormalities serve as an inducing factor for insulin resistance [4]. Furthermore, the triglyceride to high-density lipoprotein cholesterol ratio (TG/HDL-C) has been confirmed as a clinical indicator of insulin resistance [5]. In addition, obesity is an independent risk factor for GDM, and the risk of GDM rises with an increase in the prepregnancy body mass index (p-BMI) [6]. All of these findings suggest that lipid profiles during early pregnancy may have an effect on the development of GDM, and this effect may vary due to p-BMI.
However, studies focused on the role of early pregnancy lipid profiles for the prediction of GDM are very limited and inconsistent [7,8]. In China, in particular, lipid profile tests are not routinely requested at the first prenatal visit, and nearly half of all pregnant Chinese women do not undergo lipid profile testing during the first trimester. Additionally, the fasting glucose level at the first prenatal visit has a strong ability to predict the incidence of GDM later in pregnancy [1]. Therefore, to clarify the predictive role of lipid profiles during early pregnancy and to compare the lipid prediction value with the fasting glucose prediction value, this study aimed to evaluate the predictive effect of lipid profiles [cholesterol, triacylglycerols, low-density lipoprotein cholesterol/high-density lipoprotein cholesterol ratio (LDL-C/HDL-C), and TG/HDL-C ratio] and fasting glucose levels during early pregnancy on the risk of GDM in groups stratified by p-BMI. All participants provided written informed consent, and the ethics committee approved this consent procedure.
Materials and Methods
All participants in the retrospective study were eligible for the present analysis unless they met one or more of the following exclusion criteria: preexisting diabetes mellitus (DM) (209 patients), multiple births (253 patients), or missing data on early pregnancy lipid and fasting glucose concentrations (9,467 patients). Ultimately, a total of 5,265 participants were included.
Data Collection.
A questionnaire was designed to collect information by interviewing all of the pregnant women who delivered during the study period and by reviewing their medical records from the day after they gave birth. The questionnaire was composed of two main sections: a demographic information section and a case data section. The demographic information section was completed during a face-to-face interview in the hospital room of the patient, whereas the case data section included information such as the glucose and lipid concentrations during early pregnancy and the dates on which they were evaluated as well as 75 g oral glucose tolerance test (OGTT) results. The information included in the latter section was reviewed and extracted from the medical records of each patient by the investigators. Prepregnancy weight was self-reported by each patient and was individually collected by our investigators.
The investigators at each hospital were trained before administering the survey. Each completed questionnaire was verified by an inspector. Data were coded and entered into a specially designed data software program that automatically flagged out-of-range values and logical mistakes. All questionnaires were administered by two individuals independently and were reviewed by a third person.
Definitions
(1) GDM. The GDM diagnostic criteria in this study followed the new criteria amended in August 2014 in China, which recommended that the diagnosis of GDM should be made when any one value met or exceeded a 0 h glucose level of 5.1 mM, a 1 h glucose level of 10.0 mM, and a 2 h glucose level of 8.5 mM after a diagnostic 75-g OGTT between the 24th and 28th week of gestation. A 0-h glucose level of 7.0 mM or a 2-h glucose level of 11.1 mM was considered sufficient to diagnose DM at any time [9], regardless of pregnancy stage.
(2) Definition of Early Pregnancy. It was defined as gestational age of less than 14 weeks.
(3) Prepregnancy BMI (p-BMI). Prepregnancy BMI was calculated as the prepregnancy weight (within three months before pregnancy) in kilograms divided by height in metres squared (kg/m²).
Statistical Analysis.
We first divided the participants into two groups according to their 75-g OGTT results between 24 and 28 weeks of gestation, namely, the healthy group and the GDM group. We then evaluated the differences in the early pregnancy lipid profiles and fasting glucose concentrations between the women in the two groups. Then, based on the p-BMI categories of the participants (underweight: p-BMI < 18.5 kg/m²; normal weight: 18.5 ≤ p-BMI < 24 kg/m²; overweight: 24 ≤ p-BMI < 28 kg/m²; obese: p-BMI ≥ 28 kg/m²), we further divided the participants into four groups to assess the individual and joint value of early pregnancy lipid and fasting glucose levels for the prediction of GDM in patients stratified by p-BMI.
Data analysis was performed using the SAS 9.2 statistical software package (Peking University Clinical Research Institute). Continuous variables were expressed as the means ± standard deviation, and categorical variables were expressed as numbers and percentages. Differences in means between groups were assessed using the independent samples t-test, whereas Pearson's chi-square test was used for categorical variables. The threshold for statistical significance was set at 0.05. A multivariate binary logistic regression analysis was conducted to evaluate potential markers that were significantly associated with GDM after adjusting for the effects of age and family history of DM. Receiver operating characteristic (ROC) analysis was performed using MedCalc software, and area under the curve (AUC) statistics were subsequently derived to determine the value of the individual factors and their combined ability to accurately discriminate the subjects of the GDM and non-GDM groups. A 95% confidence interval (CI) for the ROC analysis was calculated using the bootstrap method. The ROC analysis was also used to determine an optimal cut-off point for risk assessment (i.e., a cut-off yielding the best trade-off between sensitivity and specificity). In our study, we selected the optimal cut-off point as the point at which the sum of the sensitivity and specificity was highest.
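The analysis pipeline described above (stratum-wise adjusted logistic regression, ROC curves, and a cut-off maximizing the sum of sensitivity and specificity, i.e. the Youden index) can be illustrated with the following Python sketch. It uses synthetic data and invented coefficients purely to show the structure of the calculation; the original analysis was carried out in SAS and MedCalc, and none of the numbers below correspond to the study data.

```python
# Hedged sketch on synthetic data: per-stratum adjusted logistic model, ROC AUC and a
# Youden-index cut-off (sensitivity + specificity maximal) for fasting plasma glucose.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "fpg": rng.normal(4.6, 0.5, n),      # early-pregnancy fasting glucose, mM (synthetic)
    "age": rng.normal(29, 4, n),
    "fam_dm": rng.binomial(1, 0.1, n),
    "p_bmi": rng.normal(22, 3, n),
})
logit_p = -14 + 2.5 * df.fpg + 0.05 * df.age + 0.6 * df.fam_dm   # invented coefficients
df["gdm"] = rng.binomial(1, np.asarray(1 / (1 + np.exp(-logit_p))))

# Stratify by prepregnancy BMI using the study's category boundaries
df["stratum"] = pd.cut(df.p_bmi, [0, 18.5, 24, 28, np.inf],
                       labels=["underweight", "normal", "overweight", "obese"], right=False)

for name, sub in df.groupby("stratum", observed=True):
    if sub.gdm.nunique() < 2:
        continue
    X = sm.add_constant(sub[["fpg", "age", "fam_dm"]])
    fit = sm.Logit(sub.gdm, X).fit(disp=0)
    or_fpg = np.exp(fit.params["fpg"])                  # adjusted OR per 1 mM fasting glucose
    fpr, tpr, thresholds = roc_curve(sub.gdm, sub.fpg)  # ROC for fasting glucose alone
    cut = thresholds[np.argmax(tpr - fpr)]              # Youden-index cut-off
    print(f"{name}: OR(FPG) = {or_fpg:.2f}, AUC = {roc_auc_score(sub.gdm, sub.fpg):.2f}, "
          f"cut-off = {cut:.2f} mM")
```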
Results
A total of 5,265 pregnant women were recruited for our analysis. Compared with the excluded mothers (n = 9,929), the included participants were more likely to have a family history of DM (p < 0.05), but their age, p-BMI, education levels, and GDM incidence rate showed no significant differences. The baseline characteristics of the study population are summarized in Table 1. Of the study population, 1,062 women (20.2%) were diagnosed with GDM. Age, p-BMI, and family history of DM were all significantly higher in women with GDM. Furthermore, early pregnancy serum fasting glucose, cholesterol, and triacylglycerol levels as well as TG/HDL-C and LDL-C/HDL-C ratios were significantly elevated in women with GDM compared with healthy pregnant women (p < 0.001) (Table 2). Multivariate binary logistic regression analysis showed that, among women with a p-BMI ≥ 18.5 kg/m², the odds ratio (OR) values of each lipid marker (cholesterol, triacylglycerols, TG/HDL-C ratio, and LDL-C/HDL-C ratio) increased as the p-BMI increased. Additionally, after adjusting for age and family history of DM, early pregnancy cholesterol concentrations were significantly related to the incidence of GDM in normal-weight (OR = 1.13, p = 0.02) and obese (OR = 1.55, p = 0.02) women; serum triacylglycerol level during early pregnancy was significantly associated with the prevalence of GDM in normal-weight (OR = 1.08, p = 0.04) and overweight (OR = 1.33, p = 0.01) women; the LDL-C/HDL-C ratio was only significantly associated with the risk of GDM in overweight women (OR = 1.42; p = 0.01). However, the TG/HDL-C ratio was a better predictor for GDM, as the associated OR value for the risk of GDM reached statistical significance in normal-weight (OR = 1.43, p < 0.001), overweight (OR = 1.63, p < 0.001), and obese (OR = 1.89, p = 0.03) women. However, fasting glucose levels in the first trimester appeared to be the best predictor for GDM. In each p-BMI category, fasting glucose levels were independently and significantly associated with the development of GDM later during pregnancy, and the OR values increased as the p-BMI level increased (underweight: OR = 2.93, p < 0.001; normal weight: OR = 3.15, p < 0.001; overweight: OR = 3.48, p < 0.001; obese: OR = 3.68, p < 0.001) (Table 3).
Based on the ROC curves and AUC measurements, we determined the optimal cut-off values for early pregnancy fasting glucose, cholesterol, and triacylglycerol levels as well as for the TG/HDL-C and LDL-C/HDL-C ratios in each p-BMI stratification group, after which we analysed the value of combinations of these factors for the ability to predict GDM. All of these results are shown in Table 4 and Figure 1. After adjusting for age and family history of DM, we found that the optimal cut-off values for each indicator varied according to p-BMI level, and the cut-off values for cholesterol, TG/HDL-C ratio, and fasting glucose increased as the p-BMI increased.
Discussion
The results of this study suggest that, compared with normal pregnant women, women with GDM display significantly increased levels of cholesterol, triacylglycerols, and fasting glucose as well as elevated TG/HDL-C and LDL-C/HDL-C ratios during early pregnancy. Moreover, both of early pregnancy lipid profiles and fasting glucose can be potential markers for the prediction of GDM, and their predictive value improved as the p-BMI increased. However, fasting glucose values in early pregnancy appeared to be the best discriminator of GDM. Changes in the early pregnancy lipid profiles in women with GDM were consistent with dyslipidaemia in obesity, which is characterized by elevated cholesterol, triacylglycerols, and LDL-C levels as well as decreased HDL-C levels [11]. These changes in lipid profiles are due to metabolic factors associated with insulin resistance [12], but the precise mechanisms for the associations between early pregnancy dyslipidaemia and the risk of GDM remain unclear. However, some aetiologies have been hypothesized. Some studies showed that excessive lipid accumulation may lead to an elevated oxidative stress, which correlates with insulin resistance [13], whereas other studies have shown that abnormal lipid metabolism can lead to the direct destruction of the function of cells of the pancreas [14]. In addition, obesity not only is an important risk factor for GDM but also exerts a critical influence on lipid profiles, which supports our hypothesis that lipid profiles during early pregnancy can predict the development of GDM, though their predictive effects are not very strong in our study.
Numerous studies have demonstrated a relationship between dyslipidaemia and glucose intolerance as well as type 2 diabetes [15,16]; however, studies focused on the assessment of the association between early pregnancy lipid profiles and GDM risk are limited and inconsistent. Enquobahrie et al. found that women with plasma triacylglycerol levels ≥ 137 mg/dL displayed a 3.5-fold increased risk of GDM, and each 20 mg/dL increase in triacylglycerol levels led to a 10% increase in GDM risk; however, no other associations could be observed between lipid changes and GDM risk [7]. dos Santos-Weiss and colleagues obtained similar results after conducting a matched case-control study. They noted that the TG/HDL-C ratio and triacylglycerol levels between 12 and 13 weeks of gestation were good indicators for the prediction of GDM; however, the TG/HDL-C ratio was a much better predictor [17]. In contrast, another study indicated that only low HDL-C concentrations had a significant and independent role for the prediction of GDM [18].
In our study, the results suggest that the potential for each lipid index to predict GDM varied according to p-BMI. However, the inconsistency may also be due to confounding factors or insufficient sample size. Moreover, we observed that the prevalence odds for each lipid index for the prediction of GDM presented an approximate linear trend between groups. Although some of the OR values were not significant, our data show that patients with higher p-BMI values show a more significant association between early pregnancy lipid profiles and GDM incidence. This point was also demonstrated based on our ROC curve analysis. The AUC values for each lipid marker increased in conjunction with p-BMI. Moreover, similar results were observed for early pregnancy fasting glucose levels and the combined value of fasting glucose levels and lipid markers for the prediction of GDM. Early pregnancy fasting glucose was the most accurate predictor for the development of GDM according to our analysis, which is in accordance with previous studies. Harrison et al. indicated that, after adjusting for confounders, only elevated fasting glucose level but not elevated triacylglycerol level or decreased HDL-C level during early pregnancy was significantly associated with GDM development. The OR for fasting glucose and GDM risk was 10.03 according to the International Association for Diabetes in Pregnancy Study Group (IADPSG) GDM diagnostic criteria, and the AUC was 0.83 (95% CI: 0.77-0.90) [19].
In our present analysis, significant associations between increased early pregnancy fasting glucose levels and the risk of developing GDM were present for each p-BMI subgroup. A previous study by our group evaluated the value of fasting glucose testing at the first prenatal visit for the diagnosis of GDM in China, and the results demonstrated that fasting glucose values at the first prenatal visit strongly correlated with the development of GDM (Pearson's χ² = 959.3, p < 0.001). Furthermore, Zhu et al. proposed that, for women with fasting glucose levels between 5.10 mM and 6.09 mM at the first prenatal visit, a healthy lifestyle intervention was needed for the prevention of GDM [1].
However, when we tried to improve the predictive ability of early pregnancy lipid profiles and fasting glucose by combining them as a joint risk assessment tool, we found that, in each p-BMI stratification group, using lipid profiles and fasting glucose together gave only a slightly higher AUC than using fasting glucose as the sole predictor of GDM. In other words, adding early pregnancy lipid profiles to fasting glucose did not substantially improve the ability to detect GDM. Thus, early pregnancy fasting glucose remains the key risk factor that should be screened early for effective identification of GDM risk.
Another major contribution from our study was determining the optimal cut-off values for each lipid marker and fasting glucose for the prediction of GDM according to p-BMI. To the best of our knowledge, this is the first study to evaluate the optimal cut-off value for predicting GDM according to the four p-BMI categories. Of note, the cut-off value for each indicator analysed in this study varied according to the p-BMI category, and the cut-off values of the cholesterol level, TG/HDL-C ratio, and fasting glucose level increased as the p-BMI increased. For example, the cut-off values for the TG/HDL-C ratio were 0.63, 0.66, 1.07, and 1.20 for prepregnancy underweight, normal-weight, overweight, and obese women, respectively. Although the sensitivity and specificity for each lipid marker were not very high, our results indicate that p-BMI is an important factor which should be considered when using early markers to predict GDM. Thus, it is necessary to examine the laboratory results of pregnant women in the context of their p-BMI to determine GDM risk.
Few studies have evaluated the cut-off values of the lipid markers for the prediction of GDM. dos Santos-Weiss et al. noted that when defining the cut-off point of the TG/HDL-C ratio between 12 and 23 weeks of gestation as 0.099, the sensitivity and specificity reached 82.6% and 83.4%, respectively (AUC 0.886, p < 0.0001) [17]. Another study performed in Chinese patients established the cut-off value for the TG/HDL-C ratio between gestation weeks 24 and 28 with respect to the detection of GDM at 1.12 (AUC 0.617, 95% CI: 0.548-0.686) [20]. In our study, we restricted the definition of early pregnancy to before gestation week 14 and analysed the optimal lipid and fasting glucose cut-off values in patients divided into four p-BMI categories. Therefore, our study was more rigorously structured than previous studies.
This study was based on a systemic cluster sampling survey and was rationally designed. Additionally, our study was conducted by trained staff, and most of the items included in the questionnaire were based on medical records. This approach ensured the standardization of data collection. Furthermore, our analysis yielded details for patients in each of the four p-BMI categories. However, several limitations of this study should be noted. First, our study was retrospective in nature, and the p-BMI values were self-reported; therefore, group classification errors are possible due to recall bias. Second, lipid profiles and fasting glucose levels are closely related to lifestyle. However, because we did not consider lifestyle factors in our analysis, patient lifestyle could be a confounding factor for some of our analyses. Third, the sample size included in our study may still have been insufficient. Moreover, in this study, we focused on Chinese singleton, nondiabetic, pregnant women; therefore, our results may not be generalizable to the overall population.
In conclusion, our study showed that women with GDM had significantly increased cholesterol levels, triacylglycerol levels, TG/HDL-C ratios, LDL-C/HDL-C ratios, and fasting glucose levels during early pregnancy. Moreover, early pregnancy cholesterol levels, triacylglycerol levels, TG/HDL-C ratios, LDL-C/HDL-C ratios, and fasting glucose levels were associated with GDM risk, while fasting glucose appeared to be the most accurate predictor of GDM. In addition, because the optimal cut-off value for each indicator analysed in this study varied according to p-BMI, and the predictive value of these factors increased in significance along with p-BMI, p-BMI should be considered when using early markers to predict GDM. However, several problems still need to be addressed, such as determining the optimal time during which lipid markers most accurately predict GDM development and determining the precise mechanism linking dyslipidaemia and GDM. Thus, future clinical trials and basic research studies should be conducted to address these questions. | 2017-11-17T21:42:29.549Z | 2016-02-15T00:00:00.000 | {
"year": 2016,
"sha1": "95e9e4412c678b47bf2225ddd822157ae6938cca",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jdr/2016/3013567.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b880f51089d731f4ef34de1e27f07fdfd7a1c3a5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17166501 | pes2o/s2orc | v3-fos-license | Deletion of the γ-secretase subunits Aph1B/C impairs memory and worsens the deficits of knock-in mice modeling the Alzheimer-like familial Danish dementia
Mutations in BRI2/ITM2b genes cause Familial British and Danish Dementias (FBD and FDD), which are pathogenically similar to Familial Alzheimer Disease (FAD). BRI2 inhibits processing of Amyloid precursor protein (APP), a protein involved in FAD pathogenesis. Accumulation of a carboxyl-terminal APP metabolite, β-CTF, causes memory deficits in a knock-in mouse model of FDD, called FDDKI. We have further investigated the pathogenic function of β-CTF by studying the effect of Aph1B/C deletion on FDDKI mice. This strategy is based on the evidence that deletion of Aph1B/C proteins, which are components of the γ-secretase that cleaves β-CTF, results in stabilization of β-CTF and a reduction of Aβ. We found that both the FDD mutation and the Aph1B/C deficiency mildly interfered with spatial long-term memory, spatial working/short-term memory and long-term contextual fear memory. In addition, the Aph1BC deficiency induced deficits in long-term cued fear memory. Moreover, the two mutations have additive adverse effects as they compromise the accuracy of spatial long-term memory and induce spatial memory retention deficits in young mice. Overall, the data are consistent with a role for β-CTF in the genesis of memory deficits.
As briefly mentioned above, mutations in BRI2/ITM2B cause the AD-like autosomal dominant FBD and FDD [5,7]. FBD is characterized by the early onset of personality changes, memory and cognitive deficits, spastic rigidity, and ataxia [5]. FDD patients present early onset cataracts, deafness, progressive ataxia and dementia [7]. BRI2 is a type II membrane protein of 266 amino acids that is cleaved at the C terminus into a peptide of 23 amino acids (Bri23) plus a membrane-bound mature BRI2 (mBRI2) product [13,14]. In FBD patients, a point mutation at the stop codon of BRI2 results in a read-through of the 3'-untranslated region and the synthesis of a BRI2 molecule containing 11 extra amino acids at the COOH terminus. Cleavage by convertases generates a normal mBRI2 plus a longer peptide, the ABri peptide. FDD is caused by a 10-nucleotide duplication before the stop codon of the BRI2 gene, which leads to the synthesis of a longer (277 amino acids) mutant protein [7,15]. Convertase-mediated processing of the Danish mutant protein generates a longer C-terminal fragment, called ADan, and a normal mBRI2 polypeptide. Both ABri and ADan are deposited as amyloid fibrils. Of note, ADan deposits together with APP-derived Amyloid (Aβ42) peptides forming widespread amyloid angiopathy in the small blood vessels and capillaries of the cerebrum, choroid plexus, cerebellum, spinal cord, and retina [15]. Overall, FBD and FDD patients present cognitive dysfunctions and neuropathology including neurodegeneration, amyloid, and neurofibrillary tangles [7,[15][16][17], which are similar to those of Alzheimer's patients. Knock-in mouse models of FDD and FBD (FDD KI and FBD KI mice) showed that the mutant BRI2 proteins are mainly targeted for degradation, leading to a loss of mBRI2 function and, consequently, increased APP processing. Of note, loss of mBRI2 and increased APP processing was also detected in brain lysates from FDD and FBD patients. These alterations in APP processing, and not amyloid lesions, mediate memory and synaptic plasticity deficits caused by BRI2/ITM2B mutations. In fact, synaptic and memory deficits of FDD KI mice were reduced by inhibition of β-cleavage of APP, which generates the fragments β-CTF and sAPPβ [18], while they were worsened by inhibition of γ-secretase, which cleaves β-CTF into Aβ and AID/AICD [6,[19][20][21][22][23][24][25][26][27] .
Pharmacological evidence suggests a synaptic toxicity of β-CTF. Here, we have tested this hypothesis genetically. If increases in β-CTF prompt learning and memory deficits, Aph1BC -/- mice may show deficits similar to those observed in FDD KI mice; additionally, deletion of Aph1BC may worsen the defects of FDD KI mice. On the contrary, if Danish mice develop learning and memory defects due to over-production of Aβ, the Aph1BC -/- mutation will ameliorate learning and memory deficits of FDD KI mice.
Young mice carrying the FDD mutation and deletion of Aph1B/C show mild learning and memory deficits
Mice were first tested at four months of age for anxiety-like behavior on the elevated zero maze. We analyzed the percentage of time spent in the open areas of the elevated zero maze during the 5-min testing period. While FDD KI /Aph1BC -/- mice spent more time in the open areas on average than mice of the other genotypes, one-way ANOVA revealed no significant effect of genotype, F(3, 60) = 1.94, p = 0.1331. Twelve animals (3 WT, 2 FDD KI , 4 FDD KI /Aph1BC -/-, and 3 Aph1BC -/-) fell off the open areas of the maze during testing and were excluded from the data analysis. In addition, two mice (1 FDD KI and 1 FDD KI /Aph1BC -/-) were excluded due to technical problems with video tracking encountered during testing (Figure 1).
Next, mice were assayed for general locomotor activity levels and anxiety-like behavior in the open field. Four mice (1 FDD KI and 2 FDD KI /Aph1BC -/-) were dropped from the statistical analysis due to persistent tracking errors caused by their light-colored fur. Since the video tracking system was not able to track these mice consistently, they were not included in the subsequent experiments. In addition, one FDD KI mouse was excluded from the analysis due to a one-time tracking error during testing. Analysis of the mean distance traveled during the 10-min testing period by two-way ANOVA found a significant main effect for day, F(2, 138) = 82.23, p < 0.0001, indicating habituation to the box over the threeday testing period, and a significant main effect for genotype, F(3, 69) = 3.46, p < 0.05, but no significant interaction between genotype and day, F(6, 138) = 1.15, p = 0.3387. Post-hoc comparisons (Dunnett's) showed that the mean distance traveled by FDD KI animals were significantly greater than that traveled by WT mice on the first (p<0.01) and second (p<0.05) days ( Figure 2A). Analysis of the time spent traveling at speed greater than 50 mm/s yielded significant main effects for day, F(2, 138) = 136.2, p < 0.0001, and for genotype, F(3, 69) = 3.97, p < 0.05, but no significant day × genotype interaction, F(6, 138) = 0.95, p = 0.4583, and showed that FDD KI mice spent more time traveling at speed greater than 50 mm/s than WT mice on all three days (p<0.001, Day 1; p<0.05, Days 2 and 3, Dunnett's) ( Figure 2B). Analysis of the mean time spent in the center of the open field showed a significant main effect for day, F(2, 138) = 7.09, p < 0.01, and a significant day × genotype interaction, F(6, 138) = 3.41, p < 0.01, while the main effect for genotype was close to significance, F(3, 69) = 2.47, p = 0.0692 ( Figure 2C). FDD KI mice spent more time in the arena center than WT mice on the first two days (p<0.05, Dunnett's). Analysis of the number of entries into the arena center showed significant main effects for day, F(2, 138) = 37.67, p < 0.0001, and genotype, F(3, 69) = 4.20, p < 0.01, and a near significant interaction between genotype and day, F(6, 138) = 2.04, p = 0.0650. Post-hoc comparisons (Dunnett's) revealed that FDD KI mice entered the arena center significantly more than did WT mice on the first (p<0.001) and second (p<0.05) days ( Figure 2D). Overall these data indicate that FDD KI mice were generally more active and less anxious than WT, Aph1BC -/and FDD KI / Aph1BC -/mice, while no differences were detected between WT, Aph1BC -/and FDD KI /Aph1BC -/animals.
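The comparisons above follow a common pattern: an omnibus test across genotypes followed by Dunnett's comparisons of each mutant group against the WT control. The snippet below sketches that pattern per day on simulated data; the group sizes, means and effect sizes are invented, it does not model the repeated-measures structure of the real design, and scipy.stats.dunnett requires SciPy 1.11 or later.

```python
# Sketch (simulated data): per-day one-way ANOVA with Dunnett's comparisons against WT.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mutants = ["FDDKI", "FDDKI/Aph1BC-/-", "Aph1BC-/-"]

for day, fdd_shift in enumerate([8.0, 5.0, 1.0], start=1):
    wt = rng.normal(30, 5, 18)                                       # WT distance travelled (m)
    groups = [rng.normal(30 + (fdd_shift if g == "FDDKI" else 0.0), 5, 18) for g in mutants]
    f_stat, p_omnibus = stats.f_oneway(wt, *groups)
    dunnett = stats.dunnett(*groups, control=wt)                     # each mutant vs. WT
    pairwise = ", ".join(f"{g}: p = {p:.3f}" for g, p in zip(mutants, dunnett.pvalue))
    print(f"Day {day}: ANOVA p = {p_omnibus:.3f} | {pairwise}")
```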
Following the open field test, mice were tested in the MWM for spatial reference memory. One WT and one FDD KI mouse were dropped from the experiments permanently due to severe bite wounds and persistent tracking errors caused by its light fur color, respectively. In addition, one FDD KI /Aph1BC -/mouse died before the task was completed. As shown in Figures 3A and 3B, the visible platform task conducted prior to the reference memory task revealed no significant differences among the four genotype groups in path length traveled, F(3, 66) = 1.43, p = 0.2427, or swim speed, F(3, 66) = 0.60, p = 0.6177, indicating that none of the mutant mice had any visual or motor deficits relative to WT control at this age. Figure 3C depicts the mean path length traveled by animals during the acquisition phase of the hidden platform task. Two-way ANOVA revealed a significant main effect for day on path length, F(5, 330) = 68.20, p < 0.0001, indicating animals' acquisition of reference memory for the platform location. There was also a significant interaction between day and genotype, F(15, 330) = 2.18, p < 0.01, while no significant main effect for genotype was found, F(3, 66) = 2.49, p = 0.0676. Tukey's comparisons revealed some differences among the genotypes on the first day, with Aph1BC -/mice traveling a significantly larger distance than WT mice (p<0.05) and FDD KI mice (p<0.001), and with FDD KI /Aph1BC -/mice than FDD KI mice (p<0.01). On the probe trial conducted two days after the last acquisition session, the analysis of the percentage of time spent in the four quadrants revealed a significant main effect for quadrant, F(3, 198) = 50.71, p < 0.0001, but no significant main effect for genotype, F(3, 66) = 0.55, p = 0.6469, or significant quadrant × genotype interaction, F(9, 198) = 1.15, p = 0.3293 ( Figure 3D). While the percentage of time spent in the target quadrant is the most popular measure of probe trial performance [54], counting the number of times the animal crosses a small area surrounding the former platform position (counter crossings) provides more information on the spatial accuracy with which the exact location of the platform has been encoded [55]. One-way ANOVA did not reveal a significant effect of genotype on the number of counter crossings in the target quadrant, F(3, 66) = 2.00, p = 0.1233 ( Figure 3E). However, a separate unpaired t-test showed a significant difference between WT and FDD KI / Aph1BC -/mice (p = 0.0177, Figure 3F). There were no significant differences in the average proximity to the original platform location among the genotype groups, F(3, 66) = 0.75, p = 0.5282 ( Figure 3G). Thus, at 4 months of age, when present together the FDD mutation and the Aph1BC deficiency mildly interfered with spatial longterm memory by compromising its accuracy.
In the reversal learning task, in which the location of the hidden platform was moved to the quadrant opposite to the original target quadrant, all the genotypes performed the task in a similar manner during the acquisition phase ( Figure 3H), with ANOVA showing a significant main effect for day, F(5, 330) = 102.5, p < 0.0001, but no significant main effect for genotype, F(3, 66) = 0.70, p = 0.5577, or day × genotype interaction, F(15, 330) = 0.71, p = 0.7770. On the probe trial given two days after the last reversal learning session, no differences were found among the genotypes in the percentage of time spent in the quadrants ( Figure 3I The same cohorts of mice were again assessed for spatial reference memory in the Morris water maze at 7-8 months of age. To determine whether genetic manipulations affect learning and memory in an aging dependent manner, it is customary to use different cohorts of animal to perform a task (such as the Morris water maze) at different ages. In this manner, the results are not influenced by learning and/or habituation to the tasks that may occur when the same cohort of mice are subjected to the same task more than once. However, a longitudinal analysis is ethically and economically preferable since it reduces the number of experimental animals. In addition, it eliminates possible confounding effects due to genetic variability between different cohorts of mice. Finally, testing if mice forget, during aging, tasks learned as young adults, mirrors what happens in AD patients. In this new Morris water maze test, the number of daily trials was reduced to three to alter the difficulty of the task. One WT mouse was dropped since it had become too weak to swim. The visible platform task revealed no significant differences among the genotypes in path length traveled, F(3, 65) = 1.36, p = 0.2638 ( Figure 4A), or swim speed, F(3, 65) = 0.27, p = 0.8488 ( Figure 4B). In the reference memory task, as shown in Figure 4C, no significant differences among the genotypes were found during acquisition, with two-way ANOVA showing a significant main effect for day, F(4, 260) = 12.41, p < 0.0001, but no significant main effect for genotype, F(3, 65) = 0.81, p = 0.4932, or day × genotype interaction, F(12, 260) = 0.95, p = 0.4957. The probe trial conducted two days after the last acquisition session did not reveal any differences among the genotypes in the percentage of time spent in the four quadrants ( Figure 4D Figure 4F). Thus, no significant differences were found among the genotypes in the acquisition or retention of spatial reference memory at 7-8 months of age, and the small deficits in counter crossing observed in 4 month-old FDD KI /Aph1BC -/mice was not seen again at 7-8 months of age, possibly due to learning/habituation to the task.
Following the completion of the probe trial at 7-8 months, mice were subjected to a working memory task in the eight-arm radial arm water maze. Mice were initially given four consecutive daily acquisition trials with a 15 seconds inter-trial interval. However, it became apparent after six days of training with this procedure that most mice lacked the physical strength required for maintaining their performance in the maze during the four consecutive acquisition trials, and that their performance was not improving after six days (data not shown). Accordingly, the procedure was modified such that mice would be given a 6-min inter-trial interval in the holding cage, and that three, instead of four, acquisition trials would be given before the retention trial. After a break on the seventh day, mice were tested for five more days with this new procedure and the results pertaining to these last five days are shown in Figure 5. Repeated measures oneway ANOVA (RM one-way ANOVA) found a significant main effect for trial in WT [F (2.065, 33.04) = 18.37, p < 0.0001)], FDD KI [F (2.320, 37.12) = 19.88, p < 0.0001)] and Aph1BC -/-[F (2.312, 39.31) = 22.14, p < 0.0001)], but not in FDD KI /Aph1BC -/mice [F (2.033, 28.46) = 3.218, p=0.0542)]. A post hoc multiple comparison of the mean of each trial to the mean of every other trial (Tukey's multiple comparisons test) indicated that performance by WT and FDD KI mice improved significantly between Trials 1 and 2 (p < 0.05 for WT and p < 0.01 for FDD KI ), Trials 1 and 3 (p < 0.001 for WT and p < 0.01 for FDD KI ) and Trials 1 and R (p < 0.0001 for both genotypes). The performance of Aph1BC -/mice improved significantly between Trials 1 and 3, 1 and R (p < 0.0001) as well as 2 and 3 (p < 0.05),and 2 and R (p < 0.01). On the other hand, this post hoc multiple comparison analysis was not performed for the FDD KI /Aph1BC -/mice since the RM one-way ANOVA analysis did not find a significant main effect for trial in these mice. Overall, these observations indicated that the magnitude of improvement in performance was the smallest in FDD KI /Aph1BC -/mice, compared to WT, FDD KI , or Aph1BC -/mice. This difference may reflect working memory impairment in FDD KI /Aph1BC -/mice, or might have resulted from the fact that FDD KI /Aph1BC -/mice actually made fewer errors than mice of the other genotypes on the first trial, thereby making the gain smaller.
The FDD and Aph1BC -/mutations induce mild deficits in spatial long-term memory in middle age
At 12 months of age, another reference memory task was given. For this task, the number of daily trials was further reduced to two per day, and the length of the probe trial was reduced to 30 s. In addition, the visible platform task was given after the completion of the probe trials. One WT mouse that displayed extensive and persistent thigmotaxis during the reference memory task was excluded from the data analysis. Also, one Aph1BC -/mouse was eliminated because it had developed rectal prolapse. Again, the visible platform task did not find any differences among the genotypes in path length traveled, F(3, 63) = 0.81, p = 0.4911 ( Figure 6A), or swim speed, F(3, 63) = 0.81, p = 0.4908 ( Figure 6B). In the reference memory task, mice of all the genotype learned the location of the hidden platform in a similar manner ( Figure 6C). ANOVA found a significant main effect for day, F(7, 441) = 11.84, p < 0.0001, but no significant genotype main effect, F(3, 63) = 0.10, p = 0.9576, or day × genotype interaction, F(21, 441) = 0.75, p = 0.7773. On the probe trial given two days after the last acquisition session, no significant differences were found among the genotypes in the analysis of the percentage of time spent in the quadrants ( Figure 6D), with a significant main effect for quadrant, F(3, 189) = 35.30, p < 0.0001, but no significant genotype main effect, F(3, 63) = 1.21, p = 0.3123, or quadrant × genotype interaction, F(9, 189) = 0.65, p = 0.7571. However, the analysis of the number of counter crossings in the target quadrant revealed a significant effect of genotype, F(3, 63) = 5.80, p < 0.01 ( Figure 6E). Post-hoc comparisons (uncorrected Fisher's LSD) revealed that WT mice crossed the counter in the target quadrant significantly more often than FDD KI mice (p < 0.001), Aph1BC -/-(p < 0.01) and FDD KI /Aph1BC -/mice (p < 0.05). There was no significant effect of genotype on the average proximity to the former platform location, F(3, Figure 6F). Thus, at 12 months of age, the FDD mutation and the Aph1BC deficiency both mildly interfered with spatial long-term memory by compromising its accuracy.
Mice were tested for spatial working memory in the Morris water maze at 15 months of age. As shown in Figures 7A and 7B, the visible platform task given at this age found no significant effect of genotype on path length traveled, F(3, 64) = 0.37, p = 0.7743, or swim speed, F(3, 64) = 0.66, p = 0.5797. In the working memory task, in which the location of the hidden platform was changed daily, we measured the path length to target (Figure 7C). RM one-way ANOVA found a significant main effect for trial in WT mice. A post hoc multiple comparison of the mean of each trial to the mean of every other trial (Tukey's multiple comparisons test) indicated that only the performance of WT mice improved significantly between Trials 1 and 3 (p < 0.05). These results suggest that, at 15 months of age, the FDD mutation and the Aph1BC deficiency both mildly interfered with working memory.
Learning and memory deficits are caused by the Aph1BC deficiency and the FDD mutation at old age (18-19 months)
At 18-19 months of age, mice were given a series of tests. The first test evaluated mice in the two-trial Y-maze task for spatial recognition memory. One FDD KI /Aph1BC -/- mouse had died before reaching this age. Figure 8A depicts the mean number of arm entries during the 5-min test trial, which is an index of animals' total activity levels. ANOVA found a small but significant effect of genotype, F(3, 63) = 2.77, p = 0.0490, and Fisher's LSD comparison test showed that FDD KI mice were more mobile than WT and FDD KI /Aph1BC -/- mice (p < 0.05). Fisher's test was used here because, despite the significant overall genotype effect, no significant differences among the genotypes were detected by corrected comparison tests. Given the significant genotype effect on the number of total arm entries, the percentage of entries into each arm, instead of raw numbers of arm entries, was used to analyze animals' preference for the novel arm vs. the known arm. As shown in Figure 8B, the analysis of the percentage of entries into the novel and known arms found a significant main effect for arm, F(1, 63) = 43.34, p < 0.0001, but no significant genotype main effect, F(3, 63) = 0.78, p = 0.5076, or arm × genotype interaction, F(3, 63) = 1.96, p = 0.1295. Sidak's multiple comparisons between the arms revealed that the novel arm was entered significantly more than the known arm by WT mice and FDD KI mice, but not by FDD KI /Aph1BC -/- or Aph1BC -/- mice, and that the level of statistical significance was much higher for WT mice (p < 0.0001) than for FDD KI mice (p < 0.01). As can be seen in Figure 8C, the analysis of the time spent in the novel and known arms found a significant main effect for arm, F(1, 63) = 29.63, p < 0.0001, as well as a significant interaction between arm and genotype, F(3, 63) = 2.82, p < 0.05, while showing no significant main effect for genotype, F(3, 63) = 0.43, p = 0.7345. Post-hoc comparisons across the genotypes (Dunnett's) revealed that WT mice spent significantly more time in the novel arm than did FDD KI /Aph1BC -/- mice (p < 0.05). In addition, comparisons between the arms (Sidak's) showed that WT mice spent significantly more time in the novel arm than in the known arm (p < 0.0001). FDD KI mice also spent significantly more time in the novel arm than in the known arm, but to a much smaller degree (p < 0.05) than WT mice. By contrast, the difference in time spent between the novel and known arms was not statistically significant for FDD KI /Aph1BC -/- or Aph1BC -/- mice. In sum, the results of the two-trial Y-maze task demonstrated that WT mice and, to a lesser extent, FDD KI mice distinguished between the novel and familiar arms better than FDD KI /Aph1BC -/- or Aph1BC -/- mice after a 1-h retention interval, indicating that the Aph1BC deficiency, and to a lesser extent the FDD mutation, leads to impairment of short-term spatial recognition memory.
Following the open field test, the spontaneous alternation test was conducted in the Y-maze to assess animals' spatial working memory. Analysis of the total number of arm entries during the 5-min testing period showed no significant effect of genotype, F(3, 62) = 0.36, p = 0.7811 (Figure 10A). Since the minimum number of arm entries needed for a complete alternation is three, six mice that had fewer than three arm entries in total (2 WT, 1 FDD KI , 1 FDD KI /Aph1BC -/-, and 2 Aph1BC -/-) were excluded from the alternation analysis.
(Figure 8 legend: A, number of arm entries during the 5-min test trial; B, percentage of entries into the novel (N) and known (K) arms; C, time spent in the novel and known arms; *p < 0.05, **p < 0.01, ****p < 0.0001.)
Next, we used the elevated zero maze to test for anxiety-like behavior. As shown in Figure 11, while Aph1BC -/- mice spent more time in open areas than mice of the other genotypes on average, the genotype effect did not reach statistical significance, F(3, 60) = 1.396, p = 0.2527. Three animals (2 FDD KI /Aph1BC -/- and 1 Aph1BC -/-) fell off the open areas of the maze during testing and were excluded from the data analysis.
Mice were then evaluated for spatial working memory in the 6-arm radial arm water maze. RM one-way ANOVA found a significant main effect for trial in each of the four genotypes. A post hoc multiple comparison of the mean of each trial to the mean of every other trial (Tukey's multiple comparisons test) indicated that performance by WT mice improved significantly between Trials 1 and 2 (p < 0.01), 1 and 3 (p < 0.001) and 1 and R (p < 0.0001), as well as between Trials 2 and R (p < 0.01). FDD KI mice improved between Trials 1 and 3 and 1 and R, as well as between Trials 2 and 3 and 2 and R (p < 0.01). As for FDD KI /Aph1BC -/- and Aph1BC -/- mice, their performance improved significantly between Trials 1 and R (p < 0.05). However, it is worth noting that Aph1BC -/- mice actually made fewer errors than mice of the other genotypes on the first trial, thereby making the gain smaller. The performance of FDD KI /Aph1BC -/- mice improved significantly between Trials 1 and 2 and 1 and R, but the degree of significance was much smaller (p < 0.05) than for WT and FDD KI mice.
We also compared the number of errors made by mice of the four genotypes at Trials 2, 3 and R (Figure 12B). Ordinary one-way ANOVA found a significant effect of genotype, and a post hoc multiple comparison of the mean of each genotype to the mean of every other genotype (Tukey's multiple comparisons test) indicated that WT mice committed significantly fewer errors than FDD KI /Aph1BC -/- mice at both Trial 3 (p < 0.05) and Trial R (p < 0.01). The differences between WT vs. FDD KI , WT vs. Aph1BC -/-, FDD KI /Aph1BC -/- vs. FDD KI and FDD KI /Aph1BC -/- vs. Aph1BC -/- were not statistically significant. Altogether, these observations indicated that the degree of improvement in performance across trials during acquisition and retention was greatest for WT mice, followed by FDD KI , Aph1BC -/- and FDD KI /Aph1BC -/- mice, in that order. These results further indicate that the FDD mutation and deletion of Aph1BC induce spatial working memory deficits. These data are also in accordance with the results of a previous study showing that the Aph1BC -/- mutation induces spatial working memory deficits in mice of the F2 generation of the C57BL/6J-129/Ola hybrids [53].
After the radial arm water maze test, mice were assessed for possible visual or motor deficits in the visible platform task. Ten animals (2 WT, 1 FDD KI , 5 FDD KI /Aph1BC -/-, and 2 Aph1BC -/-) were mistakenly sacrificed prematurely after the radial arm water maze and before the visible platform task. As shown in Figures 12C and 12D, the visible platform task did not reveal any visual or motor deficits at this age. There was no significant effect of genotype on path length traveled, F(3, 53) = 1.13, p = 0.3467, or swim speed, F(3, 53) = 2.19, p = 0.0999.
Finally, mice were tested for contextual and cued fear memory using the fear conditioning paradigm. As indicated in Figures 13A and 13B, in the test for contextual fear memory conducted 24 h after conditioning, WT mice exhibited significantly more freezing behavior than mice in the other genotype groups. ANOVA revealed a significant effect of genotype on the mean percentage of freezing during the last 3 min of the test session, F(3, 53) = 5.45, p < 0.01, and post-hoc comparisons (Tukey's) indicated that WT mice froze significantly more than FDD KI (p < 0.01), Aph1BC -/- (p < 0.01), and FDD KI /Aph1BC -/- (p < 0.05) mice (Figure 13A). As can be seen in Figure 13B, the analysis of the time course of the percentage of freezing in 1-min bins during the contextual test also showed a significant main effect for genotype, F(3, 53) = 4.75, p < 0.01, in addition to a significant main effect for time, F(4, 212) = 20.55, p < 0.0001, while finding no significant time × genotype interaction, F(12, 212) = 0.89, p = 0.5557. Tukey's multiple comparison test further indicated that WT mice froze significantly more than FDD KI mice in the last three minutes (p < 0.01 for the fourth min; p < 0.05 for the third and fifth min), more than Aph1BC -/- mice in the first minute (p < 0.05) and the last three minutes (p < 0.01 for the third and fourth min; p < 0.05 for the fifth min), and more than FDD KI /Aph1BC -/- mice in the third minute (p < 0.05). In the test for cued fear memory conducted 24 h after the contextual test, no significant effect of genotype was found on the percentage of freezing during the presentation of the tone CS, F(3, 53) = 2.27, p = 0.0910 (Figure 13C). However, as can be seen in Figure 13D, which shows the time course of freezing during tone presentation, while WT mice remained frozen throughout the 3-min presentation of the tone, FDD KI mice, which were initially as frozen as WT mice, became more mobile towards the end of this period, and FDD KI /Aph1BC -/- and Aph1BC -/- mice froze less than WT mice for the entire period. Yet, the analysis of the time course showed a significant main effect for time, F(2, 106) = 6.97, p < 0.01, but no significant main effect for genotype, F(3, 53) = 2.27, p = 0.0910, or time × genotype interaction, F(6, 106) = 2.09, p = 0.0603. Nevertheless, the time × genotype interaction approached significance, and when one-way ANOVA was performed on the percentage of freezing during the first, second, and third 1-min time periods separately, while no significant genotype effect was detected in the first minute, F(3, 53) = 2.47, p = 0.0720, or the second minute, F(3, 53) = 1.34, p = 0.2712, there was a significant genotype effect in the third minute, F(3, 53) = 3.25, p < 0.05. Dunnett's comparison test further revealed a significant difference between WT and FDD KI /Aph1BC -/- mice (p < 0.05), as well as between WT and Aph1BC -/- mice (p < 0.05), in the last minute of tone presentation. Lastly, a day after the completion of the test for cued fear memory, all the experimental mice were evaluated for shock sensitivity. As shown in Figure 13E, mice of all the genotypes reacted to electric shock at the four intensity levels in a similar manner, with ANOVA showing a significant main effect for shock level, F(3, 159) = 97.70, p < 0.0001, but no significant main effect for genotype, F(3, 53) = 1.60, p = 0.2010, or shock level × genotype interaction, F(9, 159) = 1.45, p = 0.1709.
While the amygdala is required and sufficient for fear conditioning to discrete sensory cues such as a tone, rapid formation of conditioned fear responses to environmental context is additionally mediated by the hippocampus [56][57][58][59]. Thus, our results indicate that both the FDD and Aph1BC -/- mutations interfere with mnemonic information processing at the level of the hippocampus for the formation of long-term contextual fear memory at this age, while the Aph1BC -/- mutation additionally has adverse effects on fear memory formation in the amygdala.
DISCUSSION
The clinical symptoms of AD include progressive loss of memory, thinking and language skills, as well as other behavioral changes. Short-term memory is the first to fail in AD patients. These clinical symptoms are accompanied by neuronal degeneration. Although it is widely believed that the disease is precipitated by the insurgence of brain lesions, such as Aβ amyloid plaques and neurofibrillary tau tangles, whether these alterations of tau and Aβ metabolism cause the debilitating clinical symptoms of AD and FDD is still unclear.
Because of the uncertainty concerning the pathogenic biochemical mechanisms of AD and related dementias, and considering that to improve the quality of life of AD patients (and caregivers) we need to either reverse or slow down the progression of the clinical symptomatology, we focused our analysis on learning and memory in our animal model of disease. In this study, the same cohort of WT, FDD KI , FDD KI /Aph1BC -/-, and Aph1BC -/- mice was assessed longitudinally for possible deficits in learning and memory from ~4 to ~19 months of age. FDD KI /Aph1BC -/- mice presented mildly compromised accuracy of spatial long-term memory and impaired short-term working memory at 4 and 8 months of age, respectively. Deficits in spatial and working memory were, at later ages, also observed in single mutant FDD KI and Aph1BC -/- mice. Analysis of older animals showed that spatial recognition memory and long-term contextual fear memory were impaired in FDD KI , FDD KI /Aph1BC -/-, and Aph1BC -/- mice. Overall, this study shows that: (1) the Aph1BC -/- mutation does not rescue memory deficits in FDD KI mice, as would have been expected if Aβ played a pathogenic role in FDD KI mice; (2) in contrast, the FDD mutation and the Aph1BC -/- deletion cause similar behavioral deficits in spatial memory and appear to have a small additive negative effect, at least on spatial long-term memory and short-term working memory, that is more pronounced at younger ages. Performing the two-trial Y-maze and the fear conditioning tasks in younger mice could unveil whether the FDD mutation and the Aph1BC deletion also have an additive negative effect on spatial recognition memory and long-term contextual fear memory. These observations are consistent with a pathological role of β-CTF, which is augmented in FDD KI mice due to increased production and in Aph1BC -/- mice due to reduced turnover.
We have previously shown that inhibition of β-processing of APP rescues memory and synaptic impairments of FDD KI mice acutely and transiently, suggesting that increased β-cleavage of APP, perhaps in the synaptic cleft, during synaptic events leading to LTP and memory acquisition, leads to memory/synaptic deficits in FDD KI mice [25,26]. This evidence, together with the data showing memory deficits in Aph1BC -/- mice, suggests that either increasing the rate of production (as in FDD KI mice) or decreasing the rate of clearance (as in Aph1BC -/- mice) of β-CTF during synaptic transmission might lead to cognitive impairments. Future experiments will have to test whether these hypotheses are correct and whether the additive adverse effects of the FDD KI and Aph1BC -/- mutations on spatial long-term memory in young mice are due to a synergistic effect on β-CTF levels transiently produced during synaptic transmission.
FDD KI and Aph1BC -/- mice show a few distinct phenotypes. In fear conditioning tests, mice carrying the Aph1BC deletion (FDD KI /Aph1BC -/- and Aph1BC -/- mice) showed a mild impairment of cued fear memory, a task completely dependent on the functional integrity of the amygdala, as compared to FDD KI and WT animals. This result is not surprising since several differences exist between FDD KI and Aph1BC -/- mice. First, the Aph1B/C deficiency causes a reduced clearance of β-CTF that leads to accumulation of β-CTF and a concomitant reduction in the β-CTF metabolites AID/AICD and Aβ. In contrast, in FDD KI mice the loss of mBRI2 leads to an increased production of all these APP metabolites. Second, in Aph1BC -/- mice, processing of another γ-secretase substrate, Neuregulin-1 [53], is reduced. Thus, behavioral deficits caused by the deletion of Aph1B and Aph1C, but not by the FDD mutation, could be attributed to the reduction in AID/Aβ and/or the reduced processing of Neuregulin-1.
Subjects
All behavioral experiments were conducted using male littermates of the F2 generation of C57BL/6J-129 hybrid mice as subjects. Mice were generated and maintained at the animal facility of the Albert Einstein College of Medicine. Four genotypes, Aph1BC -/-, FDD KI , FDD KI /Aph1BC -/-, and wild type (WT), of F2 mice (n = 11-19 per genotype) were evaluated for behavior. Aph1BC -/- mice have been previously described [51]. FDD KI mice carried one mutant and one wild type BRI2/ITM2b allele [19]. Upon weaning, all mice were implanted with electronic chips (PharmaSeq, Monmouth Junction, NJ) subcutaneously on the tail for identification purposes, and their identity was regularly checked during testing periods. Animals were group-housed in plastic cages with ad libitum access to food and water in a temperature- and humidity-controlled animal care facility with a 12-h light/12-h dark cycle. All experimental procedures were in accordance with the National Institutes of Health guidelines and approved by the Institutional Animal Care and Use Committee (IACUC) at the Albert Einstein College of Medicine under animal protocol number 20130509.
Behavioral experimental procedures
All mice were extensively handled prior to the start of behavioral testing. All behavioral testing was conducted during the light cycle. On each testing day, animals were transported to a behavioral testing suite in their home cages and allowed to acclimate for at least 30 min prior to the start of testing. The experimenter was not blind to the genotypes of the animals in tests conducted at 4, 7-8, 12 and 15 months of age but was made blind in tests conducted at 18-19 months. All measurements were taken automatically by video tracking software.
Elevated zero maze
Mice were assessed for anxiety-like behavior on the elevated zero maze initially at 4 months of age and again at 18-19 months. The zero maze (Stoelting, Wood Dale, IL) consisted of an annular platform (inner diameter 50 cm, width 5 cm) elevated to 50 cm above the ground level, divided equally into four quadrants. Two opposite quadrants were enclosed by walls (15 cm high), and the remaining two quadrants were open.
Open field
The open field test was conducted to assess animals' general locomotor activity, exploratory behavior, and anxiety-like behavior at 4 and 18-19 months of age. The open field apparatus (Stoelting) consisted of a square open field (40 cm × 40 cm) surrounded by opaque walls (35 cm high) and was dimly lit with a single light bulb directly above the apparatus, which illuminated the arena at approximately 5 lx in the center and 9 lx in corners. Each mouse was placed in the center of the open field box and allowed to explore the box freely for 10 min. The total distance traveled and the number of entries into, and the time spent in, the center of the arena (20 cm × 20 cm) were recorded with the ANY-maze video tracking system. This was repeated for three consecutive days to assess how animals would habituate to the increasingly familiar environment.
Morris water maze
Mice were tested in the Morris water maze for spatial reference memory at 4, 7-8, and 12 months and for spatial working memory at 15 months of age. The water maze consisted of a circular tank (120 cm in diameter) filled with water made opaque with nontoxic white paint and maintained slightly above the room temperature (25 ± 2 °C). All water maze tasks involved the animal finding a circular platform (10 cm in diameter) submerged in the water in order to escape from the water. On each trial, the mouse was released into the water facing the wall of the pool and allowed to swim freely in the pool to find the platform for the maximum of 60 s. Once the animal located the platform, it was allowed to stay on it for 15 s. Mice that did not locate the platform within 60 s were guided to the platform and allowed to stay on it for 15 s. After 15 s on the platform, the animals were removed from the pool, gently dried with paper towels, and placed in a single holding cage under a heat lamp until the next trial. A video tracking system (HVS 2020 and 2014; HVS Image, Mountain View, CA) was used to measure parameters such as the distance traveled, escape latency, swimming speed, the percentage of time spent in the quadrants, the number of counter crossings, and the average proximity to the platform location. The experimenter's position was maintained at the southeast (SE) corner of the room far from the tank for all the water maze tasks conducted at different ages. The experimenter was visible to the animal but remained stationary.
Visible platform task
At each age that mice were tested in the water maze, either before (4 and 7-8 months) or after (12, 15 and 18-19 months) the memory task, a visible platform task, in which the platform was made visible by attaching a small flag (7 cm × 5 cm) to it, was conducted to examine whether mice had any visual, motor, or motivational deficits at that particular age. At 4, 7-8, 12, and 15 months of age, two or three daily sessions, with three or four trials a day, were given, in which both the platform location and the starting position were changed in a semi-random manner between trials to ensure that the animal was using the proximal cue (i.e., flag) to locate the platform. At 18-19 months, two three-trial sessions were given in a single day. The distance traveled to the platform (path length) and swimming speed were measured by the HVS video tracking system. The data collected from the last session were used for data analysis.
Reference memory task
The Morris water maze hidden platform task was performed to assess spatial reference memory at 4, 7-8, and 12 months of age. In this task, the platform was hidden 1 cm below the water level, and distal visual cues were placed on the walls surrounding the pool. The location of the hidden platform remained constant across the acquisition sessions, while the starting position was varied in a pseudo-random manner between trials within each session. The distance traveled by the mouse to reach the platform was recorded by the HVS video tracking system. At 4 months, mice received two daily sessions of three trials each with an inter-trial interval of 6-10 min and an inter-session interval of 3 h for six consecutive days. The platform was located at the center of the fourth quadrant between the center and the northwest (NW). Following a probe trial given two days after the last acquisition session of this initial reference memory task, mice received a reversal learning task for another six days, in which the location of the hidden platform was moved to the quadrant opposite to the original target quadrant (i.e., the first quadrant). At 7-8 months, the number of daily trials was reduced to three, with an intertrial interval of 6-10 min, and the task was run for five consecutive days. The platform was placed at the center of the second quadrant between the center and the southeast (SE). Mice were tested again at 12 months and received two trials per day with an inter-trial interval of 6-10 min for eight consecutive days. The platform position was at the center of the first quadrant between the center and the northeast (NE). Two days after the last acquisition session, a single probe trial was given, during which the platform was removed from the pool, and each mouse was released from the quadrant opposite to the trained platform location and allowed to search the pool for 60 s (4 and 7-8 months) or 30 s (12 months). The time spent in the target quadrant, where the platform had been located prior to its removal, the number of crossings of a circular area encompassing the original platform location (counter: 2 × platform diameter), and the average proximity to the former platform location were measured to assess the animal's spatial reference memory for the location of the hidden platform.
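The probe-trial measures listed above (percent time in the target quadrant, counter crossings, and average proximity) were extracted automatically by the HVS tracking system. Purely as an illustration of how such measures can be derived from tracked coordinates, a minimal sketch is given below; the function name, the assumption that the pool centre is at the origin, and the quadrant convention are assumptions for this sketch, not details of the HVS software.

```python
import numpy as np

def probe_measures(xy, platform_xy, platform_diameter=10.0):
    """Probe-trial measures from tracked (x, y) positions in cm.

    xy: (n_samples, 2) positions sampled at a fixed rate.
    platform_xy: centre of the former platform location.
    """
    xy = np.asarray(xy, dtype=float)
    dists = np.linalg.norm(xy - np.asarray(platform_xy, dtype=float), axis=1)

    # Average proximity: mean distance to the former platform location.
    avg_proximity = dists.mean()

    # Counter crossings: entries into a circle whose diameter is twice the
    # platform diameter (radius = platform diameter), counted on
    # outside-to-inside transitions.
    inside = dists <= platform_diameter
    crossings = int(np.sum(inside[1:] & ~inside[:-1]) + (1 if inside[0] else 0))

    # Percent time in the target quadrant, assuming the pool centre is at
    # (0, 0) and the quadrant is defined by the signs of the platform coordinates.
    sx, sy = np.sign(np.asarray(platform_xy, dtype=float))
    in_quadrant = (np.sign(xy[:, 0]) == sx) & (np.sign(xy[:, 1]) == sy)
    pct_target_quadrant = 100.0 * in_quadrant.mean()

    return pct_target_quadrant, crossings, avg_proximity
```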
Working memory task
Mice were tested in the Morris water maze for working memory at 15 months of age. In the working memory task, the position of the hidden platform varied from day to day but remained in the same place throughout the trials of a given day. The starting position was pseudorandomly changed from trial to trial within a given day. On each day, four trials (1 cued trial + 3 test trials) were given. On the cued trial, each mouse was placed on the platform for 20 s, after which it was removed from the platform to a single holding cage, where it remained for 5 min. After 5 min, three test trials were given, with an inter-trial interval of 6-8 min. On each test trial, the animal was allowed to swim freely to find the hidden platform for a maximum of 60 s. Any mouse not locating the platform within 60 s was guided to the platform and allowed to stay on it for 15 s. Mice were tested for 10 days, with a one-day break after the seventh day. The distance traveled for each mouse was averaged over the last three days of testing and used for statistical analysis.
Radial arm water maze
Mice were tested in the radial arm water maze for spatial working memory at 7-8 and 18-19 months of age. An eight-arm radial arm water maze (each arm 8 cm in width and 38 cm in length) was used at 7-8 months. A six-arm radial arm water maze (each arm 20 cm in width and 30 cm in length) was used at 18-19 months. The radial maze was placed into the same water tank as used for the Morris water maze, filled with opaque water (25 ± 2 °C). The height of the walls of the maze was 8 cm above the water level. Distal visual cues were placed on the walls of the testing room. A clear submerged platform (square, 8 cm × 8 cm for the 8-arm maze; circular, 10 cm in diameter for the 6-arm maze) was placed at the end of one of the arms. In the working memory task, the platform remained in the same arm for all trials on a given day, but its location was changed pseudo-randomly from day to day. On each trial (maximum time 60 s), the mouse was released from one of the non-goal arms and allowed to swim freely to locate the platform. The release arm was pseudo-randomly changed from trial to trial. A mouse was charged with one error each time it entered an arm other than the goal arm or did not enter any arm for 15 s. The trial continued for 60 s or until the mouse ascended the platform. If a mouse did not locate the platform within 60 s, it was guided to the platform. The mouse was removed after 15 seconds on the platform. After three acquisition trials (inter-trial interval 6-8 min), the mouse was placed in a single holding cage for 30 min, after which it was given a fourth retention trial. The error scores for each mouse were averaged over the last three days of testing and used for statistical analysis. Mice were tested until the mean number of errors over three days for WT mice reached the performance criterion (2.5 errors on Trial 3).
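As a rough illustration of the error-scoring rule just described (one error per entry into a non-goal arm and one error per 15-s period with no arm entry), the sketch below scores a single trial from a list of timed arm entries. The function and its arguments are hypothetical, and the accumulation of repeated no-entry errors is one possible reading of the rule.

```python
def score_trial_errors(entries, goal_arm, trial_end_s=60.0, no_entry_window_s=15.0):
    """Errors on one radial arm water maze trial.

    entries: ordered list of (time_s, arm) tuples, one per arm entry.
    goal_arm: label of the arm containing the platform.
    """
    errors = 0
    last_event = 0.0
    for t, arm in entries:
        # Errors for 15-s periods without any arm entry since the last event.
        errors += int((t - last_event) // no_entry_window_s)
        last_event = t
        if arm == goal_arm:
            return errors          # trial ends once the goal arm is reached
        errors += 1                # entry into a non-goal arm
    # No-entry errors between the last entry and the end of the trial.
    errors += int((trial_end_s - last_event) // no_entry_window_s)
    return errors

# Example: enters a wrong arm at 10 s, then the goal arm at 40 s -> 2 errors
# (one wrong-arm error plus one 15-s no-entry error between 10 s and 40 s).
print(score_trial_errors([(10.0, 'B'), (40.0, 'A')], goal_arm='A'))
```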
Y-maze
Mice were tested in the Y-maze for spatial recognition memory and spatial working memory at 18-19 months. The Y-maze consisted of three arms of equal length (35 cm) and width (5 cm), which were interconnected at 120° and enclosed by walls (10 cm high). The insides of the arms were identical, providing no intra-maze cues. The maze was placed under a bright fluorescent light and was surrounded by distal visual cues.
Two-trial test
The two-trial test was conducted to assess short-term spatial recognition memory at 18-19 months. During the first trial (training trial), one of the arms of the maze was blocked, and mice were placed into one of the remaining arms of the maze (start arm) and allowed to explore the two unblocked arms for 10 min. After a 1-hr inter-trial interval, the blocked arm was opened (novel arm), and mice were placed in the start arm and allowed to explore all three arms of the maze freely for 5 min (test trial). The number of entries into and the amount of time spent in each arm were registered by the ANY-maze video tracking system. The relative position of the novel vs. known arms (i.e., left or right) was counterbalanced within each genotype to reduce place preference effects. This test takes advantage of the innate tendency of mice to explore novel, unexplored areas (e.g., the previously blocked arm). Mice with intact recognition memory will prefer to explore the novel arm over the familiar arms, whereas mice with impaired spatial memory will enter all arms randomly.
Spontaneous alternation test
Ten to thirteen days after the two-trial test, the spontaneous alternation test was conducted in the Y-maze to assess spatial working memory. Mice were released into the center of the Y-maze with all three arms open and allowed to explore freely for 5 min. The number and the sequence of arm entries were recorded by a video tracking system (ANY-maze). The dependent variables were activity, defined as the number of arms entered, and percent alternation, which was calculated as the number of alternations (entries into three different arms consecutively) divided by the total possible alternations (i.e., the number of arms entered minus 2) and multiplied by 100. For efficient alternation, mice need to use working memory to maintain an ongoing record of the most recently visited arms; a mouse with impaired working memory cannot remember which arm it has just visited and accordingly shows decreased spontaneous alternation.
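The percent alternation formula above can be made concrete with a short sketch; the function name and input format are illustrative, not part of the ANY-maze output.

```python
def percent_alternation(arm_entries):
    """Spontaneous alternation score from an ordered sequence of arm entries.

    arm_entries: list of visited arms in order, e.g. ['A', 'B', 'C', 'A', 'C'].
    An alternation is any run of three consecutive entries into three
    different arms; possible alternations = total entries - 2.
    Returns None if fewer than three entries were made.
    """
    n = len(arm_entries)
    if n < 3:
        return None
    alternations = sum(
        len(set(arm_entries[i:i + 3])) == 3 for i in range(n - 2)
    )
    return 100.0 * alternations / (n - 2)

# Example: A-B-C-A-C gives three triplets, two of which alternate -> ~66.7%.
print(percent_alternation(['A', 'B', 'C', 'A', 'C']))
```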
Fear conditioning
Mice were tested for contextual and cued fear memory at 18-19 months. Fear conditioning was conducted in a mouse conditioning chamber (18 cm × 20 cm × 28 cm) with a metal grid floor, lit with a single house light and enclosed within a sound-attenuating cubicle (Coulbourn Instruments, Whitehall, PA). The floor grid was connected to a shocker (Coulbourn Instruments) for the delivery of an electric foot shock, which was to be used as an unconditioned stimulus (US). The chamber was also equipped with a speaker connected to an amplifier for the delivery of a pure tone (2.8 kHz, 85 dB), which served as a conditioned stimulus (CS) for cued fear conditioning. The same conditioning chamber was used for testing for contextual fear memory, while, in testing for cued fear memory, the chamber was altered with a rectangular partition placed at a diagonal, wall and floor covers with novel texture, and a novel (vanilla) scent. On the first day, mice were individually placed in the conditioning chamber, and the house light was immediately turned on. One hundred and twenty seconds later, animals were presented with a continuous tone for 30 s, at the end of which an electric shock (0.6 mA) was delivered through the floor grid for 2 s and co-terminated with the tone. Mice remained in the chamber for another 30s before being removed to the home cage. Approximately 24 hr after the training session, animals were tested for contextual fear memory. The mouse was placed in the same conditioning chamber as had been used for training and observed for freezing behavior in the absence of any shock or tone for 5 min. The last 3-min period constituted testing for contextual fear memory. Approximately 24 hr after the test for contextual fear memory, mice were tested for cued fear memory. The animal was placed in the modified chamber (novel environment) and observed for freezing behavior for 2 min. After 2 min, the animal was presented with the tone CS continuously for 3 min, during which time it was again observed for freezing behavior. The last 3-min period with the tone presentation constituted testing for cued fear memory. In both tests, each animal's movements were recorded and the percentage of freezing was calculated by FreezeFrame software (Coulbourn Instruments).
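Freezing was quantified by the commercial FreezeFrame software. Purely to illustrate the general approach such software takes (thresholding a per-frame motion index and requiring a minimum bout duration), a hedged sketch is shown below; the threshold, bout length, and frame rate are placeholders and do not reflect FreezeFrame's actual parameters or algorithm.

```python
import numpy as np

def percent_freezing(motion_index, threshold, min_bout_s=1.0, fps=4.0):
    """Percent time freezing from a per-frame motion index (illustrative)."""
    still = np.asarray(motion_index) < threshold   # candidate freezing frames
    min_len = int(round(min_bout_s * fps))         # minimum bout, in frames
    frozen = np.zeros_like(still)
    i, n = 0, len(still)
    while i < n:
        if still[i]:
            j = i
            while j < n and still[j]:
                j += 1
            if j - i >= min_len:                   # keep only long-enough bouts
                frozen[i:j] = True
            i = j
        else:
            i += 1
    return 100.0 * frozen.mean()

# Example with a synthetic motion trace: low motion in the second half.
trace = np.concatenate([np.full(40, 5.0), np.full(40, 0.2)])
print(percent_freezing(trace, threshold=1.0))      # -> 50.0
```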
At the completion of the test, mice were assessed for possible genotype effects on shock sensitivity. A sequence of single foot shocks was delivered to animals placed in the same chamber used for fear conditioning at four intensity levels (0.1, 0.2, 0.4, and 0.6 mA) in ascending order. There were two presentations at each shock intensity level, with a 20-s inter-stimulus interval. At each intensity level, the animal's behavior was scored on the following scale to determine the threshold for each behavioral response (0 = no response, 1 = ambulation, 2 = flinch, 3 = hop, 4 = run, and 5 = jump).
Statistical analysis
Statistical analysis of most data was performed by analysis of variance (ANOVA), with one between-subjects factor (genotype) and, when appropriate, a within-subjects factor (e.g., day). When significant effects were found, the data were further analyzed by post hoc comparison tests (Tukey's, Sidak's, Dunnett's, or Fisher's LSD). The level of significance was set at p < 0.05. Statistical analyses were carried out using the Prism software (GraphPad, La Jolla, CA).
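For readers who want to reproduce this style of analysis outside Prism, the following is a minimal Python sketch using statsmodels. The synthetic data, column names, and group sizes are illustrative assumptions rather than the study's data, and the sketch omits sphericity corrections and the mixed between/within designs used for some measures.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative long-format data: one row per mouse per trial (names made up).
rng = np.random.default_rng(0)
genotypes = ['WT', 'FDD_KI', 'FDD_KI_Aph1BC_KO', 'Aph1BC_KO']
rows = []
for g in genotypes:
    for mouse in range(12):
        for trial, mean_err in zip(['T1', 'T2', 'T3', 'R'], [4.0, 3.0, 2.5, 2.0]):
            rows.append({'subject': f'{g}_{mouse}', 'genotype': g,
                         'trial': trial,
                         'errors': max(0.0, rng.normal(mean_err, 1.0))})
df = pd.DataFrame(rows)

# Repeated-measures one-way ANOVA on trial, run separately within each genotype.
for g, sub in df.groupby('genotype'):
    fit = AnovaRM(sub, depvar='errors', subject='subject', within=['trial']).fit()
    print(g)
    print(fit.anova_table)

# Tukey's HSD comparing the four genotypes on the retention trial.
ret = df[df['trial'] == 'R']
print(pairwise_tukeyhsd(ret['errors'], ret['genotype'], alpha=0.05))
```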
ACKNOWLEDGMENTS
We thank Bart De Strooper and Lutgarde Serneels for providing the Aph1B/C -/- mice.
CONFLICTS OF INTEREST
The AECOM has a patent on the commercial use of FDD KI mice. Luciano D'Adamio is a co-inventor on this patent. AECOM has licensed the patent to Remegenix, a company of which Luciano D'Adamio is a co-founder and a Board member. As a co-founder Luciano D'Adamio owns 35% of Remegenix. The patent and the licensing only covers commercial use of the mice and does not pose any obstacle to distribution of the mice to academic laboratories. There are no further patents, products in development or marketed products to declare. This does not alter the authors' adherence to all the Oncotarget's policies on sharing data and materials, as detailed online in the guide for authors. | 2018-04-03T02:35:12.270Z | 2016-03-01T00:00:00.000 | {
"year": 2016,
"sha1": "77cef89f0c778ea5c30537d3460c6cfa5a9185a3",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=22896&path[]=7389",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77cef89f0c778ea5c30537d3460c6cfa5a9185a3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239468650 | pes2o/s2orc | v3-fos-license | Do Parental Perceptions of the Nutritional Quality of School Meals Reflect the Food Environment in Public Schools?
(1) Background: It is unknown whether parents' perception of school meals, a determinant of student meal participation, aligns with the nutritional quality of meals served in schools. This study compares the healthfulness of foods offered in schools with parental perception of school meals at those same schools. (2) Method: Parents were asked to rate the healthfulness of school meals at their child's school. Data on the types of foods offered were collected from public schools in four cities in New Jersey and matched with parent-reported data. Measures were developed to capture the presence of healthy and unhealthy items in the National School Lunch Program and the presence of a la carte offerings as well as vending machines. Multivariable analysis examined the association between parental perceptions of school meals and the school food measures after adjusting for covariates. (3) Results: Measures of the school food environment and parental perceptions were available for 890 pre-K to 12th grade students. No significant associations were observed between parental perceptions and food environment measures when examined one by one or in a comprehensive model. (4) Conclusions: Parents' perceptions of the healthfulness of meals served do not align with the nutritional quality of foods offered at schools.
Introduction
The Healthy Hunger Free Kids Act (HHFKA), the first major update to the National School Lunch Program (NSLP) since the 1990s, was approved by Congress in 2010 and implemented in a stepwise fashion in schools starting in the 2012-2013 school year (SY) [1]. The HHFKA focused on improving the nutritional quality of the foods provided to children through the NSLP by setting age-specific calorie, sodium, and fat maximums as well as requiring more whole grains and a greater variety of fresh fruits and vegetables to be served each week [1]. Similar regulations were applied to competitive foods (i.e., foods sold in schools outside of the NSLP, such as a la carte and in vending machines) starting in SY 2014-2015, with the Smart Snacks Standards [2]. Since its implementation, the positive impact of the HHFKA on the nutritional quality of the foods served and sold in schools has been well documented [3][4][5][6]. Studies have consistently shown that children who eat school lunches are selecting and consuming more nutrient-dense meals [4], including more fruits, vegetables and whole grains [6][7][8], while school meal participation rates have remained steady [9], indicating students' acceptance of these meals. Children who eat school meals also tend to have healthier overall diets compared to their non-participating counterparts [10][11][12], and meals served in schools are healthier than those brought from home [13][14][15]. Further, a recent study examining trends in foods consumed by children ages 5-19 found that the nutrition quality of foods consumed at schools significantly improved after 2010 and that these foods had higher nutrition quality compared to foods consumed from other locations such as grocery stores and restaurants [6].
Despite marked improvements in the nutritional quality of school meals, school meal participation did not increase; both prior to and after implementation of the HHFKA, participation rates were, and remained around 70% [4,9]. Similarly, parental perceptions of the healthfulness of school meals-a predictor of students' school meal participation-did not change after implementation of the HHFKA [16], with parental assessment of school meals remaining low [17,18]. Even though parents' impressions of the nutritional quality of school meals plays a critical role in whether or not a child participates in the program [16,19], no studies to date have examined how parental assessment of the nutritional quality of school meals compares with the healthfulness of the food actually served at the school their child attends. To address this gap, the goal of the current study was to examine the association between parent perceptions of the healthfulness of school meals and the food offerings at their child's school. This is critical for designing interventions aimed at increasing participation in school meal programs, which are a major source of healthy nutrition for school-aged children. We hypothesize that there is an overall misalignment of parental perceptions of school meal healthfulness with measures of the nutritional quality of school meals in the school their child attends. However, based on indications that parents of younger students are more engaged with their children's school activities, we expect a more accurate alignment between perceptions and the school food environment among parents of elementary school children.
Materials and Methods
This is a secondary analysis using data from the ongoing New Jersey Child Health Study (NJCHS). The NJCHS investigates the impact of the food and physical activity environment on children's weight and health outcomes over time. The study collected data from a sample of households with children located in four predominately low-income cities in New Jersey: Camden, New Brunswick, Newark, and Trenton. Data were also collected on the school food environment in all public schools in the four study cities through a school survey. Both household and school survey respondents were compensated for their time. The study was approved by the Institutional Review Boards of Rutgers and Arizona State Universities.
Household Survey
Computer-assisted phone interviews were used to collect data from two panels of households with children at two times points between 2009 and 2017. Data collected at time 1 on each panel (2009-2010 and 2014-2015) were included in the current analysis. In panel one, households were selected using a random digit dialing of landline telephone numbers associated with the study cities. In panel two, cell phones were added to the sampling design to reflect the increased use of cell phones over landlines. Households were eligible if they lived within the study city limits, had a child in the home between the ages of 3 and 18, and spoke English or Spanish. The respondent was an adult, at least 18 years old, who provided responses for themselves and one randomly selected child (referred to as the index child) in the household. Additional details about the household survey design are available elsewhere [20].
Household Survey Content
Parents reported socio-economic and demographic characteristics of the household, including household income and mother's education, as well as age, sex, and ethnicity/race of the index child. Respondents also provided the name of the school the child attended at the time of the survey. Children were grouped into 3 race/ethnicity categories-Hispanic, non-Hispanic Black, and non-Hispanic White/other. Mother's education was categorized as less than high school, high school, and at least some college. To measure participation in school meals, parents were asked, "On most days, does (index child) have a lunch served by the school?" with yes and no as response options [21]. To assess parents' perception of the school meals, parents were asked: "Regardless of whether or not (index child) eats foods provided by his/her school, how would you rate the nutritional quality of foods offered at (index child's) school?" Answer options were on a 4-point Likert scale: "Very Unhealthy", "Unhealthy", "Healthy" and "Very Healthy" with the option to refuse or select "I don't know" or "School does not provide food."
School-Level Data
A survey using questions from prior research was used to gather information on specific aspects of the food environments in schools [22][23][24]. The survey included questions about foods offered as part of reimbursable school lunches, a la carte during lunch time, and in vending machines. Surveys developed using Qualtrics ® (Provo, UT, USA) were distributed in paper and online formats to school nurses in all public schools in our study cities that included any grade from K to 12. School nurses were asked to draw upon their own knowledge as well as consult with school food staff to complete the survey. Data about school environments were collected for each SY between 2010-2011 and 2017-2018. The current analysis used 2010-2011 and 2014-2015 school environment data for the first and second panels of the household survey, respectively. Overall, the response rate to the school environment survey averaged 92.5% across all schools in the four districts. We were able to match data for 96 schools for SY 2010-2011 and for 88 schools for SY 2014-2015 to the household data.
Data from the school food environment survey were summarized in a series of indices. For this analysis we used four indices: NSLP healthy; NSLP unhealthy; presence of vending machines; presence of a la carte items served in the cafeteria during lunch. The NSLP healthy index (range 0-9) indicates the total number of healthy items (e.g., whole grains, salad bar, fresh fruits, etc.) offered as part of the NSLP. The NSLP unhealthy food index consisted of similar counts of available food items that were designated as unhealthy (e.g., fries, dessert, pizza, etc.) and could range from 0 to 5. A complete list of the items included in these two indices is provided in Table 1. Taken together, they represent the overall exposure to healthy and unhealthy items offered in school meals. To capture competitive foods, two binary variables were created to indicate (1) whether there were vending machines available to children in school, and (2) whether a la carte items were served during school meals. While we collected data on the number of healthy and unhealthy items offered in these two venues, use of the binary variables was preferred for two reasons: first, there was a relatively large number of schools that did not have vending machines (54%) or a la carte (29%), thus scoring 0 on both healthy and unhealthy items. Second, schools that had vending machines and/or a la carte offerings tended to have a similar proportion of healthy and unhealthy items, resulting in high correlations between the numbers of healthy and unhealthy items offered through each of these venues. More detail about the school environment survey and index development can be found elsewhere [25].
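To make the index construction concrete, the sketch below shows one way the four measures could be assembled from item-level survey indicators. The item names beyond the examples given above (whole grains, salad bar, fresh fruits; fries, dessert, pizza) and the column names are placeholders, not the study's actual survey items.

```python
import pandas as pd

# Placeholder item lists; 1 = item offered, 0 = not offered in the survey data.
HEALTHY_ITEMS = ['whole_grains', 'salad_bar', 'fresh_fruit', 'fresh_vegetables',
                 'cooked_vegetables', 'low_fat_milk', 'legumes', 'lean_protein',
                 'baked_entree']                     # 9 items -> healthy index 0-9
UNHEALTHY_ITEMS = ['fries', 'dessert', 'pizza', 'fried_entree',
                   'sugary_drink']                   # 5 items -> unhealthy index 0-5

def build_food_environment_measures(schools: pd.DataFrame) -> pd.DataFrame:
    """Add the two NSLP indices and two binary competitive-food flags."""
    out = schools.copy()
    out['nslp_healthy'] = schools[HEALTHY_ITEMS].sum(axis=1)
    out['nslp_unhealthy'] = schools[UNHEALTHY_ITEMS].sum(axis=1)
    # Competitive foods captured as presence/absence indicators.
    out['has_vending'] = (schools['n_vending_machines'] > 0).astype(int)
    out['has_a_la_carte'] = (schools['n_a_la_carte_items'] > 0).astype(int)
    return out
```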
In addition to school data collected via the school environment survey, school characteristics such as total student enrollment, racial/ethnic composition, and proportion of children eligible for free or reduced-price meals (FRPM) were taken from the National Center for Education Statistics [26] and were included in the analysis as control variables.
Analytical Sample
The analytical sample consisted of 1201 students (from both time points) who attended public schools in the study cities. A total of 311 responses were excluded due to missing data on the school food environment (n = 184), parent perceptions (n = 108), or other variables (n = 19). The final analytical sample included 890 respondents with complete data on variables included in the analysis. The cases that were excluded from the final analytical sample did not differ from those in the analytical sample on any individual or household characteristics, except for race. The cases that were excluded consisted of a higher proportion of non-Hispanic Black children and a lower proportion of Hispanic and non-Hispanic White/other children.
Data Analysis
Ordered logistic regression was used to examine the association between parental perception of the healthfulness of school meals and each of the school food environment indices individually. Next, all four indices were entered in the model together to assess a comprehensive account of the school food environment. Each model controlled for age, sex, and race of the child, household income as a ratio to the federal poverty line, and mother's education. Models also adjusted for school-level factors (i.e., school size, racial composition, proportion of students eligible for FRPM, and whether the school was an elementary school or a middle/high school). An indicator for panel was also used as a control variable to account for any potential unaccounted differences across the two panels, given that panel one was collected before HHFKA and panel two after HHFKA implementation. Lastly, all models were adjusted for clustering at the city level and included sampling weights to account for the complex survey design and to ensure the sample was representative of the cities from which it was taken. To test if the relationship between the school food environment (captured by the four indices) and parents' perception of the healthfulness of school meals was moderated by school level (elementary vs. middle/high school), interaction terms representing school level and each of the four indices (one at a time) were added to the comprehensive model. In additional models, similar analyses were conducted by introducing interaction terms between panel and each of the four indices to assess whether the relationship between parental perception of the healthfulness of the school food environment and measures of that environment changed between the two panels. Sensitivity analyses were run after recoding the perception variable into two and three categories. Additional sensitivity analyses were conducted to examine differences in associations by race/ethnicity and student school meal participation status in regression models. All analyses were run using Stata 15.1 (StataCorp LLC, College Station, TX, USA, 2017).
Results
Table 2 shows the demographic information for the sample including child, household, and school level variables. Most parents (72%) perceived school meals to be either "somewhat healthy" or "very healthy" while 28% of parents perceived school meals to be either "somewhat unhealthy" or "very unhealthy." The sample was composed primarily of non-Hispanic Black and Hispanic children (50% and 44%, respectively). The average age of the children was 10.8 years, and the majority of them (66%) attended an elementary school. Nearly all children (91%) participated in school meals.
Relationship between School Food Environment and Perceptions
As shown in Table 3, in multivariable models, none of the school food environment indices were significantly associated with parental perceptions of school meals when examined individually (models 1-4). Only for NSLP healthy did the association approach significance; for every additional healthy item served in the NSLP, parents were 14% (p = 0.054) more likely to have a more positive perception of the school meals (OR 1.14; CI: 1.00-1.29). The comprehensive model that examined all 4 indices of the school food environment together (model 5) showed results that were consistent with models 1-4. Across all models, parents of non-Hispanic Black children and children who participated in school meals were significantly more likely to give a more positive assessment of the healthfulness of the school food environment.
Subgroup Analyses
The association between parents' perception and the school food environment indices did not significantly vary by school level (Table 4). However, presence of vending machines was marginally associated with more negative perceptions of the food environment for parents of elementary school children. The results were similar to those presented in Table 3 when interaction terms between panel and indices were introduced in the models. The association between parent perception and school food environment indices was not significant either before or after implementation of the HHFKA. (Note to Table 4: the interactions between school level and the four indices of the school food environment were included in the same model; none of the associations were significant at p < 0.05; ^ p < 0.10.)
Sensitivity analyses using the perception variable based on two categories (combining "very unhealthy" with "unhealthy" and "healthy" with "very healthy") or three categories (combining "very unhealthy" with "unhealthy" and leaving "healthy" and "very healthy" separate), using a logit and an ordinal logit model, respectively, yielded similar results. Additional sensitivity analyses examining differences by race and student participation in school meals also resulted in similar findings; with the lack of association between parental perceptions and food indices observed across all racial/ethnic groups and independent of children school meal participation status.
Discussion
This study examined the relationship between parents' perception of school meals and the healthfulness of the school food environment in public schools in four cities with low-income and high minority populations in New Jersey. Consistent with our hypothesis, parental perceptions of school meals were not associated with actual measurements of the school food environment. None of the four measures of the food environment were associated with parents' perception when examined alone or together in a comprehensive model. Results were similar across school levels and time periods-before and after implementation of the HHFKA. Parents of non-Hispanic Black children and of those who participated in school meals tended to have an overall more positive perception of school meals; nevertheless, interaction analyses showed that their perceptions of the healthfulness of the school meals were not aligned with the nutritional quality of the meals served. The current findings underscore the disconnect between school meals and parents' perception of those meals, highlighting the importance of efforts to better acquaint parents with the nutritional quality of school meals. Our findings might help explain why prior research has shown that parents' perception of school meals did not improve after the implementation of the HHFKA [16], despite improvements in the overall nutritional quality of school meals [3][4][5][6].
The presence of competitive foods, specifically those served a la carte and in vending machines, was also not associated with parental perceptions of school meals. Competitive foods typically include snack foods such as chips, cookies and ice cream-all items for which there were no nutritional standards until implementation of the Smart Snacks standards in 2014 [2]. After 2014, schools were required to offer competitive foods that met nutrition standards similar to those of the NSLP, including whole grain requirements and setting limits on calories, sodium, and fat [2]. How schools responded to these requirements may have contributed to parents' misperception of the healthfulness of competitive foods. For example, school specific versions of popular snacks were created by food manufacturers, often referred to as "look alike" products [27]. As a result, chips sold in schools might have less fat and salt than the non-school version of the same item sold at a local grocery store. While the use of "look alike" products does result in an improvement of the nutritional quality of competitive foods in both venues (vending machines and a la carte), that improvement may not be immediately obvious to students or parents [27]. The use of "look alike" products is not limited to competitive foods. For instance, some popular items served as part of the NSLP, including chicken nuggets and pizza, fall within this category. While these items do meet healthier nutrition guidelines post HHFKA, it may be difficult for parents to recognize their nutritional benefits. It is also possible that the lack of association between the presence of competitive foods (especially vending) and parents' perception is impacted by the relatively low prevalence of vending machines in our sample (54% of schools did not have vending machines).
Prior research shows that participation in school meals tends to be higher for younger children compared to older children [5,28], and parents of younger children are likely to be more engaged with school activities [29] and more familiar with the school environment compared to parents of older children. It was therefore hypothesized that there would be better alignment between perceptions and actual school food environment among parents of elementary school children as compared to parents of middle/high school students. Contrary to expectations, the results from this study did not suggest any moderation effect of school level on the association between perceptions and the school food environment.
These findings highlight the misalignment between the healthfulness of the school food environment and parents' views of food offered in school. Prior research has shown that parents' perception of school meals did not improve after the implementation of the HHFKA [16] despite improvements in the healthfulness of school meals [25]. This is a troubling issue, as participation in school meals is impacted by parental perceptions of those meals [16,19], and this misalignment could, at least partially, account for the fact that on an average school day, only 56% of students participated in the NSLP [5]. Based on the most recently available data, not all students who are eligible for FRPM participate in meals on a regular basis [28]. For instance, in the 2009-2010 school year, 79.1% of students eligible for free meals and 73.2% of students eligible for reduced-price meals participated in the NSLP [28]. The goal of the NSLP to reduce food and nutrition insecurity by providing healthy meals to low-income children can only be achieved if children participate in school meals.
In addition to missed opportunities for improving children's nutrition, sub-optimal participation in school meal programs may impede schools' abilities to meet the fixed costs of maintaining a viable meals service (e.g., costs of equipment and personnel). The main source of funding for school food programs is through federal reimbursement for meals served. These funds are used not only to purchase food but also for equipment and wages for employees who prepare and serve meals. The latter expenses often do not vary with the volume of meals served; yet, low participation numbers affect a program's ability to meet these costs and successfully provide meals to students. In addition to the impact on participation, lack of understanding about the nutritional quality of school meals by parents and other stakeholders, including teachers and administrators, may result in less support from these groups for school food programs. The current findings suggest the need for effective efforts to acquaint parents with the quality of food served at their children's schools through frequent and effective communication using multiple channels. These might include family meal days when parents join students for lunch, inclusion of more nutrition information in menus distributed to parents, newsletters featuring school foods, use of social media to discuss school meals, and other targeted marketing strategies.
Limitations
Among the study's limitations, a bias toward endorsing socially desirable responses may have impacted parents' reported perception of the nutritional quality of school meals, given the high participation rate in school meals in our sample. To account for this potential bias, the analysis adjusted for school meal participation by the index child. Our analysis is based on a cross-sectional design; therefore, the associations observed cannot be considered causal. We do not have information on parents' nutrition knowledge, which may influence their understanding and perception of nutritional quality of the school meals. In addition, our findings may not be generalizable beyond communities similar to our study cities, i.e., low-income, racially/ethnically diverse populations. Further, as this is a secondary data analysis, we were not able to collect qualitative data, which may have provided more details about the origins of parents' thoughts and beliefs about school meals. Lastly, we were not able to gather detailed information from schools about their use of health education classes or other health related information aimed at parents.
Conclusions
Parents' perceptions of school meals are not aligned with measures of the healthfulness of school food offerings. This disconnect can impact participation rates in school meals and undermine the potential to reduce food insecurity and improve children's diet quality by providing nutritious meals to students. School-, state-, and federal-level stakeholders could improve communication with parents about the nutritional content of school meals to address this misperception. School food departments could also consider including students and parents in recipe and product testing so their opinions and views are better incorporated into menu decisions. Focused marketing strategies highlighting the healthfulness of the resulting menus could be incorporated as well. | 2021-10-16T15:09:57.348Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "6a59ad4874b516401037f6748d57fb54641517bb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/20/10764/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "98b6cc37012aa2b0bfde158d9d74dbede95da9f2",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252839283 | pes2o/s2orc | v3-fos-license | Constrained Planar Array Thinning Based on Discrete Particle Swarm Optimization with Hybrid Search Strategies
This article presents a novel optimization algorithm for large array thinning. The algorithm is based on Discrete Particle Swarm Optimization (DPSO) integrated with some different search strategies. It utilizes a global learning strategy to improve the diversity of populations at the early stage of optimization. A dispersive solution set and the gravitational search algorithm are used during particle velocity updating. Then, a local search strategy is enabled in the later stage of optimization. The particle position is adaptively adjusted by the mutation probability, and its motion state is monitored by two observation parameters. The peak side-lobe level (PSLL) performance, effectiveness and robustness of the improved PSO algorithm are verified by several representative examples.
Introduction
Thinned arrays have been the focus of research in recent years for their lower cost, lower energy consumption and lighter weight compared with the conventional uniform arrays. The main purpose of array thinning is to obtain a lower peak side-lobe level (PSLL) on the condition that the antenna array satisfies gain demand. Planar array thinning can be achieved by adjusting the "ON" or "OFF" states of each element in a uniform array.
To suppress the PSLL, several optimization methods have been proposed. As suggested by Liu in [1], the thinning of a reconfigurable linear array can be reduced by matrix beam method with high efficiency. RF switching technology in the T/R module makes the application of adaptive sparse technology possible [2]. Rahardjo et al. [3] developed a new method for designing linear thinned arrays using spacing coefficients based on Taylor line source distribution, which is verified to be able to correctly match different beam pattern configurations without repeated global optimization. Recently, analytical thinning methods by convex optimization [4] and Bayesian compressive sensing [5] have been introduced, but these methods need to set a suitable reference pattern as a precondition. Keizer [6] proposed the iterative Fourier technique (IFT) for array thinning. He calculates the array factor and excitation of the uniform array by making use of the Fourier transforms. Moreover, the IFT method is also used to optimize large planar arrays with higher convergence speed [7].
Benefitting from excellent global search performance, some intelligent optimization algorithms, such as the real genetic algorithm (RGA) [8] and asymmetric mapping method with differential evolution (DE) [9], also play a good role in array thinning. For this problem, analyses are also carried out with the newly introduced Honey Badger Algorithm (HBA) and Chameleon Swarm Algorithm (CSA) algorithms in [10]. Vankayalapati introduced a new algorithm called multi-objective modified binary cat swarm optimization to deal with the optimization of multiple contradicting parameters [11].
First proposed by Eberhart and Kennedy in 1995 [12], particle swarm optimization (PSO) is an optimization algorithm for solving continuous and discontinuous problems in multiple dimensions. With a variety of analytical and numerical tools available, PSO has been applied to different application domains such as antenna pattern synthesis, array thinning, and sensor networks. PSO and its improved variants are frequently employed to satisfy multiple objectives simultaneously. Random drift particle swarm optimization (RDPSO), which is used to solve the electric power economic scheduling problem, was applied to the optimal design of thinned arrays in [13], achieving good results. To compare the performance of single-ring planar antenna arrays, Bera et al. [14] proposed a wavelet mutation-based novel particle swarm optimization (NPSOWM). By applying the binary particle swarm optimization (BPSO) algorithm to the total focusing method (TFM) [15] for thinned array design, the simulation results indicate that the proposed TFM can greatly increase computational efficiency and provide significantly higher image quality. The PSO study in [16] aims to generate multiple-focus patterns and a large scanning range for random array thinning, which is applied to the ultrasound treatment of brain tumors and neuromodulation. A search mode with multi-objective particle swarm optimization was proposed in [17] to solve the problem of optimal array distribution, which met the requirements of reducing the number of antenna elements and maintaining the PSLL simultaneously. In 2021, Guo et al. [18] studied the SLL reduction optimization of linear antenna arrays (LAA) with specific radiation characteristics and circular antenna arrays (CAA) with isotropic radiation characteristics. In the optimization process, an optimization method (GWO-PSO) combining gray wolf optimization and PSO was used.
In general, in order to avoid the rapid loss of particle distribution in solution space, the existing PSO methods give a variety of evaluation strategies for the diversity of particle population distribution. However, these methods mainly focus on improving the algorithm efficiency and ignore the balance between global search and local search, resulting in the lack of ability to adjust the search focus dynamically in different search stages.
In this paper, a novel optimization algorithm for large array thinning is proposed. The innovative part of this algorithm is the combined usage of different particle learning strategies with discrete particle swarm optimization, which enhances the global search ability and the effectiveness of the algorithm.
The rest of this paper is organized as follows: The planar array structure and optimization problem model are briefly outlined in Section 2. Section 3 introduces the DPSO algorithm and our improvement to it. Several simulation results and discussions are presented in Section 4. Finally, a brief conclusion is given in Section 5.
Optimization Model
Assume a large planar array with elements arranged in square grids with a spacing of d along M columns and N rows, as shown in Figure 1.

Figure 1. A planar array with some elements "ON" and the others "OFF".

Matrices A and B are set as

A = [a_mn]_{M x N}, B = [b_mn]_{M x N},

where a_mn and b_mn are the excitation of element (m, n) and the "ON" or "OFF" state of element (m, n), respectively, so b_mn is 1 or 0. In Figure 1, θ and ϕ are the elevation and the azimuth angles in the spherical coordinate, respectively. u and v are direction cosines defined by u = sin θ cos ϕ and v = sin θ sin ϕ. If (u_0, v_0) is the desired main beam direction, the radiation beam pattern F(u, v) can be expressed as

F(u, v) = Σ_{m=1}^{M} Σ_{n=1}^{N} a_mn b_mn exp{ j (2π d / λ) [ (m − 1)(u − u_0) + (n − 1)(v − v_0) ] }.

The fitness function considered in the present study is the PSLL of the radiation beam pattern, which is desired to be as low as possible. The PSLL of the planar array can be computed as

PSLL = 20 log10 ( max_{(u,v) ∈ S} |F(u, v)| / max_{(u,v)} |F(u, v)| ),

where S denotes the angular region excluding the main beam. Considering the constraint of the array filling rate, the sum of all elements in matrix B should be a definite constant. If the aperture is to remain the same, the four corner elements of the planar array must be "ON". The model of optimization can be represented as

min_B PSLL(B) subject to Σ_{m=1}^{M} Σ_{n=1}^{N} b_mn = K, b_11 = b_1N = b_M1 = b_MN = 1,

where K denotes the number of elements that are turned "ON".
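As a concrete illustration of the fitness evaluation described above, the sketch below computes the PSLL of a binary element layout on a (u, v) grid. It assumes uniform excitation (a_mn = 1), the conventional planar-array factor for a square grid with spacing d, and a crude fixed-width main-beam exclusion mask; the function name, grid resolution, and mask width are illustrative choices, not taken from the article.

```python
import numpy as np

def psll_db(b, d_over_lambda=0.5, u0=0.0, v0=0.0, n_grid=201, mainlobe_halfwidth=0.05):
    """Peak side-lobe level (dB) of an M x N planar array with binary states b (0/1).

    Uniform excitation a_mn = 1 is assumed; the main-beam region S is approximated
    by a small square mask around (u0, v0) instead of locating the first nulls.
    """
    M, N = b.shape
    u = np.linspace(-1.0, 1.0, n_grid)
    v = np.linspace(-1.0, 1.0, n_grid)
    U, V = np.meshgrid(u, v, indexing="ij")
    # Separable element phase factors for a square grid with spacing d.
    Em = np.exp(2j * np.pi * d_over_lambda * np.arange(M)[:, None, None] * (U - u0))
    En = np.exp(2j * np.pi * d_over_lambda * np.arange(N)[:, None, None] * (V - v0))
    F = np.abs(np.einsum("mn,mij,nij->ij", b, Em, En))
    F_db = 20.0 * np.log10(F / F.max() + 1e-12)
    side = (np.abs(U - u0) > mainlobe_halfwidth) | (np.abs(V - v0) > mainlobe_halfwidth)
    return F_db[side].max()

# Example: a random 20 x 20 layout with roughly 200 elements "ON" (corners forced on).
rng = np.random.default_rng(0)
b = np.zeros((20, 20), dtype=int)
b.flat[rng.choice(400, size=200, replace=False)] = 1
b[0, 0] = b[0, -1] = b[-1, 0] = b[-1, -1] = 1
print(psll_db(b))
```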
Improved Algorithm
Fundamental PSO Algorithm
The fundamental PSO algorithm assumes a swarm of particles in the solution space, and the positions of these particles indicate possible solutions to the variables defined for a specific optimization problem. The particles move in directions based on update equations impacted by their own local best positions and the global best position of the entire swarm.
Assume a swarm composed of NP particles is uniformly dispersed in a D-dimensional solution space, where the position x_i and velocity v_i of the i-th particle can be expressed as

x_i = (x_i1, x_i2, ..., x_iD), v_i = (v_i1, v_i2, ..., v_iD).

The velocity update equation is given below:

v_ij(t + 1) = w v_ij(t) + c_1 r_1 [pBest_ij(t) − x_ij(t)] + c_2 r_2 [gBest_j(t) − x_ij(t)],

where i = 1, 2, ..., NP, j = 1, 2, ..., D. Obviously, NP represents the number of particles, and D represents the number of dimensions. w is the inertia weight, c_1 and c_2 are the acceleration constants, and r_1 and r_2 are two random numbers within the range [0,1]. v_ij(t) and x_ij(t) are the velocity and position along the j-th dimension for the i-th particle at the t-th iteration, respectively. pBest_ij(t) is the best position along the j-th dimension for the i-th particle at the t-th iteration, also called "personal best". Finally, the "global best" gBest_j(t) is the best position found by the swarm along the j-th dimension at the t-th iteration. The value of a position along each dimension for each particle is limited to "0" and "1" in discrete algorithms. The position update equation along the j-th dimension for the i-th particle is given by

x_ij(t + 1) = 1 if r < s(v_ij(t + 1)), and x_ij(t + 1) = 0 otherwise,

where s(v_ij) denotes a function that maps the particle velocity to the probability of the particle position, and r represents a random number within the range [0,1].
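A minimal sketch of this discrete update is given below. The sigmoid is used here as the velocity-to-probability map s(·), which is a common choice in binary PSO but is an assumption on our part; the article only states that such a mapping function exists.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, v_max=4.0):
    """One discrete PSO update for a swarm of NP particles in D binary dimensions.

    x, v, pbest: (NP, D) arrays; gbest: (D,) array. The sigmoid is one common
    choice for the velocity-to-probability map s(v); w, c1, c2 are illustrative.
    """
    NP, D = x.shape
    r1, r2 = rng.random((NP, D)), rng.random((NP, D))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)                         # velocity boundary treatment
    x = (rng.random((NP, D)) < sigmoid(v)).astype(int)    # x_ij = 1 if r < s(v_ij)
    return x, v
```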
DPSO with Hybrid Search Strategies
The improved PSO algorithm proposed in this paper consists of three main strategies: a global learning strategy based on the niche technique, a dispersed solution set, and the gravitational search algorithm; a local mutation search strategy; and a motion state monitoring strategy. The execution of the corresponding strategies is adaptively adjusted in the different search stages of the optimization.
Global Learning Strategy
It is necessary to maintain as high a swarm diversity as possible in the early stage of optimization to avoid premature convergence. So, two strategies are utilized in the early stage, which are the niche technique and the gravitational search algorithm.
The niche technique proposed in [19] can form multiple stable subpopulations within a population. Each particle interacts only with its immediate neighbors through a ring topology. The ring topology structure can thus promote individuals to search thoroughly in their local neighborhoods before global optimization, which makes it well suited for discovering multiple optima. On the basis of the ring topology proposed in [19], a high-quality solution set P_G is proposed in this paper, as shown in Figure 2. The solution set P_G consists of two kinds of elements: one is the optimal solutions of all particles, and the other is the eliminated optimal solutions of some particles with excellent fitness function values. While preserving the possibility of interaction between each particle and its neighbors, particles can also learn from P_G directly. This structure substitutes for the role of the global best position gBest in (8).
Because the positions of elements in P_G may be uniformly distributed in the whole solution space, particles under the guidance of elements in P_G have a considerable number of potential motion directions, which improves the diversity of the population and avoids premature convergence.
At the t th iteration, the gravitational attraction of particle q on particle mension can be defined as: where Mq(t) represents the inertial mass of the particle applying force, Mi(t) inertial mass of the particle subjected to the force, Riq = ||xi(t) − xq(t)||, R Euclidean distance between particle i and particle q, and ε is a small constan The gravitational search algorithm (GSA) mentioned in [20] is a novel heuristic search algorithm, where a particle is impacted by the combined gravity of all the other particles in the solution space. We use a GSA-based acceleration to replace the personal best position pBest i in (8) to enhance the correlation between particles.
At the t-th iteration, the gravitational attraction of particle q on particle i in the k-th dimension can be defined as

F^k_iq(t) = G(t) · [M_q(t) M_i(t) / (R_iq + ε)] · [x^k_q(t) − x^k_i(t)],

where M_q(t) represents the inertial mass of the particle applying the force, M_i(t) represents the inertial mass of the particle subjected to the force, R_iq = ||x_i(t) − x_q(t)|| denotes the Euclidean distance between particle i and particle q, and ε is a small constant to make sure the denominator is not zero. G(t) is the gravitational constant whose value changes dynamically, which can be expressed as

G(t) = G_0 exp(−α t / T),

where α represents the attenuation rate, T is the maximum number of iterations, and G_0 is the initial value. At the t-th iteration, the resultant force F^k_i(t) caused by all the other particles acts on the i-th particle in the k-th dimension, so

F^k_i(t) = Σ_{q ≠ i} rand_q · F^k_iq(t),

where rand_q is a random number within the range [0,1]. According to Newton's second law, the acceleration produced by this resultant force can be expressed as

a^k_i(t) = F^k_i(t) / M_i(t).

The inertial mass of each particle is calculated from its fitness function value, and the updating equation of the inertial mass is given below:

m_i(t) = [PSLL_i(t) − PSLL(gWorst(t))] / [PSLL(gBest(t)) − PSLL(gWorst(t))], M_i(t) = m_i(t) / Σ_{j=1}^{NP} m_j(t),

where PSLL_i(t) represents the fitness function value of particle i at the t-th iteration, and gWorst(t) is the worst position found by the swarm at the t-th iteration.
Therefore, after applying the global learning strategy, the updating equation of the particle velocity can be rewritten as Eq. (16), in which d = 1, 2, ..., D, D represents the number of dimensions, C denotes a set of q dimensions randomly selected from all D dimensions, pGood_id(t) is the best position of a candidate solution randomly selected from P_G and the neighborhood particles along the d-th dimension for the i-th particle at the t-th iteration, and a_id(t) is the acceleration along the d-th dimension for the i-th particle at the t-th iteration.
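The sketch below illustrates how the GSA-based accelerations described above can be evaluated for a binary swarm. The mass normalization and the exponential decay of G(t) follow the standard gravitational search algorithm; the constants G0 and alpha, and the use of the swarm's minimum and maximum fitness for the best and worst values, are illustrative assumptions rather than values taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def gsa_acceleration(x, fitness, t, T, G0=100.0, alpha=20.0, eps=1e-12):
    """GSA-style accelerations for NP particles with positions x (NP, D).

    fitness holds the PSLL values to be minimized; lower is better. G0 and alpha
    are illustrative constants for the decaying gravitational "constant" G(t).
    """
    NP, D = x.shape
    best, worst = fitness.min(), fitness.max()
    m = (fitness - worst) / (best - worst + eps)   # best particle -> 1, worst -> 0
    M = m / (m.sum() + eps)                        # normalized inertial masses
    G = G0 * np.exp(-alpha * t / T)
    acc = np.zeros_like(x, dtype=float)
    for i in range(NP):
        diff = x - x[i]                            # x_q - x_i for every q
        R = np.linalg.norm(diff, axis=1) + eps     # Euclidean distances R_iq
        w = rng.random(NP)
        w[i] = 0.0                                 # no self-attraction
        # a_i = sum_q rand_q * G * M_q * (x_q - x_i) / (R_iq + eps)
        acc[i] = (w * G * M * M[i] / R) @ diff / (M[i] + eps)
    return acc
```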
Local Search Strategy
An algorithm should have strong local search ability to improve the convergence efficiency in the later stage of optimization, especially when it is applied to the optimal design of large-scale arrays. The flowchart of the local search strategy is shown in Figure 3. A particle i can be regarded as moving from a certain position x_i to another position x'_i in the solution space when x_i is mutated along some dimensions. Therefore, we propose a local search method in which the neighborhood of the optimal position of a particle i is searched by changing only part of the elements of its position x_i under the guidance of a Gaussian random variable. The position moving rule along the j-th dimension for the i-th particle compares X_ij(0, σ²), a Gaussian random variable with a mean value of 0 and standard deviation σ, against r_i, a random number within the range [0,1].
The mutated position x'_i should replace x_i if it has a better fitness function value, and the standard deviation σ should then be amplified to increase the moving distance of particles; otherwise, x_i is kept unchanged and σ is reduced. The updating equation of the standard deviation σ can be expressed as

σ_{k+1} = ρ σ_k if the mutation improves the fitness, and σ_{k+1} = µ σ_k otherwise,

where k represents the number of variations, ρ is the expansion parameter (ρ > 1), and µ is the contraction parameter (0 < µ < 1). If the standard deviation σ becomes smaller than a preset threshold σ_e, there is no better solution for particle i in the adjacent positions, so the update is stopped.
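A minimal sketch of this adaptive local search is given below. It assumes a bit-flip rule in which a dimension is mutated when the magnitude of the Gaussian variable exceeds a uniform random number; that exact rule, and the values of rho, mu, and sigma_end, are illustrative assumptions rather than specifications from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_search(x, fitness_fn, sigma0=1.0, rho=1.2, mu=0.8, sigma_end=0.05, max_iter=50):
    """Neighborhood search around a binary position x guided by a Gaussian variable.

    sigma grows by rho after an improvement and shrinks by mu otherwise, and the
    loop stops once sigma drops below sigma_end. Lower fitness (PSLL) is better.
    """
    best, best_fit, sigma = x.copy(), fitness_fn(x), sigma0
    for _ in range(max_iter):
        if sigma < sigma_end:
            break
        flip = np.abs(rng.normal(0.0, sigma, size=x.shape)) > rng.random(x.shape)
        trial = np.where(flip, 1 - best, best)
        trial_fit = fitness_fn(trial)
        if trial_fit < best_fit:
            best, best_fit, sigma = trial, trial_fit, sigma * rho
        else:
            sigma *= mu
    return best, best_fit
```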
Particle Movement Condition Monitoring
In order to avoid some particles falling into local optimum prematurely and to improve the performance of the optimization algorithm, a condition monitoring strategy is proposed to adjust the position of the particle according to the moving state of the particle.
There are two preconditions for a particle i to trigger a mutation operation. The first is that the optimal solution of particle i has not been updated for u iterations, which can be expressed as

Tpb_i > u,

where Tpb_i represents the number of consecutive iterations in which the optimal solution has not been updated. The second precondition is that the moving distance of the position x_i is less than the set value ε for h successive iterations, which can be expressed as

Txb_i > h,

where Txb_i represents the number of iterations in which the moving distance of the position x_i is less than the set value ε. The moving distance is defined as the number of elements, over all dimensions, that differ before and after the particle position changes. It can be assumed that particle i has fallen into a local optimum when both preconditions are met, and an individual variation probability γ_a is then used to vary all dimensions of position x_i. The equation of variation can be represented as

x_id = 1 − x_id if r_d < γ_a, and x_id is kept unchanged otherwise,

where d = 1, 2, ..., D, D represents the number of dimensions, and r_d is a random number within the range [0,1].
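The stagnation check and the variation of the particle position can be sketched as follows; the threshold values u and h and the probability gamma_a are illustrative, and the bit-flip form of the variation is an assumption consistent with the description above.

```python
import numpy as np

rng = np.random.default_rng(3)

def maybe_escape(x, Tpb, Txb, u=10, h=10, gamma_a=0.1):
    """Flip each bit of x with probability gamma_a once both stagnation counters trip.

    Tpb counts iterations without a pBest improvement, and Txb counts iterations
    with only a small position change; u, h, and gamma_a are illustrative values.
    """
    if Tpb > u and Txb > h:
        flip = rng.random(x.shape) < gamma_a
        x = np.where(flip, 1 - x, x)
    return x
```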
Steps of Algorithm
The new Algorithm 1 exploits the hybrid search strategies (HSS) described above to improve the performance of a fundamental DPSO. Therefore, it is called DPSO-HSS. The detailed steps of the improved algorithm are summarized as follows.
Algorithm 1: DPSO with hybrid search strategies
1. Generate an initial particle swarm that satisfies the constraints. Initialize pBest_i, gWorst, and gBest, the observation parameters Tpb_i and Txb_i, and the solution set P_G.
2. Calculate the fitness function value of each particle and update pBest_i, gWorst, gBest, and Tpb_i. Replenish the set P_G with the good solutions that have been eliminated.
3. Update the velocity and position of each particle according to (16), (9), and (10). If the number of iterations t is larger than t_L, go to Step 4; otherwise, go to Step 5.
4. Initiate the local search strategy.
5. If Tpb_i > u and Txb_i > h both hold, update the position of the particle according to (21); otherwise, go to Step 6.
6. Constrain the particle position according to the constraint condition and update the parameter Txb_i (see the repair sketch below).
7. Apply boundary treatment to the particle velocity.
8. Output gBest. If the termination conditions are met, end the optimization; otherwise, set t = t + 1 and return to Step 2.
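Step 6 requires keeping each candidate layout feasible. The sketch below shows one simple repair rule that keeps the four corner elements "ON" and randomly toggles other elements until exactly K elements are "ON"; the article does not specify its constraint-handling procedure, so this is only an illustrative choice (and it assumes K >= 4).

```python
import numpy as np

rng = np.random.default_rng(4)

def repair(b, K):
    """Enforce the thinning constraints on a binary layout b of shape (M, N).

    Corner elements stay 'ON' (fixed aperture); other elements are toggled at
    random until the filling-rate constraint sum(b) == K is met. Assumes K >= 4.
    """
    b = b.copy()
    b[0, 0] = b[0, -1] = b[-1, 0] = b[-1, -1] = 1
    corners = {(0, 0), (0, b.shape[1] - 1), (b.shape[0] - 1, 0), (b.shape[0] - 1, b.shape[1] - 1)}
    while b.sum() != K:
        if b.sum() > K:
            on = [ij for ij in zip(*np.nonzero(b)) if tuple(ij) not in corners]
            i, j = on[rng.integers(len(on))]
            b[i, j] = 0
        else:
            off = list(zip(*np.nonzero(b == 0)))
            i, j = off[rng.integers(len(off))]
            b[i, j] = 1
    return b
```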
Numerical Results and Analysis
In this section, several examples are presented to compare the performance, effectiveness and robustness of the DPSO-HSS algorithm and some contrast algorithms.
The five typical functions used to compare the performance of each algorithm are shown in Table 1. The test is to minimize the function value within a specified range of dimensions and variables. In order to compare the effects of each algorithm under the same conditions, the test of each function is run several times and its statistical results are calculated for comparison. The three statistical characteristics used as evaluation criteria are:
1. the mean value of multiple simulation results;
2. the variance of multiple simulation results;
3. the minimum value of multiple simulations.
The population size NP, the iteration times T, the dimension D, the learning factors c_1 and c_2, and the inertia weight w of the three algorithms are the same. The simulation results are shown in Table 2. Among the five test functions, the proposed algorithm outperforms the other two algorithms in the mean and variance of Ackley, Rastrigin, and Sphere. In Rosenbrock's test, DPSO-HSS has a better variance. In Griewank's test, the results of DPSO-HSS and RDPSO are nearly identical. Considering the test results of the above five functions comprehensively, the proposed DPSO-HSS algorithm performs well in both mean and variance compared with the RDPSO and NPSOWM algorithms. The low mean indicates the excellent global search ability and convergence of the algorithm, whereas the low variance indicates the stability of the algorithm's results over multiple runs.
The time complexity of each test function is O (D), where D is the number of dimensions. By substituting this result into the optimization of the DPSO-HSS algorithm, the time complexity of DPSO-HSS can be calculated as follows where D represents the number of dimensions, p is the swarm population, q denotes the number of dimensions randomly selected from all D dimensions in (16), and t is the number of iterations. It can be seen that, compared with the general PSO algorithm, the improved algorithm has higher time complexity.
Simulation 2: Application in Planar Array Thinning with Constraints
The algorithms involved in the simulation include the proposed DPSO-HSS algorithm, RDPSO algorithm in [13], NPSOWM algorithm in [14], and MOPSO-CO algorithm in [4].
Consider a planar array consisting of 20 × 20 elements with equal spacing d = 0.5λ as the initial layout. u and v are both within the range [−1, 1] with a scanning step of 0.01. The main beam direction is (θ, ϕ) = (0°, 90°), that is, (u, v) = (0, 0). The filling rate is 50%, so there are 200 elements in the "ON" state. The population size NP = 100, iteration times T = 500, and dimensions D = 400 are the same for all algorithms used.
The convergence curve of PSLL using four algorithms for array thinning is shown in Figure 4. At the early stage of optimization, the DPSO-HSS algorithm does not have more impressive search efficiency than other algorithms, but it maintains good global searching ability with the guiding of comprehensive learning strategy and aims at maintaining the high diversity of the population and the correlation between the particles. At the later stage of optimization, the curve of the proposed algorithm shows the clear change process with local search strategy before reaching the optimal PSLL results. The trend of DPSO-HSS corresponds to the configured search strategy and meets the ideal expectation.

The elements distribution and beam pattern diagram of the optimized array of DPSO-HSS algorithm are shown in Figures 5 and 6, respectively. In Figure 5, the black triangles represent the array elements in the "ON" state, and the number of these elements is 200, which meets the filling rate of 50%. The four angular array elements in the array are all in the "ON" state because the array aperture must be unchanged. On the other hand, the direction of the main beam is (u, v) = (0, 0) with the normalized amplitude of 0 dB, and the PSLL is −18.32 dB. These results mean the result of the array thinning design is correct.
We can see the PSLL performance in detail in Figure 7, illustrating the novelty of the proposed DPSO-HSS algorithm, which plots the u-cut of the beam pattern diagram of a thinned array optimized by the four kinds of algorithms. The main lobe width of the beam pattern of each algorithm is identical. The PSLL of the DPSO-HSS algorithm is −18.32 dB, whereas those of RDPSO, NPSOWM, and MOPSO-CO are −17.15 dB, −17.42 dB, and −17.66 dB, respectively. Compared with the other three algorithms, the PSLL of DPSO-HSS is decreased by 6.82%, 5.17%, and 3.74%, respectively.
Simulation 4: Synthesis Results of Four Algorithms for Large Planar Thinned Arrays
The array thinning effect may be affected by both the array aperture and the array filling rate. Table 3 records the PSLL and 3 dB bandwidth of the radiation beam pattern of thinned arrays optimized by each algorithm under different array apertures and filling rates. Each result is the mean value of five runs under the same simulation conditions. The results of PSLL under different filling rates show that the filling rate does have a certain influence on the optimization effect of the thinned array, and the optimization effect is reduced if the number of "ON" elements is too large or too small. The PSLL is improved when the filling rate is fixed at 60% and the aperture is changed from 5λ to 10λ, which indicates that the aperture also has a certain influence on the performance of array thinning. As shown in Table 3, the 3 dB bandwidth of the main lobe is mainly affected by the aperture, and the larger the array aperture is, the smaller the 3 dB bandwidth is.
Conclusions
A novel optimization algorithm based on DPSO with hybrid search strategies is proposed to improve the performance of large planar array thinning. The proposed algorithm, named DPSO-HSS, utilizes a global learning strategy to improve the diversity of the population in the early stage of optimization. | 2022-10-12T16:25:49.655Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "8f660d4e286955e22082a80ee7df472e1d85a8ac",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/19/7656/pdf?version=1665309789",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3750f915081b7fb7da6ef577819cedb7e539b78",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246696194 | pes2o/s2orc | v3-fos-license | The effect of strain and pressure on the electron-phonon coupling and superconductivity in MgB 2 —Benchmark of theoretical methodologies and outlook for nanostructure design
Different theoretical methodologies are employed to investigate the effect of hydrostatic pressure and anisotropic stress and strain on the superconducting transition temperature ( T c ) of MgB 2 . This is done both by studying Kohn anomalies in the phonon dispersions alone and by explicit calculation of the electron – phonon coupling. It is found that increasing pressure suppresses T c in all cases, whereas isotropic and anisotropic strain enhances the superconductivity. In contrast to trialed epitaxial growth that is limited in the amount of achievable lattice strain, we propose a different path by co-deposition with ternary diborides that thermodynamically avoid mixing with MgB 2 . This is suggested to promote columnar growth that can introduce strain in all directions.
I. INTRODUCTION
Beginning with the discovery by Nagamatsu et al. in 2001 of MgB2 being a high-temperature superconductor with T_c = 39 K, 1,2 a flurry of both experimental and theoretical studies on different properties of this compound was initiated. MgB2 has an AlB2-type hexagonal crystal structure with the space group P6/mmm, with alternating layers of pure Mg and graphene-like B, as seen in Fig. 1, and with experimental lattice parameters a = 3.086 Å and c = 3.524 Å. 1 Each unit cell consists of one Mg atom and two B atoms. Before its discovery, the record-holding binary system was Nb3Ge with T_c = 23.2 K, making the sudden jump in T_c impressive.

Both the electronic and phonon band structures hold crucial information for understanding superconductivity. More exactly, the electronic states near the Fermi energy ϵ_F are vital, as they are the ones that couple to the underlying vibrating lattice. Due to the difference in electronegativity, Mg will donate electrons to the B layer, effectively making it isoelectronic to graphene. Consequently, the states at ϵ_F are predominantly of B character. The electronic band structure reveals two σ-bands from the valence band with p_x and p_y character and one π-band from the conduction band with p_z character. These make up the Fermi surface, which consists of two narrow coaxial cylindrical sheets in the Γ-A out-of-plane direction, from the p_x,y σ-bands, and two distinct 3D tubular networks along in-plane directions, arising from bonding and antibonding p_z π-bands; see, e.g., the work of Kortus et al. 3 for visualization.

Measurement of the phonon band structure by inelastic x-ray scattering 4 reveals Kohn anomalies 5 in the optical E_2g branch coming from in-plane boron modes. The location of the Kohn anomalies is determined by the in-plane and out-of-plane nesting that occurs on the Fermi surface. [5,6,9,10] However, more studies are needed that compare methods for calculating T_c based on the strength of the Kohn anomaly to calculations that explicitly treat the electron-phonon coupling (EPC), in particular for more than just one given case, i.e., in a material where the different methodological approaches can be studied as a function of an external condition. We suggest that pressure in MgB2 is such an excellent example.

Zhang and Zhang 16 studied the effect of negative pressure on T_c using semi-empirical models and found that it enhances T_c. On the contrary, Gerbi and Singh 17 found in their first-principles calculations that positive hydrostatic pressure increases T_c. That study is the only one showing such pressure behavior. Moreover, the effect of anisotropic strain on MgB2 superconductivity has been explored experimentally [18][19][20] and theoretically, [21][22][23] out of which only Ref. 22 treats EPC from first principles (for a monolayer). The general finding is that tensile strain boosts T_c in MgB2.

At ambient conditions, the theory of superconductivity in the Migdal-Éliashberg 24,25 formalism is known to perform well in estimating the superconducting transition in MgB2, in both isotropic and anisotropic approximations. 26 Additionally, this theory relates T_c to the phonon vibration frequency, which, in turn, is inversely proportional to the square root of the atomic mass. As such, one guiding principle is that materials with light atoms should have high T_c's. MgB2, having just one Mg atom and two light B atoms per unit cell, and no magnetism involved, is a perfect compound to use for methodological benchmarking.

In this work, we benchmark different approaches that predict T_c of MgB2 under hydrostatic pressure and anisotropic stress and strain. The first studied approach relies on phonon dispersions and the pressure-induced evolution of the Kohn anomaly on the optical E_2g branch. Phonons derived from both finite displacements and linear response methods are compared, and differences in the identified Kohn anomalies along Γ-q (q = K, M, H, L) between the methodologies are quantified.

Then, we explicitly calculate EPC in the formalism of Migdal-Éliashberg 24,25 as a function of pressure and use the finite phonon linewidth to identify exactly where in the phonon dispersion the coupling occurs. T_c is derived in both isotropic and anisotropic approximations and is compared to experimental values from the literature.

Lastly, the recent development of thin film deposition methods, the synthesis of diborides, and theoretical calculations on the mixing and clustering thermodynamics of diborides 27 open up for attempts to design nanostructures with MgB2 where both the a and c lattice parameters can be strained simultaneously. As such, we calculate the EPC and T_c over a 5 × 5 grid of individually stressed and strained a and c lattice parameters, with discrete steps of 2% and 7% in terms of the equilibrium a and c lattice parameters, respectively. In particular, we suggest that T_c can be boosted by introducing tensile lattice strain both along the c-axis and in the ab-plane, by co-deposition of MgB2 with a combination of YB2 and either ZrB2 or HfB2, as (Zr,Y)B2 and (Hf,Y)B2.
For VASP, the Projector-Augmented-Wave (PAW) method 43 is used to expand the electronic wave function in plane waves. The generalized gradient approximation (GGA) functional is used for calculating the exchange-correlation energies, as proposed by Perdew, Burke, and Ernzerhof (PBE96). 44 To ensure sufficient energy and force convergence, a plane-wave energy cutoff of 600 eV is used. For the unit cell, a k-point mesh of 17 × 17 × 17 is used when sampling the Brillouin zone in the Monkhorst-Pack scheme, 45 and a 5 × 5 × 5 mesh is used for 6 × 6 × 6 supercells in phonon calculations. The finite displacement (FD) method, as implemented in PHONOPY, 46 is used to calculate phonon frequencies and band structures. Atomic displacements of 0.01 Å from the equilibrium positions are performed for a symmetry-reduced set of displacements, using the Parlinski-Li-Kawazoe method. 47 For QE and EPW, EPC and superconducting properties are explicitly calculated from first principles using both linear response and maximally localized Wannier functions. The mutual formalism is that of the Migdal-Éliashberg theory. 24,25 Electrons are described with both norm-conserving (NC) [48][49][50] and Vanderbilt ultrasoft (US) 51 pseudopotentials (PP). A cutoff of 400 Ry is used for the charge density, and 55 and 80 Ry are used for the USPP and NCPP electronic wave functions, respectively. In both cases, the charge density is integrated on a Γ-centered 12 × 12 × 12 k-point mesh and a Gaussian smearing of 0.02 Ry is applied. First-order EP matrix elements are calculated using DFPT on a Γ-centered 6 × 6 × 6 q-point mesh. Electron-phonon Wannier interpolation was performed on 48 × 48 × 48 k- and q-meshes.
A brief description of central equations in the Migdal-Éliashberg 24,25 formalism of superconductivity is given below. If we let n and ν denote band indices of electrons and phonons, respectively, the electronic response upon the scattering of an electron from state nk to mk+q, while emitting or absorbing a phonon νq with frequency ω_νq, is described by the matrix element

g_mn,ν(k, q) = sqrt(1 / (2 M ω_νq)) ⟨ψ_mk+q | ∂V/∂u_νq | ψ_nk⟩,

where M is the reduced mass, sqrt(1 / (2 M ω_νq)) is the zero-point phonon amplitude, and ∂V/∂u_νq is the induced potential per unit displacement, with u_νq being the phonon normal coordinate. The matrix element g is central to electron-phonon theory. For instance, it is used to calculate the EPC strength associated with a specific phonon νq,

λ_νq = [1 / (N(ϵ_F) ω_νq)] (1 / Ω_BZ) Σ_{mn} ∫ dk |g_mn,ν(k, q)|² δ(ϵ_nk − ϵ_F) δ(ϵ_mk+q − ϵ_F),

where N(ϵ_F) is the electronic density of states at the Fermi energy and Ω_BZ is the volume of the Brillouin zone. The energy of electron band n with wavevector k is given by ϵ_nk, and so on. The delta functions restrict the electron scattering to the Fermi surface. From this, one can calculate the total EPC λ = Σ_νq w_q λ_νq (w_q being the BZ weights that normalize to 1), a quantity discussed in the results below. Furthermore, the phonon linewidth, corresponding to the imaginary part of the phonon self-energy, can easily be obtained from γ_νq = 2π N(ϵ_F) ω²_νq λ_νq. This quantity identifies which modes, and in which directions, the EP scattering is most amplified, leading to decreased phonon lifetimes.
A. Lattice spacings
The lattice parameters a and c at hydrostatic pressures ranging from −20 to 25 GPa, in steps of 5 GPa, have been derived using the VASP-PBE96, QE-US, and QE-NC approaches and are presented in Table I. Overall, the values are in excellent agreement, with VASP-PBE96 being slightly higher at pressures below −10 GPa. Experimental values are derived from a quadratic fit of measurements from the work of Bordet et al., 52 where pressures from ambient conditions up to 40 GPa were considered. Values in parentheses are extrapolated from the quadratic fit.
B. Kohn anomalies
There is a certain type of phonon anomaly named the Kohn anomaly, after Walter Kohn, who first brought them to light in his paper from 1959. 5 The message is that lattice vibrations of nuclei in metallic compounds are partially screened by the conduction electrons. This screening can change rapidly on certain surfaces in phonon q-space defined by |q + K| = 2k_F, where K is some reciprocal lattice vector and k_F is the Fermi momentum. Therefore, on these surfaces, the frequencies ω vary abruptly with q. The location of the anomalies, i.e., the location of these surfaces, is determined by the shape of the electronic Fermi surface. [9,10] The Kohn anomaly appears on the boron E_2g phonon branch related to in-plane B-B bond-stretching vibrations. As will be shown later in this work, we find Kohn anomalies along the Γ-q (q = K, M, H, L) directions and explicit evidence of strong EPC at their location. We also find strong EPC along Γ-A, but without a clear sign of a Kohn anomaly. Calandra et al. note the presence of a weak anomaly at q ≈ 0.8ΓA that other works have failed to capture. 6 This highlights the intricacy of these phonon dispersion features. The Kohn anomalies along Γ-q (q = K, M, H, L) are attributed to in-plane nesting on the Fermi surface, while along Γ-A they are due to out-of-plane nesting between two coaxial hole-like σ-band cylinders. 6
In the works of Alarco et al. 9 and Mackinnon et al., 10 the authors use the following equation that connects the anomaly depth δ to the thermal energy T_δ,

δ = (n N / Z) · (k_B T_δ / 2),     (3)

where n is the number of degrees of freedom per atom, N is the number of atoms per unit cell, Z is the number of formula units per unit cell, k_B is Boltzmann's constant, and k_B T_δ / 2 is the average thermal energy contribution per degree of freedom, as dictated by the equipartition theorem. For MgB2 with three atoms per unit cell, each with three degrees of freedom, Eq. (3) can be simply reduced to the following expression for the thermal energy:

k_B T_δ = 2δ / 9.     (4)

This quantity has been shown to correlate with the experimentally measured superconducting transition temperature T_c, not only for MgB2, but also for metal-substituted Mg_{1−x}M_xB2, where M = Al, Sc, and Ti. 9,10 This provides proof of concept that T_δ can be used to estimate T_c.
To further link the Kohn anomaly to superconductivity in MgB2, we can borrow the ratio 2Δ = 3.50 k_B T_c from the seminal paper by Bardeen et al. 53 Here, Δ denotes the superconducting gap size. Letting T_c → T_δ from the discussion above and combining with Eq. (4) gives the estimate Δ ≈ 0.39δ. As will soon be shown, our calculations average to δ = 14.12 meV at ambient conditions. Therefore, one could expect Δ ≈ 5.51 meV. Experimentally, the superconducting gap size Δ of the σ and π bands has been measured by Souma et al. 54 to 6.5 ± 0.5 and 1.5 ± 0.5 meV, respectively. The authors further note that the σ band is dominant in the superconductivity of MgB2 and that the π band is less important, having much weaker coupling to phonons. As such, the rough estimate of 5.51 meV is comparable to the 6.5 ± 0.5 meV of the σ-band.
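As a short check of the numbers quoted above, the estimate Δ ≈ 0.39δ follows directly from combining the BCS ratio with Eq. (4); this sketch uses only relations already given in the text, with k_B ≈ 0.0862 meV/K inserted for the numerical evaluation.

```latex
\begin{aligned}
2\Delta &= 3.50\,k_B T_c, \qquad k_B T_\delta = \tfrac{2}{9}\,\delta
\;\;\Rightarrow\;\; \Delta \simeq 1.75\,k_B T_\delta = 1.75\cdot\tfrac{2}{9}\,\delta \approx 0.39\,\delta, \\
\delta &= 14.12~\text{meV} \;\;\Rightarrow\;\;
T_\delta = \frac{2\delta}{9 k_B} \approx \frac{2\times 14.12}{9\times 0.0862}~\text{K} \approx 36~\text{K},
\qquad \Delta \approx 0.39\times 14.12~\text{meV} \approx 5.5~\text{meV}.
\end{aligned}
```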
In this work, as one of the approaches to predict T_c of stressed/strained MgB2, we now explore how the Kohn anomaly δ changes with hydrostatic pressure. Using both FD and DFPT approaches to calculate phonon dispersions, the Kohn anomalies identified along the four q-directions Γ-q (q = K, M, H, L) are studied as a function of pressure. Figure 2 summarizes the Kohn anomaly properties and their pressure dependence.

Figure 2(a) shows the differences in the phonon band structures of MgB2 at ambient conditions between the two calculation methods of FD and DFPT. All phonon branches are, for all practical purposes, identical except for the anomalous E_2g branch. This shows that numerical convergence has been achieved. The difference in the E_2g branch is mostly seen in the relative depth, here defined as d, which is measured from the bottom of the Kohn anomaly up to the value at Γ, as marked by the arrows on the right side of Γ. In FD, the phonons are derived from direct force calculations of all symmetrically inequivalent static displacements within a large supercell. In DFPT, one essentially calculates the linear response in the charge density in the unit cell, induced by the presence of a phonon with wave vector q, in order to find the phonon dispersions. The discrepancy could stem from the two methods not being equally sensitive to the intricate details of the Kohn anomalies.

The directional and pressure dependence of d is shown in Fig. 2(b). For DFPT, the values are essentially invariant with pressure, up until 25 GPa where the phonon dispersion curves get distorted. For FD, the anomaly is more responsive to pressure, and after 10 GPa it becomes increasingly shallow relative to Γ. For both cases, Γ-L is found to form the deepest anomalous kink.

Figure 2(c) displays the anomaly depth δ, which measures the vertical distance from the bottom of the Kohn anomaly up to its cusp, 9,10 along the different directions as a function of pressure. At ambient conditions, the average over all directions and between FD and DFPT turns out to be δ = 14.12 meV. It becomes evident that increasing the pressure suppresses the Mexican hat-esque feature. In fact, at 25 GPa, both FD and DFPT predict that the Kohn anomalies are fully destroyed by the pressure. This further explains why the description of d breaks down at high pressure. On the other hand, negative pressure, corresponding to tensile strain, effectively stretches the upper cusp (located at 80 meV at zero pressure) away from the bottom of the anomaly. This is beneficial for enhancing the superconducting transition temperature according to Eq. (4).

T_δ is then trivially calculated for each direction-dependent δ and is presented in Fig. 3. From the average over all directions and both methods, ⟨δ⟩ = 14.12 meV at ambient conditions, we find T_δ = 36.4 K. Individually, both methods are close to the experimental T_c = 39 K at p = 0 GPa; when averaging over all directions, FD overestimates by just 2.54 K and DFPT underestimates by 7.70 K. Both methods capture the same trend, with T_δ decreasing with pressure. The green (Γ-M) and black (Γ-K) curves, both corresponding to in-plane q-directions, are overall on top of each other for both FD and DFPT. Meanwhile, the diagonal directions Γ-H in red and Γ-L in blue differ slightly. In terms of the equilibrium volume V_0, the pressure of −20 GPa corresponds to the volume V = 1.185V_0, which increases T_δ to 77.39 (79.48) K according to FD (DFPT). On the opposite end, at 25 GPa (V = 0.873V_0), both methods predict that the Kohn anomalies vanish and thus T_δ is also completely suppressed. Finally, we conclude that, compared to calculations that explicitly treat electron-phonon coupling, using phonon dispersion curves alone to estimate the superconducting transition temperature T_c is computationally very cheap.
C. Explicit treatment of electron-phonon coupling
To aid in the visualization, γ_νq is presented as a Gaussian broadening of the phonon frequency ω_νq in the phonon band structure for all branches of equilibrium MgB2 in Fig. 4(a), all calculated using the QE software.

This reveals that the anomalous band in the 60-80 meV range along Γ-K, Γ-M, Γ-H, and Γ-L, originating from the B atoms, indeed has strong EPC and contributes to the superconducting properties. Γ-A of the same band is also found to have strong coupling, despite the Kohn anomaly noted by Calandra et al. 6 at q ≈ 0.8ΓA not being captured in our phonon calculations. Moreover, looking at the electronic band structure in Fig. 4(b) reveals three important boron bands highlighted in color: two σ-bands (blue and green) and one π-band (red). The two σ-bands with p_x,y character straddle the Γ-A line with negligible dispersion near ϵ_F. Furthermore, they cross ϵ_F parabolically along Γ-q (q = K, M, H, L), providing the majority of the contribution to N(ϵ_F) and the EPC. The features seen in Fig. 4 evidence the presence of Cooper pairs, formed from the coupling of the optical B-B bond-stretching vibrations in the 60-80 meV range (at ambient pressure) and the holes at the top of the σ-bands.
Another key quantity in the theory is the Migdal-Éliashberg spectral function, which in the isotropic approximation is given by

α²F(ω) = (1/2) Σ_νq w_q λ_νq ω_νq δ(ω − ω_νq),

where, more exactly, F(ω) is the phonon density of states and α denotes an average over all phonons of energy ω in the BZ. Partial F(ω) as a function of hydrostatic pressure from −20 to 25 GPa, in steps of 5 GPa, is presented in Fig. 5(a). Mg is shown by dashed lines and B by solid lines. As pressure is increased from −20 GPa (black) to 25 GPa (red), the Mg and B peaks in the density of states become increasingly separated. At 25 GPa, the phonon bands of Mg and B are almost completely separated, with minor overlap in the 40-60 meV range. On the other hand, at −20 GPa with high tensile strain, the B modes reach down even to the 20-30 meV region.
The overall shift of each individual peak is related to the atomic mass, where the lighter B atoms evidently are more sensitive to pressure effects. The full spectral function α²F(ω) is shown in Fig. 5(b) for the same pressure set. Here, the cumulative electron-phonon mass enhancement parameter λ(ω) is also shown. It is expressed as

λ(ω) = 2 ∫_0^ω dω' α²F(ω') / ω',     (6)

where in the upper integration limit ω → ∞ it equates to the total EPC λ mentioned above.

FIG. 3. Directional and pressure dependence of thermal energy T_δ that correlates with superconducting transition temperature T_c, calculated using FD and DFPT.

From our calculations, we find that the total coupling decreases with pressure. Using QE (EPW), at the lowest considered pressure of −20 GPa it goes from its maximum λ = 1.498 (1.259), to 0.649 (0.621) at ambient pressure, and down to 0.476 (0.459) at 25 GPa. Following from the discussion on the effect of pressure on phonon frequencies, and the fact that the integrand in Eq. (6) is weighted by ω'^{-1}, it can be seen as a general principle that tensile strain (negative pressure) lowers the energy of the phonon modes, resulting in enhanced EPC. Furthermore, as the figure shows, the spectral function α²F(ω) does not follow the overall shape of the phonon DOS F(ω), a feature not usually seen in superconductors; compare, e.g., the examples of high-pressure face-centered cubic (fcc) and body-centered tetragonal (bct) B, and high-pressure hexagonal close packed (hcp) Fe presented by Bose and Kortus. 55 The most striking feature of the spectral function is that the position of its peak aligns with the location of the anomalous B E_2g optical branch, dwarfing all contributions from other phonons, cf. the light-blue p = 0 curve in Fig. 5.

The superconducting transition temperature is estimated with the McMillan-Allen-Dynes formula,

T_c = (ω_log / 1.2) exp[ −1.04 (1 + λ) / (λ − μ*(1 + 0.62 λ)) ],     (7)

found to be accurate for known materials with λ < 1.5, [56][57][58] where ω_log is the logarithmically averaged phonon frequency and μ* is the Anderson-Morel Coulomb pseudopotential, 59 which is difficult to calculate from first principles, [57][58] introducing a slight uncertainty in the determination of T_c in this procedure, especially at moderate coupling strengths. For the purpose of this work, it is taken as a fitting parameter, where 0.16 is selected as the "standard choice." The results of this isotropic approximation are presented in Fig. 6.
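A small helper for evaluating the McMillan-Allen-Dynes expression quoted above is sketched below; it is not code from the article, and the logarithmically averaged phonon frequency omega_log must be supplied from a first-principles calculation (its value is not reported here).

```python
import math

def allen_dynes_tc(lam, omega_log_K, mu_star=0.16):
    """McMillan-Allen-Dynes estimate of T_c (same units as omega_log_K, e.g. kelvin).

    lam is the total electron-phonon coupling and omega_log_K the logarithmically
    averaged phonon frequency expressed as a temperature; both must come from a
    first-principles calculation and are not hard-coded here.
    """
    denom = lam - mu_star * (1.0 + 0.62 * lam)
    if denom <= 0.0:
        return 0.0  # the expression is not applicable when the exponent diverges
    return (omega_log_K / 1.2) * math.exp(-1.04 * (1.0 + lam) / denom)
```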
For comparison, the direction-averaged T_δ from both finite displacement and linear response phonons of Fig. 3 has been included, here marked by colored stars connected by dotted lines. Black symbols show experimental values taken from the literature 11-15 which, if all fitted to a simple linear equation, reveal that the transition temperature has an average slope of dT_c/dP = −0.98 K/GPa from 0 to 25 GPa. Kohn anomaly estimates using T_δ are shown to perform well at ambient conditions, but overestimate the slope with which the experimental values drop with pressure.

Attention is first brought to the comparison between the McMillan-Allen-Dynes T_c [Eq. (7)] as calculated using QE and EPW, both with the choice μ* = 0.16, shown by blue and red solid squares, respectively. At ambient pressure, the QE (EPW) estimate is 9.45 (8.54) K, with the difference coming from EPW having systematically lower λ than QE according to our calculations. Here, the results from NCPP are shown. A comparison between USPP and NCPP revealed virtually the same estimations, within 0.5 K from each other on average.

Since μ* can be freely varied in this framework, we used QE as a trial case and found that μ* = 0.01 allows for a good fit to the experimental value at ambient pressure, resulting in T_c = 38.7 K, shown by blue open squares. Varying μ* changes the absolute values of T_c, while the relative change with respect to the ambient pressure value, i.e., the pressure trend, is largely preserved. The fact that the standard choice of μ* severely underestimates T_c, unless the unconventionally small value of 0.01 is selected, is here taken as a sign that MgB2 is not very well described as an isotropic material. This is not surprising, as it has been well established herein and in the literature that the superconductivity is mainly governed by in-plane vibrations.

Instead, we go beyond the isotropic approximation and calculate the closing of the distribution of the anisotropic superconducting gap Δ_nk on the Fermi surface, obtained both from the real axis using Padé approximants 60 and from the imaginary axis for comparison. In both cases, the standard choice of μ* = 0.16 was employed. Our calculations show that both the real- and imaginary-axis approaches yield the same values. In Fig. 6, the red solid circles represent the results from calculations on the real axis.

Regardless of the theoretical approach, the overall trend of T_c decreasing with pressure is shared. There is, however, the work by Gerbi and Singh 17 that has also investigated the effect of hydrostatic pressure on the superconductivity of MgB2 using EPW. They controversially find that T_c increases with pressure already in the range 0-25 GPa covered in this work and that it, for instance, is enhanced to 95 K at a pressure of 650 GPa. It is an interesting outlier in the literature that is well worth noting.
D. Anisotropic stress and strain engineering of T c
Within the first year or two of the discovery of superconductivity in MgB2, the initial experiments that investigated the effect of hydrostatic pressure on T_c used anvils to achieve isotropically compressed conditions. As seen in Fig. 6, all experiments clearly indicate that compressive strain leads to a decrease in the superconducting transition temperature. A literature search reveals several experimental and theoretical efforts that look into using strain to tune T_c in MgB2, not only by compression, but also by anisotropic tensile strain.

The experimental work of Serquis et al., 18 and later the continuation by Liao et al., 19 found that 1.1% compressive strain due to the presence of 5% Mg vacancies reduces T_c by around 2 K. Pogrebnyakov et al. 20 synthesized MgB2 with 0.55% in-plane biaxial tensile strain achieved by epitaxial growth on SiC, which increased T_c by roughly 2 K, up to 41.8 K. It was confirmed by Raman scattering measurements that the increase was due to softening of the bond-stretching E_2g phonon mode, following the strain-induced lowering of the corresponding phonon frequency.
On the theoretical side, Zheng and Zhu 21 used a semi-empirical model with fitted parameters to study T_c as a function of the strained lattice. Their findings indicate that separately increasing both a and c leads to T_c enhancements, with the most significant route being increasing the volume with a fixed c/a ratio. Furthermore, they suggested several substrates that can provide in-plane biaxial tensile strain via lattice mismatch, some examples being SiC, Si_{1+x}C_{1−x}, AlN, GaN, and Al_xGa_{1−x}N. For instance, AlN provides 1% strain, compared to the 0.5% from SiC, and enhances T_c by 7%-8%. Bekaert et al. 22 used DFPT to study strained MgB2 monolayers and found that reducing the in-plane Mg-Mg distances with 4.5% biaxial compressive strain reduces T_c by 11 K, from the 20 K of the relaxed single monolayer, while 4.5% tensile strain boosts it up to 53 K. It is noted that the response to strain is much higher in MgB2 than in superconducting graphene. 22 Zhai et al. 23 studied how strain alters the covalency in transition metal diborides and discussed the implications for superconductivity. They note how stretching uniaxially along the c-axis can raise the energy of the σ-band and favorably reduce the frequency of the E_2g phonon. Here, it should be noted that biaxial strain induced by epitaxial growth has limitations in achieving increased T_c in MgB2. This is because an increased a- or c-parameter tends to lead to a decrease in the other, with a compensating effect on T_c. This is in addition to the effect of the loss of epitaxial coherency by misfit dislocations in thick layers.
Using mechanical strain to change the superconducting properties of a material has also been studied in other materials systems. For instance, Cheng et al. 61 performed DFPT calculations to probe EPC under strain engineering in AlB 2 . Similar to what is found for MgB 2 , they find uniaxial tensile strain along the c-axis to increase T c for AlB 2 , while the opposite is indicated for compressive strain. Furthermore, they find that straining is more effective than carrier doping in tuning superconductivity. 61 Bozovic et al. 62 measured La 2 CuO 4 under tensile strain on SrTiO 3 substrates and found that T c is brought up from 25 to 40 K. On LaSrAlO 4 substrates, providing compressive strain, it is enhanced to 51.5 K. They point out that the variation in T c in La 2 CuO 4 does not originate from strain alone, but is also caused by sensitivity to oxygen content. Li et al. 63 investigated buckled triangle borophene and β 12 borophene from first principles using DFPT and found that strain can be used as a tool to tune T c for both boron polymorphs. Herrera et al. 64 measured an increase in the transition temperature of SrTiO 3 attributed to tensile strain-induced modification of phonon modes. Lastly, Ghini et al. 65 found that superconductivity in single crystal FeSe is enhanced under compressive uniaxial strain.
The literature is in this sense rich in studies on strain engineering, but to the best of our knowledge, there exists no work with explicit treatment of EPC from first principles that treats both anisotropic compressive and tensile strain, or any combinations thereof, and their implications for the superconducting properties of MgB 2 . It is, therefore, our intention to investigate this matter further here.
In the work of Alling et al., 27 the configurational mixing and clustering tendencies of a large set of ternary metal diborides were studied. For the purpose of this study, we select the subset of Mg x M 1−x B 2 alloys with metals M that do not tend to form solid solutions with Mg, but cluster instead. These metals are M = V, Ti, Hf, Zr, and Y. In contrast to the aforementioned substrates like SiC and AlN, these all form AlB 2 -type diborides that are isostructural to MgB 2 . We propose that these compounds can potentially be used in cleverly designed epitaxial growth to create layered Mg x M 1−x B 2 structures that are under compressive or tensile strain along preferable crystal directions. For instance, if biaxial tensile or compressive strain in the ab-plane is desired, epitaxial growth along a common c-axis could allow for sandwiching of out-of-plane ordered pure Mg and a suitable M (with the flat boron sublayers being spectators in the clustering). This idea is similar to what has recently been experimentally proven to be possible in the work of Dahlqvist et al. 66 We also suggest co-deposition at intermediate temperature to allow some, but limited, surface diffusion. This allows for creating 3D intertwined coherent nanostructures or columns that are strained from all directions. Furthermore, in such mixtures, co-deposition can result in a metastable solid solution that upon annealing can undergo spinodal decomposition, resulting in a coherent nanostructure with strained MgB 2 domains.
In Fig. 7, the VASP-derived lattice parameters of isostructural AlB 2 -type diborides predicted to have clustering tendencies with respect to MgB 2 are presented. For comparison, AlB 2 itself is also included, although it has ordering tendencies. To sufficiently probe the lattice parameter space of MgB 2 , and to capture the values of the closest-lying clustering candidates, a 5 × 5 (a, c) grid was constructed. The chosen grid spacings are, in terms of the equilibrium lattice parameters, 2% in the a-direction and 7% in the c-direction. This is represented by the purple points in the figure, corresponding to different pressures. Out of the metal diborides, one can readily identify that HfB 2 and ZrB 2 are candidates for achieving biaxial in-plane tensile strain, with a 2%-4% a-parameter mismatch, while c remains matched within 1%. Using VB 2 or TiB 2 results in compression of both a and c. YB 2 could potentially be used to achieve high tensile strain, but the dynamical stability of MgB 2 at such a cell shape was not tested in this study. One can also consider co-deposition of MgB 2 with a combination of ZrB 2 and YB 2 or HfB 2 and YB 2 , as (Zr,Y)B 2 and (Hf,Y)B 2 tend to form solid solutions while they all tend to avoid mixing with MgB 2 . 27 Given the complexity of controlling the B to metal content in thin film deposition, such experiments would benefit from recent advances in combinatorial growth, allowing mapping of the compositional space in one single experiment.
FIG. 7. The 5 × 5 (a, c) lattice parameter grid with Δa = 2% and Δc = 7%, showing metal diboride candidates that have clustering tendencies with respect to MgB 2 . 27 AlB 2 has ordering tendencies and is included as a reference. Clever nanostructure design with TiB 2 and VB 2 supplies compressive biaxial strain, while HfB 2 and ZrB 2 mostly offer uniaxial in-plane tensile strain. YB 2 is not covered by the grid points but offers the highest biaxial tensile strain in both a and c.
Despite not being explicitly shown here, the phonon band structure was derived to check the dynamical stability of all 25 grid points of pure MgB 2 using both the finite displacement and linear response methods. Finite displacements predict that all strained structures are dynamically stable, while linear response indicates that stretching the c-axis by +14% results in imaginary frequencies at the Γ-point at 0 K. It is not investigated further in this study whether temperature-driven anharmonic effects can stabilize such features. 68,69 The available grid points are used to show qualitative predictions of what is possible with strain engineering.
In Fig. 8, the total EPC strength λ and the absolute change in T c relative to the equilibrium value, calculated using the McMillan-Allen-Dynes formula [Eq. (7)] with μ* = 0.16, are presented for the anisotropically strained MgB 2 . A green point and transparent plane are used to guide the eye to the value corresponding to unstrained MgB 2 . In both subfigures, it is clear that compressing either a or c alone, or both in combination as in the case of hydrostatic pressure, suppresses the EPC strength and consequently T c . The most minute change to the lattice captured by the grid is achieved by stepping ±2% along the a-direction. A 2% compression of a alone is here predicted to reduce T c by 2.6 K, while on the contrary a 2% expansion increases it by 5.5 K. Compressing c alone by 7% is predicted to reduce T c by 8.3 K, and a 7% expansion increases it by 14.0 K. Stretching a while compressing c is found to suppress superconductivity. For instance, stretching a by 2% while compressing c by 7% lowers T c by 9.3 K. This is in contrast to the reverse case, where compressing a by 2% and stretching c by 7% increases T c by 11.3 K. In line with the indication from the semi-empirical model of Zheng and Zhu, 21 the grid with explicit treatment of EPC shows how stretching either a or c alone, or both in unison, enhances the superconductivity in MgB 2 . Simultaneously stretching a by 2% and c by 7% enhances T c by 18.2 K. If MgB 2 can remain stable at conditions corresponding to an increase in a by 4% and c by 14%, T c is here predicted to be increased by 38.4 K, effectively doubling the value of unstrained MgB 2 .
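As a minimal illustration of how such T c estimates can be reproduced, the sketch below evaluates the McMillan-Allen-Dynes expression for a given coupling strength λ, logarithmic phonon frequency ω_log, and Coulomb pseudopotential μ*. It assumes Eq. (7) is the standard Allen-Dynes-modified McMillan form; the numerical inputs are illustrative placeholders, not values taken from Fig. 8.

```python
import math

def mcmillan_allen_dynes_tc(lam: float, omega_log_K: float, mu_star: float = 0.16) -> float:
    """Standard Allen-Dynes-modified McMillan formula.

    lam         -- total electron-phonon coupling strength (dimensionless)
    omega_log_K -- logarithmic average phonon frequency in kelvin
    mu_star     -- Coulomb pseudopotential
    """
    denom = lam - mu_star * (1.0 + 0.62 * lam)
    if denom <= 0.0:
        return 0.0  # the formula predicts no superconductivity in this regime
    return (omega_log_K / 1.2) * math.exp(-1.04 * (1.0 + lam) / denom)

# Assumed placeholder inputs, not results from this work; with a modest lambda the
# isotropic formula yields a Tc well below the experimental ~39 K, consistent with
# the underestimation discussed in the text.
print(round(mcmillan_allen_dynes_tc(lam=0.7, omega_log_K=700.0, mu_star=0.16), 1))
```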
IV. CONCLUSION
The effects on superconducting transition temperature in MgB 2 by hydrostatic pressure and anisotropic stress and strain conditions have been investigated.Different theoretical methodologies with varying complexity have been benchmarked.
A. Kohn anomalies
First, two strictly phonon-based methods, finite displacements and linear response, were employed to capture how the Kohn anomaly in the optical E 2g branch, corresponding to in-plane B-B bond stretching modes, evolves under hydrostatic pressure. The related Kohn anomaly depth δ was found to decrease under compressive strain and increase under tensile strain. The thermal energy T δ tied to the anomaly has in previous studies been directly linked to the superconducting transition temperature T c for MgB 2 and other AlB 2 -type diborides. We find that enhancement of T c strongly favors negative pressures and that sufficiently large compression of the lattice will fully suppress the Kohn anomaly. For MgB 2 , using phonon dispersion curves alone to estimate T c is computationally inexpensive and surprisingly accurate for its cost. Kohn anomalies are identified along Γ-q (q = K, M, H, L) directions and arise due to Fermi surface nesting. Averaged across all directions, the Kohn anomaly approach estimates T c from finite displacements (linear response) to be 41.54 (31.30) K at ambient pressure. This approach somewhat overestimates the magnitude of the slope dT c /dP. The results presented herein motivate applying this methodology to other materials systems with anomalous phonon branches and studying their pressure behavior.
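To make the phonon-only estimate concrete, the snippet below converts direction-resolved Kohn anomaly depths δ (in meV) into the associated thermal energy scale, assuming T δ is simply δ/k_B, which is one natural definition; the depths used are hypothetical inputs chosen to show the unit conversion and averaging, not the values computed in this work.

```python
K_B_MEV_PER_K = 8.617333e-2  # Boltzmann constant in meV/K

def t_delta_from_depth(depth_mev: float) -> float:
    """Thermal energy scale (K) of a Kohn anomaly of depth `depth_mev` (meV), T = delta / k_B."""
    return depth_mev / K_B_MEV_PER_K

# Hypothetical direction-resolved depths (meV); a 3.5 meV anomaly corresponds to ~40 K.
depths = {"G-K": 3.5, "G-M": 3.7, "G-H": 3.4, "G-L": 3.6}
t_deltas = {d: t_delta_from_depth(v) for d, v in depths.items()}
print({d: round(t, 1) for d, t in t_deltas.items()})
print("direction-averaged T_delta (K):", round(sum(t_deltas.values()) / len(t_deltas), 1))
```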
B. Explicit treatment of electron-phonon coupling
The calculations following the Migdal-Éliashberg formalism of EPC are computationally more expensive, but allow for explicit treatment of superconducting properties. Visualizing the phonon dispersion ω νq broadened by the phonon linewidth γ νq reveals explicit evidence of EPC on the E 2g branch in the range 60-80 meV (at ambient pressure), along Γ-q (q = A, K, M, H, L). These locations correspond to where both σ-bands graze the Fermi energy. Calculation of the Éliashberg spectral function α 2 F(ω) as a function of pressure captures how the location of the anomalous branch moves during lattice strain and how the total EPC varies. Increasing pressure is found to stiffen the anomalous phonon frequency and decrease the total electron-phonon coupling, effectively suppressing the superconductivity in MgB 2 . The superconducting transition temperature estimated using the isotropic McMillan-Allen-Dynes formula is found to underestimate T c by over 20 K when a typical value of the pseudopotential μ* = 0.16 is used. We show that μ* = 0.01 boosts T c to 38.7 K and that the pressure trend then closely follows that of the majority of experimental reference values. Furthermore, we calculate T c from the closing of the superconducting gap Δ nk with μ* = 0.16 as a function of pressure and find a value of 37 K at ambient pressure. The resulting pressure trend is in excellent agreement with experiments in the literature, in particular the work of Tissen et al. 13
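For readers who wish to reproduce the coupling numbers from a tabulated spectral function, the sketch below integrates α 2 F(ω) to obtain the cumulative λ(ω) and the logarithmic frequency ω_log using the standard isotropic definitions; the spectral function here is a synthetic placeholder, not the one computed in this work.

```python
import numpy as np

def coupling_from_a2f(omega_meV: np.ndarray, a2f: np.ndarray):
    """Return (lambda, omega_log in meV) from a tabulated Eliashberg function.

    lambda    = 2 * integral of a2F(w)/w dw
    omega_log = exp[ (2/lambda) * integral of ln(w) * a2F(w)/w dw ]
    """
    w = np.asarray(omega_meV, dtype=float)
    f = np.asarray(a2f, dtype=float)
    mask = w > 0.0  # avoid dividing by zero at the origin
    lam = 2.0 * np.trapz(f[mask] / w[mask], w[mask])
    wlog = np.exp((2.0 / lam) * np.trapz(np.log(w[mask]) * f[mask] / w[mask], w[mask]))
    return lam, wlog

# Synthetic two-peak spectral function purely for demonstration.
w = np.linspace(0.1, 100.0, 2000)  # meV
a2f = 0.5 * np.exp(-((w - 70.0) / 8.0) ** 2) + 0.2 * np.exp(-((w - 30.0) / 6.0) ** 2)
lam, wlog = coupling_from_a2f(w, a2f)
print(f"lambda = {lam:.2f}, omega_log = {wlog:.1f} meV")
```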
C. Anisotropic stress and strain engineering of T c
A literature review reveals a plethora of studies on the topic of using stress and strain engineering to enhance T c in MgB 2 and a handful of other selected compounds, e.g., AlB 2 , La 2 CuO 4 , borophenes, SrTiO 3 , and FeSe. Whether it is compressive or tensile strain that enhances T c is system dependent. The metal diborides MB 2 (M = V, Ti, Hf, Zr, Y) are identified as potential candidates for clever nanostructure design as they are predicted to have clustering tendencies with respect to MgB 2 . A 5 × 5 (a, c) grid is constructed to probe the lattice parameter space in the vicinity of the MB 2 candidates. The total EPC and the change in T c relative to the equilibrium value are calculated for all 25 points. The results show that biaxial in-plane tensile strain, uniaxial out-of-plane tensile strain, or a combination of both enhances T c by a considerable amount. Furthermore, both HfB 2 and ZrB 2 are identified as candidates for achieving tensile strain in cleverly designed 3D intertwined coherent nanostructures or columns that are strained in both a and c, something not achieved by planar epitaxy alone. YB 2 would allow for the highest level of tensile strain, but the dynamical stability of MgB 2 was not investigated at those lattice parameters. A detailed study of the effects of quaternary diboride alloys and their nanostructures is left for future investigations.
FIG. 2 .
FIG. 2. (a) Magnification of the phonon dispersion of MgB 2 at zero pressure around Γ to highlight differences between FD (solid lines) and DFPT (dashed lines). The E 2g branch is shown in blue, with the FD (DFPT) anomaly depth δ and d relative to Γ marked in red (green). (b) Pressure dependence of the relative depth d for FD and DFPT. (c) Pressure dependence of the anomaly depth δ for FD and DFPT.
FIG. 4 .
FIG. 5. Electron-phonon properties of MgB 2 derived using QE. (a) Phonon density of states F(ω). Dashed lines show contributions from Mg and solid lines from B atoms. (b) Solid lines show the Éliashberg spectral function α 2 F(ω) at different pressures, ranging from −20 to 25 GPa in steps of 5 GPa. Dashed lines correspond to the cumulative mass enhancement parameter λ(ω), which, once it becomes a flat line as the spectral function dies out, reveals the total electron-phonon coupling value λ.
FIG. 6 .
FIG. 6. Superconducting transition temperature T c vs pressure for different methods. Blue (red) boxes show McMillan-Allen-Dynes T c [Eq. (7)] calculated using QE and EPW, with μ* shown in the legends. Red circles show the pressure-dependent closing of the anisotropic superconducting gap Δ nk on the Fermi surface, obtained from the real axis using Padé approximants and μ* = 0.16. Colored stars correspond to direction-averaged T δ values from finite displacements and linear response phonons. Black symbols correspond to experimental values. 11-15
See Ref. 3 for visualization of the Fermi surface of MgB 2 .
TABLE I.
Lattice parameters a and c (given in Å) as a function of pressure (given in GPa) for different pseudopotentials compared to experimental values. 52 | 2022-02-10T16:08:31.514Z | 2022-02-14T00:00:00.000 | {
"year": 2022,
"sha1": "ce538cf5236cfc2970bfd68077e51f8efab9f4a2",
"oa_license": "CCBY",
"oa_url": "https://aip.scitation.org/doi/pdf/10.1063/5.0078765",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "abd85d065cd6770bdad2afd93b4c5d3f7fed8cab",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
1997224 | pes2o/s2orc | v3-fos-license | Biomarkers for Presymptomatic Doxorubicin-Induced Cardiotoxicity in Breast Cancer Patients
Cardiotoxicity of doxorubicin (DOX) remains an important health concern. DOX cardiotoxicity is cumulative-dose-dependent and begins with the first dose of chemotherapy. No biomarker for presymptomatic detection of DOX cardiotoxicity has been validated. Our hypothesis is that peripheral blood cell (PBC) gene expression induced by the early doses of DOX-based chemotherapy could identify potential biomarkers for presymptomatic cardiotoxicity in cancer patients. PBC gene expression of 33 breast cancer patients was assessed before and after the first cycle of DOX-based chemotherapy. Cardiac function was evaluated before the start of chemotherapy and at its completion. Differentially expressed genes (DEG) of patients who developed DOX-associated cardiotoxicity after the completion of chemotherapy were compared with DEG of patients who did not. The Ingenuity database was used for functional analysis of the DEG. Sixty-seven DEG (P<0.05) were identified in PBC of patients with DOX cardiotoxicity. Most of the DEG encode proteins secreted by activated neutrophils. The functional analysis of the DEG showed enrichment for immune and inflammatory response. This is the first study to identify the PBC transcriptome signature associated with a single dose of DOX-based chemotherapy in cancer patients. We have shown that the PBC transcriptome signature associated with one dose of DOX chemotherapy in breast cancer can predict later impairment of cardiac function. This finding may be of value in identifying patients at high or low risk for the development of DOX cardiotoxicity during the initial doses of chemotherapy and thus avoiding the accumulating toxic effects from the subsequent doses during treatment.
Introduction
Doxorubicin (DOX), a commonly used anthracycline antibiotic for treatment of various malignancies, may cause unpredictable cardiotoxicity [1]. DOX cardiotoxicity is cumulative-dose-dependent and begins with the first dose, suggesting that assessment of the cardiac function in patients at early doses of chemotherapy can avoid permanent cardiac damage [2]. According to the American College of Cardiology guidelines, patients receiving chemotherapy are at increased risk of developing cardiac dysfunction [3].
Evidence indicates that susceptibility to DOX cardiotoxicity is largely individual with some patients developing cardiomyopathy at doses of 200-400 mg/m 2 [3], and others tolerating >1000 mg/m 2 [4], suggesting the presence of a genetic predisposition.
Serial measurement of the heart's left ventricular ejection fraction (LVEF) is commonly used for cardiac monitoring during anthracycline treatment [5], although the prognostic value of LVEF appears to be controversial [6]. In some studies, cardiotoxicity was defined as an LVEF decrease by an absolute 10% and/or to below 55% [7]; in others, cardiotoxicity was defined as a decrease below 45% [8]. A serious disadvantage of this test is the exposure to radioactivity along with the low predictability of pre-symptomatic cardiac damage [9]. Blood cardiac biomarkers, such as cardiac troponins and B-type natriuretic peptide (BNP), have been used in the diagnostics of heart failure [10], but other diseases have also been associated with increased troponin release (e.g., acute pulmonary embolism [11] and end-stage renal disease [12]) and/or BNP (e.g., end-stage renal disease [13]). Several studies failed to detect any correlation between concentrations of troponin and/or BNP and DOX-induced cardiotoxicity [14,15].
Our previous study showed a high similarity between the gene expression of heart and peripheral blood cells (PBCs) in a rat model of DOX-cardiotoxicity [16], suggesting that PBC can be used as a surrogate tissue for assessing biomarkers of DOX cardiotoxicity. We hypothesize that PBC gene expression induced by the early doses of DOX-based chemotherapy could identify potential biomarkers for presymptomatic cardiotoxicity in cancer patients.
Study subjects and blood samples
Fifty-five women treated for breast cancer with DOX-based chemotherapy at the University of Arkansas for Medical Sciences were initially enrolled in an Institutional Review Board-approved protocol with written informed consent for each patient. All patients were treated with a predefined protocol which included a combination of DOX (Adriamycin, 60 mg/m 2 ) with cyclophosphamide (600 mg/m 2 ). However, 22 of these patients decided to drop out of the study for various reasons. We were able to obtain RNAs and data about cardiac function for 33 subjects.
Blood samples were collected prior to chemotherapy and after the first cycle of chemotherapy. PBCs were isolated from EDTA anti-coagulated blood using standard Ficoll-Paque Plus gradient centrifugation (density 1.073 g/mL) according to the instructions of the manufacturer (GE Healthcare, USA). Briefly, EDTA anti-coagulated blood, diluted with an equal volume of phosphate-buffered saline (PBS), was layered over the Ficoll-Paque Plus and was centrifuged at 400 g for 30 min at 18°C-20°C with the brake off. After removing the upper layer containing plasma and platelets, the layer of peripheral blood mononuclear cells (PBMCs) was isolated and stored at -80°C until further use. Total RNA was extracted from PBC using RNeasy columns (Qiagen; Valencia, CA) and samples with an RNA integrity number (RIN) score >7 were used for expression analysis.
Ethics Statement
This study was carried out in accordance with the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the ethics committee of the University of Arkansas for Medical Sciences. All subjects signed an IRB-approved informed consent in which they were informed about the use of their blood samples and medical records for research purposes.
Assessment of LVEF as a measure of cardiac function
Cardiac function of the patients was assessed by a multigated acquisition (MUGA) scan before the start of DOX-treatment and at its completion. A decline of LVEF by >10% or below 50% was considered abnormal [17].
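The abnormality criterion above is simple enough to express directly in code; the sketch below is a minimal illustration of that rule, assuming LVEF values are given as percentages and that the ">10%" decline refers to an absolute (percentage-point) drop, as described in the text.

```python
def is_abnormal_lvef(baseline_pct: float, post_treatment_pct: float) -> bool:
    """Flag DOX-associated cardiac dysfunction per the criterion used in this study:
    a decline of LVEF by more than 10 percentage points, or a final LVEF below 50%."""
    decline = baseline_pct - post_treatment_pct
    return decline > 10.0 or post_treatment_pct < 50.0

# Example: a drop from 62% to 48% is abnormal on both counts.
print(is_abnormal_lvef(62.0, 48.0))  # True
```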
Microarray gene expression and data analysis
The gene expression screen was performed using HumanHT-12 v4 Expression BeadChip array (Illumina, San Diego, California). Raw data were further analyzed using Illumina GenomeStudio software.
PBC gene expression data were log2-transformed, and gene transcripts with average log2 intensities >7 were considered to be expressed. We first compared baseline levels of gene expression between the two groups of patients, and then examined expression changes from baseline to after the first cycle of chemotherapy in each group of patients. The group-specific means for each group ("abnormal MUGA scan" and "normal MUGA scan") were analyzed via repeated-measures ANOVA for expression changes after DOX. Genes with a p-value <0.05 were considered differentially expressed. To adjust for multiple testing, the Benjamini-Hochberg false discovery rate (FDR) was used [18]. Given our relatively small sample size, we chose to use an FDR-corrected significance threshold of P ≤ 0.1, which is commonly used in hypothesis-generating, discovery-driven gene expression studies [19].
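As an illustration of the multiple-testing adjustment described above, the sketch below implements the standard Benjamini-Hochberg procedure for a vector of raw p-values; the p-values shown are made-up examples, not data from this study.

```python
import numpy as np

def benjamini_hochberg(p_values) -> np.ndarray:
    """Return Benjamini-Hochberg adjusted p-values (FDR q-values)."""
    p = np.asarray(p_values, dtype=float)
    n = p.size
    order = np.argsort(p)                          # ascending raw p-values
    ranked = p[order] * n / np.arange(1, n + 1)    # p * n / rank
    # Enforce monotonicity from the largest rank downward, then cap at 1.
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.minimum(adjusted, 1.0)
    out = np.empty_like(adjusted)
    out[order] = adjusted
    return out

raw_p = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60]
q = benjamini_hochberg(raw_p)
significant = q <= 0.1  # the FDR threshold used in this study
print(np.round(q, 3), significant)
```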
Functional annotation
Functional network analysis was performed using the Ingenuity Pathways Analysis System (http://www.ingenuity.com), which identifies the most significant biological functions in a dataset based on the causal relationships previously reported in the literature.
QRT-PCR Validation
Quantitative real-time RT-PCR was used for evaluation and confirmation of the gene expression data. Total RNAs were reverse transcribed and cDNAs were amplified using the NuGEN Ovation RNA Amplification System V2 (NuGEN Technologies, San Carlos, CA). QPCR was performed using TaqMan Universal Fast PCR master mix and specific primers for DEFA3, DEFA4, ELANE, ARG1, HP, CEACAM8, and 18S rRNA (Applied Biosystems, Foster City, CA). Data were analyzed using the 2^-ΔΔCt method. The Ct values of both the control and the samples of interest were normalized to 18S. DeltaCt was calculated as deltaCt(gene) = Ct(18S) minus Ct(gene). The DOX response was calculated as deltaCt "After DOX" minus deltaCt "Before DOX", and is expressed in ddCt (delta-delta-Ct) units.
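A minimal sketch of the ΔΔCt bookkeeping described above is given below, following the sign convention used in this study (deltaCt = Ct(18S) − Ct(gene), so a positive ddCt indicates upregulation after DOX); the Ct values are invented for illustration only.

```python
def delta_ct(ct_18s: float, ct_gene: float) -> float:
    """deltaCt convention used here: reference (18S) minus gene of interest."""
    return ct_18s - ct_gene

def dox_response_ddct(ct_18s_before: float, ct_gene_before: float,
                      ct_18s_after: float, ct_gene_after: float) -> float:
    """ddCt = deltaCt(after DOX) - deltaCt(before DOX)."""
    return delta_ct(ct_18s_after, ct_gene_after) - delta_ct(ct_18s_before, ct_gene_before)

# Hypothetical Ct values for one gene in one patient.
ddct = dox_response_ddct(ct_18s_before=12.0, ct_gene_before=27.5,
                         ct_18s_after=12.1, ct_gene_after=25.0)
fold_change = 2.0 ** ddct  # >1 means upregulation after the first DOX cycle
print(round(ddct, 2), round(fold_change, 2))
```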
Characteristics of study subjects
We have analyzed and compared PBC gene expression associated with one dose of DOX-based chemotherapy of 8 breast cancer patients who developed abnormal LVEF after the completion of chemotherapy and 25 breast cancer patients who did not. Patients' characteristics are provided in Table 1.
Gene expression profile associated with a single dose of DOX-based chemotherapy
The volcano plots in Fig 1 show the probability that a gene is differentially expressed, on average, 'before vs. after' one dose of DOX-based chemotherapy in all women (1A) and in the groups with normal (1B) and abnormal LVEF (1C). The analysis of the gene expression of PBC indicated that a single dose of DOX induced significant changes in the expression of 235 genes (fold change >1.5, FDR<0.1) (Table 2). There were no significant differences between the gene expression of the two groups of patients (abnormal and normal LVEF) at baseline (P>0.05, FDR>0.1).
We have also compared the gene expression of the 33 patients divided into 3 age groups, each comprising 11 patients: 18-47 years (youngest), 48-55 years (mid-age) and 56-99 years (oldest).
Gene expression profile in patients with abnormal decline of LVEF
The analysis of the expression profiling of PBC after one dose of DOX-based chemotherapy in women who developed abnormal LVEF after a full course of chemotherapy found 80 transcripts that differed >1.5-fold (p<0.05, FDR<0.1) from baseline, 13 of which were shared with women with normal LVEF. Table 4 shows the unique DEG in patients with abnormal LVEF. Downregulation of TCL1A and FCRL was reported in breast cancer patients treated with several doses of DOX-based chemotherapy who developed cardiomyopathy [20]. Downregulation of CXCR5, which encodes a chemokine receptor expressed on B-cells, could be explained by the depletion of B-cells in cancer patients treated with DOX-based chemotherapy [21,22]. Lower expression of CD72 was detected in patients with systemic lupus erythematosus (SLE), and it correlated inversely with SLE disease activity [23].
The top upregulated DEG in the "abnormal" dataset were genes encoding the alpha-defensins DEFA1-4 (HNP-1 to -4), which are secreted by activated neutrophils and are involved in the innate immune response [24]. Several other genes encoding proteins secreted by activated neutrophils and associated with inflammation were also significantly upregulated, such as BPI (bactericidal permeability increasing protein), ELANE (neutrophil elastase) and CTSG (neutrophil cathepsin) [25,26]. ARG1 [27], which encodes arginase, and HP, which encodes haptoglobin [28], are involved in a variety of inflammatory diseases. The protein encoded by TNFAIP6 (tumor necrosis factor, alpha-induced protein 6) was found to be increased in the synovial fluid of patients with osteoarthritis and rheumatoid arthritis [29].
QRT-PCR Validation
The results (Table 5) from QRT-PCR confirmed the upregulation of the selected genes in patients with abnormal LVEF in comparison with the patients with normal LVEF.
Discussion
DOX-based chemotherapy has greatly increased the number of long-term cancer survivors but has also led to an increasing number of patients experiencing DOX-induced cardiotoxicity [30]. Because there are no clinical methods for early detection of subclinical DOX cardiotoxicity, attempts to minimize cardiotoxicity include empiric DOX dose limitation or modification by risk factors, which pose a risk of premature discontinuation of effective anthracycline therapy. In addition, because of a wide individual variability in toxic anthracycline doses [31,32], cardiotoxicity may occur at unexpectedly low doses. This is the first study to identify the PBC transcriptome signature associated with a single dose of DOX-based chemotherapy in cancer patients. We have identified a transcriptome signature associated with one dose of DOX-based chemotherapy which distinguished patients developing abnormal cardiac function after a full course of chemotherapy from those who maintained normal cardiac function. In addition, we have identified a gene expression profile associated with a single dose of DOX-based chemotherapy which distinguished older from younger patients. As PBC are a subset of white blood cells, it is not surprising that the immune system was identified as the first and most affected responder to the systemic stress of DOX-based chemotherapy. It has been found that hyper-activated innate immune responses, including cytokine production, augmentation of natural killer (NK) cell activity [33], stimulation of cytotoxic T-lymphocyte (CTL) responses and augmentation of macrophage differentiation [34], could contribute to the progression of congestive heart failure. The most significantly upregulated genes in the group of patients with abnormal LVEF decline encode proteins secreted by activated neutrophils, such as alpha-defensins, cathelicidins, arginase, cathepsin G, elastase and haptoglobin. Neutrophils provide the first line of innate immunity and are the major effectors of inflammation associated with cardiovascular diseases [35,36]. Several reports suggested the prognostic value of alpha-defensins [37], cathepsin G [38] and arginase [39] in heart failure. Our results indicate that there is a correlation between the severity of DOX-associated cardiotoxicity and the levels of activated neutrophils in the PBMC fraction of peripheral blood of breast cancer patients. The presence of an abnormal subset of low-density neutrophils in PBMC preparations has been reported in a range of conditions such as systemic lupus erythematosus (SLE), rheumatoid arthritis (RA) and acute rheumatic fever [40,41], cancer [42-45], vasculitis [46] and asthma [47], and their presence was correlated with disease activity. Several distinct features were identified in the low-density neutrophils, such as different nuclear morphology, enhanced capacity to synthesize IFN I upon stimulation, higher expression of TNF-alpha, IL-6 and IL-8, and enhanced capacity to form neutrophil extracellular traps (NETs) [48,49]. NETs are characterized as chromatin fibers associated with granular proteins that are released into the extracellular space in order to immobilize and kill invading microbes during a process of cell death termed "NETosis" [50,51]. In addition to their antimicrobial role, recent evidence suggests that NETs can induce endothelial damage [52,53].
Denny et al. [54] characterized the phenotype of the low-density neutrophil subset in SLE patients and found that the low-density neutrophils displayed an impairment in phagocytic potential, had a proinflammatory phenotype and induced vascular damage, indicating that they contribute to the accelerated atherosclerosis and cardiovascular disorders observed in SLE patients. Because age-associated changes in DOX-induced gene expression have not been reported, we compared the gene expression of the 33 patients divided into 3 age groups, 18-47 years (youngest), 48-55 years (mid-age), and 56-99 years (oldest). The results showed that a single dose of DOX-based chemotherapy resulted in 66 DEG in the oldest group of patients in comparison with the other two age groups. The functional analysis of DEG in PBC of the subjects with abnormal cardiac function identified several linked pathways enriched for genes involved in inflammation and immunity and interconnected with NFKB, p38, ERK1/2, AKT, IFN-gamma, TGFB, ARG1 and IL-4, which were associated with ischemia [55], cardiac hypertrophy [56] and chronic heart failure [57]. Consistent with these reports, we found an overlapping enrichment of molecules for chronic heart failure.

Fig 2. Ingenuity biological function analysis of DEG associated with 1 dose of DOX-based chemotherapy in patients with abnormal decline of LVEF at completion of chemotherapy. The most enriched "biological functions" of this dataset identified using Ingenuity software were "cell-to-cell signaling" (p-value = 1.40E-07-3.2E-02), "cellular movement" (p-value = 2.23E-04-2.02E-02) and "cellular development" (p-value = 2.85E-04-2.89E-03). The most represented "diseases and disorders" were "connective tissue disorders" (p-value = 2.53E-12-2.57E-02), "inflammatory diseases" (p-value = 2.53E-12-3.13E-02), "skeletal and muscular disorders" (p-value = 2.53E-12-1.23E-02) and "immunological diseases" (p-value = 5.59E-12-3.12E-02).

In conclusion, the results from this study show for the first time that: 1) the PBC transcriptome signature associated with early doses of DOX-based chemotherapy has the potential to predict later impairment of cardiac function and can be used as a surrogate marker for DOX-induced cardiotoxicity; 2) individual sensitivity to a single low dose of DOX is associated with differential expression of several genes implicated in inflammatory response and immune trafficking; 3) there is an overlapping but distinctive age-related pattern of gene expression associated with a single dose of DOX-based chemotherapy.

Table 5. Results from QPCR of the target genes in breast cancer patients treated with DOX-based chemotherapy who developed an abnormal decline in LVEF after the completion of chemotherapy.
(Table columns: Genes; Response differences, ΔΔCt units; 95% lower confidence limit.) This finding may be of value in identifying patients at high or low risk for the development of DOX cardiotoxicity during the initial doses of chemotherapy and thus to avoid the accumulating toxic effects from the subsequent doses during treatment. | 2018-03-02T18:08:28.193Z | 2016-08-04T00:00:00.000 | {
"year": 2016,
"sha1": "c67bc0b5f1273b8251ecf1ada2946370eb9b10e7",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0160224&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c67bc0b5f1273b8251ecf1ada2946370eb9b10e7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7255520 | pes2o/s2orc | v3-fos-license | Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages
This study aimed to determine the correlation between the volumetric parameters derived from the images of the second, third, and fourth cervical vertebrae by using cone beam computed tomography with skeletal maturation stages and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained the estimation of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebrae bodies from 102 Japanese patients (54 women and 48 men, 5–18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanation power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application as it enables a simple and quantitative analysis to evaluate skeletal maturation level.
Introduction
Among many skeletal maturity indicators, hand-wrist radiographic analysis has been widely used [1-4]. However, this method of bone age determination requires additional exposure to radiation to obtain the data. Alternatively, skeletal age determination based on the cervical vertebrae from cephalometric films is used, as this method does not require additional exposure to radiation [5,6]. The cervical vertebral maturation (CVM) method classifies the stages of skeletal development upon visual observation of the cervical vertebrae from the sagittal view. Baccetti et al. [6] modified and presented a simple method of determining skeletal age from six developmental stages of the cervical vertebrae (C2-4); each developmental stage is a biological indicator of physical growth, especially the degree of mandibular growth. Unfortunately, the CVM method also presents several problems, including inconsistent changes in skeletal development during the growth period, a high level of intra- and inter-observer error during the tracing activities of lateral cephalometric radiography, and inaccurate measurements of bone mass [7].
To address the limitations of the methods mentioned above, Chen et al. [8] developed a quantitative analytical method from the sagittal view and presented an objective indicator replacing lateral cephalometric radiography. Yang et al. [9] also performed quantitative shape analysis from the axial view of the cervical vertebrae. That study concluded that shape analysis enhances the ability to explain bone maturation as opposed to relying only on chronological age. The axial shape of the cervical vertebrae can serve as a biological indicator of ossification. These studies reported regression models for skeletal age estimation from the lateral and axial images of the cervical vertebral shape, respectively.
However, the growth of the human body is not two-dimensional (2D); a 2D approach cannot fully describe the elements and factors involved in the growth of the human body. The growth of the human body occurs in three dimensions and in sequence. Growth in width occurs first, followed by growth in the anteroposterior dimension and, lastly, growth in height [10,11]. The growth of the maxillary skeletal width concludes before the peak pubertal growth; the growth in the anteroposterior dimension and height continues throughout puberty [12]. Therefore, a three-dimensional (3D) approach is required to evaluate cervical vertebrae maturation. Previous studies used cone beam computed tomography (CBCT) in limited 2D applications, even though CBCT enables the observation of 3D multiplanar images of the head and neck area including the vertebrae. Thus, there is still a need to study 3D CVM assessment.
Therefore, the objective of this study is to determine the correlation of the volume parameter derived from 3D images of the second, third, and fourth cervical vertebrae using CBCT with skeletal maturation stage and to propose a new formula for predicting skeletal maturation using multiple regression analysis.
Subjects.
This study involved a retrospective review of available data. The sample population consisted of 54 women and 48 men (total: 102, range: 5-18 years, and mean age: 9.9 years) as determined by the inclusion criteria established by the Department of Orthodontics, Pusan National University Hospital, and based on the available hand-wrist radiographs and CBCT images. Data were obtained for orthodontic treatment (impacted teeth and craniofacial and dental trauma cases) and skeletal maturation evaluation. Investigators excluded patients diagnosed with a congenital or postnatal malformation or syndrome, growth impairment, mental retardation, or preexisting conditions with potential impact on vertebral or hand-wrist development from the sample population (Table 1).
Skeletal Maturation Level Assessments.
To assess skeletal maturational stages from hand-wrist radiographic films, investigators used the Sempé maturation level (SML, 0-999) method [13]. Investigators obtained the hand-wrist radiographic films from a Rotanode DRX-3724HD (Toshiba Medical Systems, Tokyo, Japan) using exposure parameters of 42 kVp, 100 mA, and 0.032 seconds. The SML method quantifies the skeletal maturation level into an index ranging from 0 at birth to 999 at the stage when growth is complete, on a continuous scale, and provides a more refined ossification level. One investigator (CYK) assessed SML for all patients. To validate intra-investigator error, all samples were reassessed twice at two-week intervals.
Volume Measurements.
For the volume measurements, investigators used the CBCT dataset. Investigators recorded CBCT scans (CB MercuRay; Hitachi Medical, Tokyo, Japan) with the subject in an upright position and the teeth in maximum intercuspation. The Frankfurt horizontal (FH) plane was parallel to the floor. Investigators used CBCT settings of 100 kVp tube voltage, 10 mA tube current, 9.6 seconds scan time, 192.5 mm diameter spherical field of view (FOV), and 0.376 mm voxel size. Investigators reconstructed the scans with the 3D image software OnDemand3D (Cybermed Co., Seoul, Korea) using the same gray-scale condition (window width 4000, window level 1000). Vertebral bodies of the second, third, and fourth cervical vertebrae were segmented and built into separate 3D images.
Investigators followed these segmentation procedures. First, the investigators obtained the bone image with presettings to achieve the window width level (1180) and uniform opacity threshold (opacity threshold: 329∼2124). Second, the investigators used the segmentation tool to delete the opacity area corresponding to skin and airway and left the second, third, and fourth cervical vertebrae bones from the vertical view from the sagittal CVM image. Finally, the investigators built 3D images only with the vertebral body with the transverse arch removed from each cervical vertebral image. For the second vertebra, the dentocentral synchondrosis was utilized as the basis to segregate the odontoid process from the body. After that, each vertebral body area was designated from the 3D image using the "pick" function, with volume measured in cubic millimeters (mm 3 ) ( Figure 1).
Statistical Analysis.
The same investigator (CYK) repeated each volumetric measurement after two weeks. Investigators used Pearson's correlation coefficient analysis to demonstrate the association between the skeletal maturation index (SMI) and the SML, in order to justify using the SML in place of the SMI as the indicator representing skeletal maturity. Each volumetric parameter was also subjected to Pearson's correlation coefficient analysis to understand the correlation between each parameter and the SML. In addition, the investigators conducted a simple regression analysis to understand the explanation power for the SML provided by the volumetric parameters of each of the cervical vertebrae and the sex-related interaction. We used SPSS 21.0 (SPSS, Chicago, IL, USA) to analyze the data, with significance set at P < 0.05. We evaluated the intra-investigator reliability and reproducibility for 20 randomly selected subjects after two weeks.
Skeletal Maturation Assessment.
The sample population of 54 women and 48 men (total: 102, range: 5-18 years, and mean age: 9.9 years) exhibited skeletal maturation (Sempé maturation level) ranging from 14.5% to 98.5% (Table 1). Pearson's correlation coefficient analysis demonstrated an association between the SMI and the SML, examined to determine whether the SML could replace the SMI as the indicator representing skeletal maturity. The association between the two was 0.950, which is considered a very high correlation, and as a result, the SML was used as the skeletal maturity indicator.
Correlation between Cervical Vertebral Body Volumes
and Skeletal Maturation Level. The regression equation of the SML on each individual vertebral body volume was built using simple regression analysis. The fourth cervical vertebra had the highest and the second cervical vertebra had the lowest correlation between vertebral body volume and the SML. The second cervical vertebra did not show a proportional increase in the SML in men and women. As the volume level increased, it demonstrated a rapid increase in the SML at volumes in the range of 2000-3000 mm 3 in men (Table 2). Consequently, there was a significant difference in the level of volume change between men and women; the regression curve for men was not linear but rather a second-order curve. The explanation power of 60.47% is considered relatively low compared to the regression equations for the other parameters. The third cervical vertebra demonstrated a linear regression curve with a proportional increase in the SML as the volume level increased in both men and women. However, the increase was greater in men, presenting a steeper slope. The explanation power of the third cervical vertebral body volume was 81.29%.
The fourth cervical vertebra demonstrated a proportional increase in the SML as the volume increased in both men and women, with a somewhat similar rate of increase. Consequently, a linear regression curve with a similar slope was present in both men and women. The regression equation was (SML) = −11.779 + 0.026 × C4 volume + 22.010 × sex F, with an explanation power of 81.76% (Table 3). The estimated SML at each volume level of the second, third, and fourth cervical vertebrae is given in Table 2.
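The sketch below applies the regression equation reported above to predict the SML from a C4 body volume; the coding of the sex term (sex F) is assumed to be an indicator equal to 1 for female and 0 for male patients, which is not stated explicitly here, and the example volume is a hypothetical value rather than a measurement from this study.

```python
def predict_sml(c4_volume_mm3: float, is_female: bool) -> float:
    """Estimate the Sempé maturation level (SML) from the C4 body volume,
    using the regression equation reported above.

    The sex term is assumed to be an indicator variable (1 = female, 0 = male)."""
    sex_f = 1.0 if is_female else 0.0
    return -11.779 + 0.026 * c4_volume_mm3 + 22.010 * sex_f

# Hypothetical example: a girl with a C4 body volume of 2500 mm^3.
print(round(predict_sml(2500.0, is_female=True), 1))  # about 75.2
```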
Discussion
Several studies have utilized regression formulas using the lateral aspect of the cervical vertebral bodies to estimate the status of cervical vertebral growth [8,14-16]. However, there are limitations to these previous studies, as most determined human growth using 2D data, whereas human growth changes in three dimensions. The cervical vertebrae also change three-dimensionally during the growth phase through a modeling process in particular areas. Therefore, a 3D approach to evaluating skeletal maturation is needed. In this study, to attempt the volumetric evaluation of skeletal age, we used CBCT. CBCT makes it possible to perform volumetric analysis, going well beyond the measurement of lengths and angles available from 2D analysis of the craniofacial region [17,18]. However, concerns remain about whether CBCT imaging on top of routine 2D radiography is necessary for every patient, or whether the additional imaging is only necessary for special circumstances (e.g., craniofacial syndrome, impacted teeth) with the use of the suggested CBCT analytical method. We obtained the CBCT data in this study following the "as low as diagnostically acceptable" (ALADA) principle, and we believe that as CBCT develops and its radiation dose decreases, it may replace conventional panoramic and cephalometric radiographs. We would like to emphasize that the intent of the study was to use the acquired data more valuably.
From the results, each cervical vertebra was examined for an increase in SML as volume increased. When examining the correlation between each cervical vertebra's volume and the SML, the fourth cervical vertebra showed the highest correlation while the second cervical vertebra showed the lowest correlation. In the second cervical vertebra, the increase in SML with increasing volume was not consistent. In most cases, women had higher SML values at a given volume level. However, the increase in SML with increasing volume differed between men and women. In women, the change in the SML with volume level was relatively consistent throughout the process, whereas in men a rapid increase in the SML occurred over a narrow range of volume levels. This tells us that the volume of the second cervical vertebra in men changes drastically during the pubertal growth peak. There is no significant difference between men and women before the pubertal growth peak, when the volume is 1000 mm 3 , or near the completion of the pubertal growth peak, with a volume of 3000 mm 3 . There is a significant difference between men and women when the volume is in the range of 1500-3000 mm 3 . However, the explanation power of the regression model using the second cervical vertebra is 60.5%, which is lower than that reported in previous studies [15,16].
The third cervical vertebra exhibited a different result; there was a significant difference between men and women in all regions, resulting in a higher SML in women at all volume levels. Furthermore, there was a difference in the slope of the regression coefficient for men and women, in that the SML increased more rapidly in women as the volume of the third cervical vertebra increased. Nevertheless, the increase in the SML was relatively consistent as the volume increased in both men and women. The explanation power of 81.3% for the SML from the third cervical vertebra is similar to previously published studies.
Lastly, we observed that the change in the SML as the volume increased in the fourth cervical vertebra was relatively consistent in all areas/regions, like the third cervical vertebra. There was a significant difference between men and women at each volume level, and the SML was higher in women. However, unlike the third cervical vertebra, there was no difference in the regression coefficient in men and women, which in turn indicates that the SML increased consistently as the cervical vertebral volume increased. Furthermore, it also demonstrated an explanation power of 81.8% for the SML, representing the highest of the three cervical vertebrae.
The changes in the volume of the cervical vertebrae observed in our study are similar to those observed by Crawford et al. [19]. Crawford et al. stated that changes in volume are due to active bone remodeling that occurs more in C2 than in C3 [19]. In contrast, there is less resorption of preformed bone in C3, resulting in more consistent growth in volume. Altan et al. [20] conducted a study in girls and reported a similar outcome, in that the growth in C2 exhibited twice the amount of growth in C3 at the SML of a 14.5-year-old. After this age, growth starts to diminish, a finding confirmed when comparing the identical growth of the SML to that of a 16.5-year-old. This, in turn, demonstrates that C3 exhibits a more consistent change in volume during the growth period.
Furthermore, the explanation power for the SML, based on the change in volume, was relatively high. This finding is similar to or higher than the outcomes estimated using 2D or 3D length parameters [8,9,14-16]. The cervical vertebral maturation index, a tool first suggested by Lamparski to replace the hand-wrist radiograph, segregates the growth stage into six steps using the second, third, and fourth cervical vertebrae [16]. O'Reilly and Yanniello [21] determined that there is a direct correlation between the CVM stage and mandibular growth using this method; this is useful when evaluating growth in orthopedic treatment. In addition, this technique does not require the additional irradiation of a hand-wrist radiograph. In 2002, Baccetti et al. [6] were able to reduce Lamparski's process to four steps with a modified CVM. However, the second, third, and fourth cervical vertebrae still had to be observed. This is a problem, as there is significant error among the test subjects, which is hard to quantify [6]. To complement these drawbacks, studies were conducted to estimate the CVM in a quantitative way [8,9,14-16]; however, there were a number of difficulties in clinical application, as the studies had to include most of the C2-C5 vertebrae, and/or it was deemed onerous to measure many parameters in each cervical vertebra. In contrast, the present study enabled the estimation of the SML by measuring the volume of C4 alone. This allows the use of a skeletal maturation estimation method that is much simpler and more accurate for clinical application.
The purpose of this study was to suggest an objective and quantitative method to estimate growth using the 3D volumetric analysis of cervical vertebrae by 3D CBCT images instead of the existing 2D CVM images. It is important to establish a method to estimate the SML by measuring 3D volume from CBCT imaging in patients at their growth phase, before orthopedic treatment, and to establish multiple regression models. This method measures fewer cervical vertebrae and associated parameters compared to the current CVM method. Using the 3D volumetric analysis of cervical vertebrae enables convenient clinical application, as well as a quantitative and accurate evaluation of the results
Conclusions
CBCT enables the measurement of cervical vertebral volume, which is difficult to obtain from the sagittal image observed through existing 2D radiography. In this study, a multiple regression model for the SML was established based on the volumes of the second, third, and fourth cervical vertebral bodies measured using CBCT. In particular, a higher explanation power for the SML was achieved through the linear regression model using the volume of the fourth cervical vertebral body, compared to that from the models in previously published studies [15,16]. Therefore, the derived multiple regression model has potential clinical application, as it enables a simple and quantitative analysis to evaluate the SML in comparison to the existing CVM evaluation methods.
Competing Interests
All of the authors declare that there are no competing interests regarding the publication of this paper. | 2018-04-03T03:36:32.664Z | 2016-06-02T00:00:00.000 | {
"year": 2016,
"sha1": "1baf9da870d037c2e430736983954efc2c0f36e2",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2016/8696735.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1baf9da870d037c2e430736983954efc2c0f36e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Mathematics"
]
} |
91447939 | pes2o/s2orc | v3-fos-license | Performance of West African Dwarf (WAD) Goats Fed Dietary Levels of Boiled Rubber Seed Meal (Hevea brasiliensis)
The effect of boiled rubber seed meal (BRSM) based diets on the performance of West African Dwarf (WAD) bucks was investigated. Four groups of WAD goats were randomly fed the 4 experimental diets (A-D) formulated to contain 0, 10, 20 and 30% BRSM. The experiment lasted for 56 days. Average daily feed intake (g) was 417.90, 428.93, 322.00 and 288.10 for diets A, B, C and D, and the corresponding average daily weight gain (g) was 31.69, 53.92, 46.62 and 34.64, respectively. The feed/gain ratio was 6.90 for goats fed diet C and 7.95 for those fed diet B. Feed cost per kg weight gain was N115.29 for diet C and N120.42 for diet B. The warm carcass weight and dressing % did not differ significantly among the 4 treatment groups, but goats fed diet C showed superiority. Legs, shoulder, sets and bone-to-lean ratio differed significantly between the treatment groups.
Introduction
The economic depression of nations has greatly reduced meat availability, and the inadequacy of meat supply has been aggravated by a combination of environmental, feed and management factors (Wikipedia [1], Udo [2]). In recent years, many categories of Nigerian farmers tend to invest in ruminant livestock farming (Hoffmann [3]), yet the cost of conventional feeds still poses a big challenge (Hassan [4]). Feed, as reported by Akpodiete and Inoni [5], accounts for 60-70% of the total cost of livestock production, and its inadequacy in quality and quantity could lead to a situation of low nutritional status, poor weight gain, poor reproductive ability, poor production, poor health condition and poor conversion ratio (Fajemisin [6]). It therefore becomes important to supply adequate feed in quantity and quality for optimal performance by livestock. Goat farming offers ample opportunity for meat increment and availability. Goats are easy to keep, require smaller capital investment, and play a significant role in the socio-economic life of the people as they contribute about 35% of the Nigerian meat supply (Oloche [7]) and provide income to farmers (Peacock [8]). West African Dwarf goats are the prevalent and trypano-tolerant breed in the derived and guinea savannah zones (Eroarome [9], Udo [10]). But it is worrisome that the lack of government legislation for the multiplication of this hardy breed, nutritional constraints particularly during the dry season, and the extensive mode of production pose serious problems to their production in the tropics (Ahamefule [11], Ahamefule and Udo [12]). To address the nutritional need of goats, it is therefore important to supplement their diet with concentrate. As a result of the high cost of conventional feedstuffs, and in an attempt to reduce competition between man and livestock, nutritionists are in search of alternative non-conventional feedstuffs that are cheap and readily available (Ahamefule and Udo [12]). There are huge naturally occurring non-conventional feedstuffs that can profitably be used to stimulate small ruminant production (Udo [2], Udo [10]). Prominent among them is rubber seed, which has no feed value for humans (Udo [10]). The humid tropics have a large acreage of rubber plantation, and in Nigeria it is cultivated on an estimated 185,000 hectares with seed collection of about 10,175 tonnes/year (Udo [2]), with a crude protein content range of 21-28%, a crude fibre range of 4.47-8% (Udo [2], Udo [10], Njwe [13]) and an energy range of 2.32-2.58 MJ/kg (Udo [2]).
Rubber seed meal has previously been fed to sheep (Njwe [13]), but there is a paucity of information on the feeding of rubber seed to West African Dwarf goats. This work, therefore, was designed to evaluate the performance of West African Dwarf goats fed dietary levels of rubber seed meal-based diets.
Experimental Site
The study was conducted at the Goat unit of the
Experimental Design/Procedures
Four diets were formulated to contain 0-30% boiled rubber seed meal (BRSM) and designated A, B, C and D. These diets were assigned randomly to the four animal groups in a completely randomised design. Each goat received 1 kg of its designated diet in addition to 2 kg of guinea grass (Panicum maximum). Daily feed intake was determined by subtracting the daily feed refusal from the 1 kg given the previous day. These records were used to calculate the average daily feed intake, average daily weight gain, feed conversion ratio, and feed economics of production for each treatment group, as sketched below.
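The performance metrics described above follow directly from the daily records; the sketch below shows one way the calculations could be organized, with all input numbers being hypothetical placeholders rather than data from this trial (the feed price per kg is likewise an assumed value).

```python
def average_daily_feed_intake_g(offered_g: float, refusals_g: list) -> float:
    """Mean of (offered - refused) across the recorded days."""
    return sum(offered_g - r for r in refusals_g) / len(refusals_g)

def performance(initial_wt_g: float, final_wt_g: float, days: int,
                adfi_g: float, feed_cost_per_kg: float) -> dict:
    adg_g = (final_wt_g - initial_wt_g) / days   # average daily weight gain
    fcr = adfi_g / adg_g                          # feed:gain ratio (feed conversion)
    cost_per_kg_gain = fcr * feed_cost_per_kg     # cost of feed per kg live-weight gain
    return {"ADG_g": adg_g, "FCR": fcr, "feed_cost_per_kg_gain": cost_per_kg_gain}

# Hypothetical goat on a 56-day trial, offered 1000 g of concentrate daily.
refusals = [680.0] * 56                                # g refused each day (placeholder)
adfi = average_daily_feed_intake_g(1000.0, refusals)   # 320 g/day
print(performance(initial_wt_g=7000.0, final_wt_g=9600.0, days=56,
                  adfi_g=adfi, feed_cost_per_kg=16.0))
```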
Experimental Diets
Four (4) experimental diets (A-D) were formulated to contain various inclusion levels (0-30%) of boiled rubber seed meal (BRSM) with other conventional ingredients as shown in (Table 1).
Processing of Rubber Seed
Twenty (20) kilogrammes of raw rubber seeds were introduced (in batches) into a cooking pot whose water had attained boiling temperature (100 °C) and allowed to boil for 30 minutes, after which the seeds were decanted. The boiled seeds were sun-dried for seven (7) days and then dehulled; the nuts were milled and pressed using a garri processing machine to remove oil, and the products were used to formulate the boiled rubber seed meal (BRSM)-based diets.
Slaughter Technique
At the end of the feeding trial, three goats per treatment group were starved for 24 hours prior to slaughter. Each goat was weighed before slaughter, after bleeding and after dressing. Dressing percentage was calculated as the weight of the dressed warm carcass in relation to the live weight before slaughter. The dressed warm carcass is defined as the weight of the goat after removal of the head, skin, contents of the thoracic and pelvic cavities (including the diaphragm and kidneys), and the limbs distal to the carpal and tarsal joints. The lungs, head, heart, liver, limbs (four feet) and skin were also weighed.
Carcass Evaluation
Three animals per treatment group were slaughtered for carcass evaluation. Jointing of the carcass (meat cuts) was done following the method adopted by Ahamefule [11]. Each dressed warm carcass was divided down the spinal cord with a meat saw into two (2) equal halves, which were weighed individually. The left half was subsequently divided into cuts consisting of the thigh, shoulder, loin, sets and ends. Each cut was weighed and its weight doubled before being expressed as a percentage of the dressed carcass (a worked example is sketched below). The leg (thigh) was severed at the attachment of the femur to the acetabulum; the loin consists of the lumbar region plus a pair of ribs; the ends (spare ribs plus belly) consist of six (6) abdominal ribs; the shoulder consists of the scapula; and the sets are made up of the breast and the neck. The loin cuts were then dissected into muscle and bone with ligament to obtain the meat-to-bone ratio.
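The carcass measures described in the two preceding subsections reduce to a few simple ratios. The sketch below uses hypothetical weights (not the study's data) to show the dressing percentage, the doubling of a left-half cut before expressing it against the dressed carcass, and the loin meat-to-bone ratio.

```python
# Illustrative sketch (not from the paper); weights are invented.

def dressing_percentage(dressed_warm_carcass_kg: float, live_wt_kg: float) -> float:
    """Dressed warm carcass weight as a percentage of live weight before slaughter."""
    return 100.0 * dressed_warm_carcass_kg / live_wt_kg

def cut_percentage(left_half_cut_kg: float, dressed_carcass_kg: float) -> float:
    """Weight of a cut from the left half is doubled, then expressed as a
    percentage of the dressed carcass, as described in the text."""
    return 100.0 * (2.0 * left_half_cut_kg) / dressed_carcass_kg

def meat_to_bone_ratio(muscle_kg: float, bone_kg: float) -> float:
    """Meat-to-bone ratio of the dissected loin cut."""
    return muscle_kg / bone_kg

live, carcass = 12.5, 5.6   # hypothetical kg
print(f"Dressing %: {dressing_percentage(carcass, live):.1f}")
print(f"Thigh % of carcass: {cut_percentage(0.75, carcass):.1f}")
print(f"Loin meat:bone: {meat_to_bone_ratio(0.42, 0.14):.1f}:1")
```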
Statistical Analysis
The experiment was laid out as a completely randomized design.
All data were analysed by one-way analysis of variance (ANOVA) using the SPSS [15] package. Duncan's Multiple Range Test Duncan [16] was used to separate significant means; a minimal sketch of the ANOVA step is given below.
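The following Python sketch illustrates the ANOVA step using SciPy in place of SPSS. The daily-gain values are invented placeholders, and Duncan's Multiple Range Test is not available in SciPy, so the mean-separation step reported in the paper is not reproduced here.

```python
# One-way ANOVA across the four diet groups; data are invented placeholders
# (daily weight gain, g/d) and do not reproduce the study's values.
from scipy import stats

diet_a = [38.1, 41.5, 35.2, 39.8, 36.7]
diet_b = [52.4, 55.0, 51.1, 54.3, 50.9]
diet_c = [47.2, 45.8, 49.0, 46.1, 48.3]
diet_d = [33.5, 36.2, 34.8, 32.9, 35.5]

f_stat, p_value = stats.f_oneway(diet_a, diet_b, diet_c, diet_d)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one diet mean differs significantly (P < 0.05).")
```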
Chemical Analysis
All feed samples were analysed for proximate composition using AOAC (2007).
Results and Discussion
The composition and proximate assay of the experimental diets formulated to contain 0-30% boiled rubber seed meal (BRSM) are presented in Table 1. The dry matter (DM) content of the diets, save for ration B (10% BRSM), was fairly comparable (Table 2). Crude protein (CP) ranged from 14.06 - 15.82% and increased as the inclusion level of BRSM increased from B to D. Crude fibre (CF, %) followed the reverse trend of the CP values. The ether extract (EE) composition (%) increased from diets A to D and stabilized in C and D. The ash contents (%) of the diets followed a similar pattern to the EE, rising and then stabilizing. Nitrogen-free extract (NFE) values (%) rose from A to B and subsequently declined in diets C and D.
The energy values (kcal/g) followed a similar trend to the NFE. The CP and energy contents of the four diets were all above the requirements of WAD goats reported by Ahamefule [11] and Akinsonyinu [17].
Response of West African Dwarf (WAD) Goats
The performance of WAD goats fed various inclusion levels of boiled rubber seed meal (BRSM) is shown in Table 3. The decline in intake at the higher inclusion levels (diets C and D) may be due to the increasing level of rubber seed meal from B to D, as Gohl [18] reported that rubber seed is not very palatable or appetizing to ruminants. The values obtained in this study are in consonance with the report of Spring [19] that feed intake and growth decreased as rubber seed meal (RSM) incorporation levels increased in poultry rations. Njwe [13] also reported that rubber seed is not very appetizing to sheep and that RSM incorporation should not exceed the 20% level, and not more than 10% for poultry Babatunde [19], while Devendra [20] considered 20% the optimal inclusion level for pigs. The trend of intake in this study agrees with the report of Rajan [21] that weight gain was not affected by a diet containing 20% BRSM, but that feed intake and daily weight gain decreased linearly once BRSM incorporation exceeded 20%. The feed:gain ratio for goats fed 20% BRSM was the lowest (6.90), and therefore apparently the best, in line with the reports of Njwe [13] and Rajan [21] that small ruminants can utilize up to 20% rubber seed without adverse effects on performance. The average daily weight gain range of 34.64 - 53.92 g obtained in this study compared favourably with the range reported for WAD goats within the first 12 months of life Nuru [22], Anya [23].
Feed Economy
The feed cost per kilogramme weight gain was N150.71 for goats fed diet A (0% BRSM); the corresponding values for animals fed diets B, C and D were N120.42, N115.29 and N151.65 respectively (Table 4). These results follow the trend of similar investigations by Ahamefule [11] and Anya [23], who also reported a superior feed cost per kilogramme weight gain for WAD goats fed diets containing 20% pigeon pea and African yam bean respectively. For the best return on investment, incorporation of 20% BRSM in WAD goat diets is advisable.
Conclusion
This study revealed that boiled rubber seed meal generally enhanced performance at the different inclusion levels tested (10-30% BRSM), with all inclusion levels being safe as a dietary supplement for WAD goats.
However, the 20% BRSM inclusion level gave the best performance and produced the cheapest cost per kilogramme weight gain; it is therefore recommended for goat production/fattening programmes.
Table 1 :
Proximate composition of experimental diets containing various levels of boiled rubber (Hevea brasiliensis) seed meal.
Table 2 :
Chemical assay of experimental diets containing various levels of boiled rubber (Hevea brasiliensis) seed meal (%DM).
Table 3 :
Performance of WAD Goats Fed Experimental Diets Containing Various Levels of Boiled Rubber (Hevea brasiliensis) Seed Meal.
a,b,c Means on the same row with superscripts differ significantly (P<0.05).
Table 4 :
Feed economies of WAD goats fed various inclusion levels of boiled rubber seed meal-based diets.
The daily weight gain range (34.64 - 53.92 g/d) reported in this study is lower than the range (35 - 65 g/d) reported by Nuru [22] for WAD goats.
Daily feed consumed by animals in treatments A and B was similar, but these two groups differed significantly (P<0.05) from animals fed diets C and D, which were also similar to each other in feed intake. Goats fed diet B (10% BRSM) supported the highest daily weight gain.
Table 5 :
Carcass yield of West Africa dwarf goats fed various levels of boiled rubber (Hevea brasiliensis) seed meal-based diets.
Table 6 :
Average weight of meat cuts, organs and offal weights expressed as percentages of warm carcass or empty live weight. | 2019-04-03T13:09:05.661Z | 2018-09-25T00:00:00.000 | {
"year": 2018,
"sha1": "2263144403348defc760451abdec291323a41a55",
"oa_license": "CCBY",
"oa_url": "http://www.lupinepublishers.com/agriculture-journal/pdf/CIACR.MS.ID.000198.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2263144403348defc760451abdec291323a41a55",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
218471598 | pes2o/s2orc | v3-fos-license | How to divide the pancreatic parenchyma in patients with a portal annular pancreas: laparoscopic spleen-preserving distal pancreatectomy for serous cystic neoplasms
Background Portal annular pancreas (PAP) is a rare pancreatic anomaly in which the uncinate process wraps annularly around the portal vein and fuses to the body of the pancreas. PAP is highly relevant to the development of postoperative pancreatic fistula (POPF) in pancreatic surgery. Here, we describe our experience and surgical technique of laparoscopic spleen-preserving distal pancreatectomy using Warshaw’s procedure for a patient with PAP. Case presentation A 49-year-old woman with PAP was referred to our hospital for a suspected mucinous cystic neoplasm 1.5 cm in diameter in the pancreatic tail. Laparoscopic spleen-preserving distal pancreatectomy using Warshaw’s procedure was performed. Mobilization of the pancreatic tail was performed first, and then the splenic artery was cut. After dividing the pancreatic tail from the splenic hilum, the ventral pancreatic parenchyma was divided using a stapler. After cutting the splenic vein, complete mobilization of the pancreatic body and tail enabled easy division of the PAP. Finally, the PAP was also divided using the stapler. Although grade B POPF occurred, she was discharged on the 9th postoperative day. Conclusions Surgeons should understand the anatomical characteristics of PAP and be aware of the possibility of POPF.
Background
Portal annular pancreas (PAP), also called circumportal pancreas, is a rare, asymptomatic pancreatic anomaly in which the uncinate process of the pancreas encircles the portal vein (PV) and/or its inflow vessels, the superior mesenteric vein (SMV) and the splenic vein (SV), and extends to the dorsal surface of the pancreatic body [1]. In 1987, Sugiura et al. first reported PAP as a hypertrophic uncinate process surrounding the superior mesenteric artery (SMA) and SMV [2].
Here, we describe a case of a patient with a serous cystic neoplasm with the PAP who underwent laparoscopic spleen-preserving distal pancreatectomy (DP) using Warshaw's procedure.
Case presentation
A 47-year-old female (body mass index, 20.5) had an oval cystic lesion in the pancreatic tail identified by an annual health check and then visited the nearby hospital for further examination. She had a medical history of a left parapelvic cyst at 44 years old. Plain computed tomography (CT) scans taken at the same hospital 4 years ago did not indicate any pancreatic cysts. She was referred to our hospital for the surgical treatment of a suspected mucinous cystic neoplasm (MCN) without a mural nodule that was 1.5 cm in diameter in the pancreatic tail (Fig. 1a). A preoperative dynamic CT scan showed that the PV was annularly surrounded by pancreatic parenchyma at the superior side of the SV (Fig. 1b). Magnetic resonance cholangiopancreatography demonstrated the anteportal main pancreatic duct (MPD) (Fig. 1c). Contrast-enhanced endoscopic ultrasound revealed an oval cystic lesion without enhancement (Fig. 1d). The preoperative imaging studies revealed the PAP classified as type IIIA according to the Karasaki and Joseph classification [3,4]. Finally, the patient was diagnosed with a suspected MCN with PAP, and laparoscopic spleen-preserving DP using Warshaw's procedure was planned. Preoperative laboratory data is shown in Table 1, and carbohydrate antigen 19-9 levels was slightly elevated.
Under general anesthesia, the patient was placed in the lithotomy position. Five ports were inserted in the following order: (a) a 12-mm port inserted through the umbilicus to introduce the laparoscope by the open method, (b) a 12-mm port at the right lateral side of the umbilicus on the midclavicular line, (c) a 5-mm port in the right hypochondriac region on the midclavicular line, (d) a 12-mm port at the left lateral side of the umbilicus on the midclavicular line, and (e) a 5-mm port in the left hypochondriac region on the anterior axillary line. After the bursa omentalis was widely opened toward the border of the preserved left gastroepiploic vessels that ultimately fed the spleen, the splenocolic ligament was divided to take down the splenic flexure of the transverse colon. The serosa of the inferior border of the pancreatic body and tail was cut, and the pancreatic tail was then widely mobilized from the retroperitoneum. After the greater curvature of the stomach was suspended with two sutures of 2-0 nylon thread to expose the pancreas, the serosa of the superior border of the pancreas was cut, and the common hepatic artery and splenic artery (SA) were exposed. At this time, the cranial wall of the PAP at the left side of the PV was identified. The SA was occluded at its root using Hem-o-lok clips and then divided to block inflow to the spleen. The pancreatic tail was completely mobilized from the retroperitoneum. After exposing the splenic hilum, some branches of the SV and SA were carefully clipped and cut at the end of the distal pancreas without injury to the left gastroepiploic and short gastric vessels. Finally, the ventral pancreatic parenchyma was taped on the ventral side of the PV/SMV (Fig. 2a) and then divided using the Endo GIA™ reinforced with a Tri-Staple™ reload (purple reload, 60 mm, Fig. 2b). After the SV was divided at its root, the dorsal side of the PAP was divided using the same stapler (Fig. 2c, d), and the resected specimen was extracted using a retrieval bag. The operative time was 268 min, and blood loss was 175 mL. During the postoperative course, a grade B postoperative pancreatic fistula (POPF) occurred and was managed with antibiotic treatment alone. The patient was discharged on the 9th postoperative day, and the lesion was pathologically diagnosed as a serous cystic neoplasm.
Discussion
PAP is a rare congenital pancreatic anomaly present in 0.5 to 2.4% of normal people [1,5-7]. In 2009, Karasaki et al. reported three classification types of PAP (A: suprasplenic, B: infrasplenic, and C: mixed type), depending on the relation to the portal confluence. The most frequent type in the literature is type A, with an incidence between 70 and 94%, followed by type B with an incidence between 4 and 20%, and type C with an incidence between 2 and 14% [1,3,5]. In 2010, Joseph et al. suggested three types of PAP (I: retroportal; II: type I associated with pancreas divisum; III: anteportal) based on the course of the MPD, and then classified type III into three subtypes (IIIA, IIIB, and IIIC) according to the Karasaki classification. Type IIIC was the most frequent type, with an incidence ranging between 44 and 92% [1,5]. Yilmaz and Ahmed also reviewed 55 cases of PAP and reported vascular variations in 16 (29%) cases; the most common of these was a replaced right hepatic artery arising from the superior mesenteric artery (n = 6). The authors concluded that identifying the PAP and defining the pancreatic duct and vascular variations are important to preventing possible complications in patients undergoing pancreatic surgery.
In terms of the pancreatic division procedure for patients with PAP who undergo DP, there are two options: one cut margin or two cut margins. Ohtsuka et al. discussed the two procedures in detail: pancreatic division at the level of the PV/SMV, yielding two cut margins of the pancreatic parenchyma, and pancreatic division on the distal side of the PAP, resulting in a single cut margin. They recommended that two cut margins be made during DP at the level of the PV/SMV, because a single cut margin usually leaves a larger cut surface in the pancreatic body than division at the level of the PV/SMV. In the case of a one cut margin, the line of pancreatic division is the distal side of the PAP, as Ohtsuka et al. mentioned [7]. In this case, the thickness of the pancreas at the level of the SMA was 12 mm, which was larger than the 9 mm on the ventral side of the SMV. The distance between the distal side of the PAP and the right edge of the tumor was 12 mm, which is not enough distance to perform pancreatic division using a surgical stapler with a one cut margin. Thus, we selected two cut margins to secure the surgical margin and to prevent POPF. The rate of POPF depends on the thickness of the pancreatic parenchyma at the cut line as well as the number of cut surfaces of the pancreas (one cut margin/two cut margins). In previous papers, a pancreatic transection line thickness of 12 mm or larger was a risk factor for POPF [8,9]. Therefore, the one cut margin procedure is better for PAP patients as long as the thickness of the pancreatic parenchyma at the cut line is less than 12 mm; otherwise, a two cut margin procedure should be recommended (a schematic restatement of this rule is sketched below). The cystic tumor of the pancreatic tail in this case was in complete contact with the splenic vein. Laparoscopic spleen-preserving distal pancreatectomy using Warshaw's procedure was planned because we assumed it was impossible to detach this tumor from the splenic vein without exposing the cystic wall. Recently, Song et al. reported that distal pancreatectomy using the splenic vessel-preserving technique led to a significantly lower incidence of clinically relevant postoperative pancreatic fistula, splenic infarcts, intra- and postoperative splenectomies, and gastric varices than that using Warshaw's procedure [10]. Benign or borderline malignant tumors of the pancreatic tail completely separated from the SA and SV are considered good indications for distal pancreatectomy using the splenic vessel-preserving technique. To the best of our knowledge, 7 patients with PAP (type IA in 2, and type IIIA in 5) who underwent DP for several kinds of pancreatic disease have been reported in the English-language literature [1,7,11-13], including this case (Table 2). All patients had two pancreatic cut margins. Among them, 5 patients (71%) developed POPF (grade A in 2, and grade B in 3). Therefore, DP for patients with PAP carries a substantially increased risk of POPF, as does pancreaticoduodenectomy [1]. In 2012, Jang et al. first reported laparoscopic DP for intraductal papillary mucinous neoplasm patients with PAP; however, the details of the operative procedure are unknown. In our case, the ventral pancreatic parenchyma was divided first. After cutting the SV, the complete mobilization of the pancreatic body and tail enabled easy division of the remnant PAP. We consider that our surgical procedure was rational and useful in preventing injury during division of the PAP.
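The cut-margin reasoning above can be restated schematically. The sketch below is only an illustration of the criteria discussed (the 12-mm thickness threshold from refs. [8,9] and the adequacy of the stapler margin); it is not clinical guidance, and the function and its parameter names are ours, not the authors'.

```python
# Schematic restatement (illustration only, not clinical guidance) of the decision
# rule discussed in the text for dividing the pancreas in PAP patients undergoing DP.

def choose_cut_margins(thickness_at_cut_line_mm: float,
                       adequate_distance_for_stapler: bool) -> str:
    """Thickness >= 12 mm is the POPF risk threshold cited in the discussion."""
    if thickness_at_cut_line_mm < 12.0 and adequate_distance_for_stapler:
        return "one cut margin (division on the distal side of the PAP)"
    return "two cut margins (division at the level of the PV/SMV)"

# Present case: 12 mm parenchymal thickness at the SMA level, and the 12 mm distance
# from the distal PAP to the tumour was judged inadequate for a stapled single margin.
print(choose_cut_margins(12.0, False))   # -> two cut margins
```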
Conclusion
Laparoscopic DP for patients with PAP was considered feasible. However, surgeons should understand the anatomical characteristics of PAP and be aware of the possibility of POPF.
"year": 2020,
"sha1": "badd9a4b7bc2a6d962665907457646ebaf08052f",
"oa_license": "CCBY",
"oa_url": "https://surgicalcasereports.springeropen.com/track/pdf/10.1186/s40792-020-00852-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "badd9a4b7bc2a6d962665907457646ebaf08052f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259534336 | pes2o/s2orc | v3-fos-license | Effect of patient positioning on anesthesiologic risk in endourological procedures
Objective: To compare supine and prone positions in terms of arterial blood gases at different stages of lithotripsy endourology procedures. Material and Methods: Cases of lithotripsy endourology procedures performed in our department from March to September 2020 were included prospectively. The variables registered were body mass index, age, the American Society of Anesthesiologists (ASA) score, diabetes mellitus, positive end-expiratory pressure (PEEP), FiO2, stone size, stone location, procedural type, position, procedure duration, PaO2, SaO2, PaCO2, pH, and dynamic compliance. PaO2, SaO2, PaCO2, pH, and dynamic compliance were recorded at the beginning of the procedure, 5 min later, 15 min later, and at the end of the procedure. Results: Thirty patients in the prone position and 30 in the lithotomy position were included in this study. Patients in the prone position underwent percutaneous nephrolithotomy, and patients in the supine/lithotomy position underwent retrograde intrarenal surgery or ureteroscopy. Statistically significant differences were found in PEEP, duration, PaO2 at the beginning, SaO2 at the beginning and at the end of the procedure, PaCO2 at the beginning and at minute 5, and pH at the beginning of the surgery. PaO2 increased significantly in the prone position and was statistically significantly higher at the end of the surgery. Conclusions: Both prone and supine positions were safe with regard to anesthesiologic risk and showed no clinically relevant differences in individual comparisons of arterial blood gas parameters at static moments of the procedure. The prone position was related to a gradual increase in PaO2 and a drop in PaCO2 from the beginning to the end of the surgery.
Patient positioning in endourology, as in any kind of surgery, is essential for achieving beneficial outcomes without additional harm [3]. Most URS and RIRS procedures are performed in the lithotomy position. Several modifications of the prone position, including prone-flexed, reverse lithotomy and prone split-leg [3,5], as well as supine PCNL together with all its modifications, have been proposed to enhance surgical efficiency [8]. Probably the main benefit of performing prone PCNL is the simplicity of the puncture, as it allows multi-tract access, a relatively shorter access tract, and easier access in obese patients and in kidney anomalies [5]. Nonetheless, it is claimed that endoscopic combined intrarenal surgery can be performed more easily in the supine position [9,10]. Position variations are related to changes in pulmonary ventilation and perfusion [11]. Some authors have suggested that the supine position decreases the anesthesiologic risk owing to a minimization of cardiac and respiratory encumbrance [10], while others have found that prone and flank positions have a positive effect on patients' oxygenation [12]. The aim of our study was to evaluate the effect of the prone and dorsal lithotomy positions on oxygenation and elimination of carbon dioxide, and to compare changes in arterial blood gases during PCNL, RIRS, and URS at different surgical stages from beginning to end.
Study design
We performed a retrospective evaluation of longitudinally collected data of consecutive patients from an institutionally approved database. To meet our objectives, cases of PCNL, RIRS, and URS performed in our department from March 2020 to September 2020 were included. Cases involving patients with kidney anomalies, staghorn stones, uncontrolled coagulopathies, a previous history of PCNL or open renal stone surgery, or known cardiovascular or respiratory dysfunction, and surgeries lasting more than 1 h or less than 30 min, were excluded from the study.
Procedures and anesthesia
All patients underwent general anesthesia in supine position, and they were then placed in prone position or supine/lithotomy position according to the procedure [Figure 1].Throughout the study, the ventilator settings (tidal volume, respiratory rate, FiO2) were maintained at constant level.The patients' heart rate, blood pressure, end-tidal carbon dioxide pressure, and O2 saturation were monitored.Propofol 1.8-2.5 mg/kg, fentanyl 1.5 μg/kg, rocuronium 0.7-1 mg/kg, and paracetamol 1 g were used for the introduction of anesthesia, whereas propofol 6-7 mg/kg/h and remifentanil 3-10 μg/kg/h were employed for maintenance.
All surgeries were performed by an expert endourologist, with an expertise of more than 300 endourological procedures.
URS and RIRS were performed in the lithotomy position. First, any previous double-J catheter was removed. Second, a safety guidewire (usually semi-stiff with a hydrophilic tip) was placed. A Dual Lumen® Ureteral Access Catheter (Cook Medical, Indiana, USA) was then passed over the guidewire and contrast was injected. A second, working, stiff guidewire was placed through the Dual Lumen Catheter, and once the stone was reached, lithotripsy was performed with either a semi-rigid ureteroscope (Karl Storz, Tuttlingen, Germany) or a Flex Xc ureteroscope (Karl Storz, Tuttlingen, Germany), depending on the location of the stone. All lithotripsies under URS or RIRS were performed with a holmium:yttrium-aluminum-garnet (Ho:YAG) laser. A double-J stent was placed once stone-free status was reached, after a final control pyelography was conducted.
All PCNLs were performed in prone position.The contrast was firstly inserted by a ureteral catheter placed in the upper calix under lithotomy position.The access was made under fluoroscopy guidance following the "Bull's eye" technique.After the puncture, a hydrophilic guidewire was inserted anterogradely until the lower ureter, which was latterly replaced by a Lunderquist ® Extra-Stiff guidewire (Cook Medical, Indiana, USA).Two-step dilation was performed with Amplatz ® Renal Dilators (Cook Medical, Indiana, USA).Lithotripsy was performed with ultrasonic lithotripters, Trilogy or Lithoclast Master (E.M. S, Nyon, Switzerland).At the end of the procedure, a double J and a nephrostomy tube were placed after a final control pyelography was conducted.
Study variables and statistical analyses
The registered variables were body mass index (BMI), age, the American Society of Anesthesiologists score, [13] diabetes mellitus, positive end-expiratory pressure (PEEP), FiO 2 , stone size, stone location, procedural type, position, procedure duration, pressure of oxygen in arterial blood (PaO 2 ), oxygen saturation (SaO 2 ), pressure of carbon dioxide in arterial blood (PaCO 2 ), pH, and dynamic compliance.PaO 2 , SaO 2 , PaCO 2, pH, and dynamic compliance were recorded at the beginning of the procedure, 5 min later, 15 min later, and at the end of the procedure.Procedure duration was defined as the time between the puncture and suturing of the PCNL tract for the prone position and cystoscopy and placement of double J ureteral stents for dorsal lithotomy position.
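Dynamic compliance is listed among the study variables but its formula is not stated in the paper; it is conventionally computed from the ventilator readings as the tidal volume divided by the difference between peak inspiratory pressure and PEEP. A brief sketch with invented numbers:

```python
# Conventional dynamic-compliance calculation (assumed definition; the paper does
# not state its exact formula). Numbers are invented.

def dynamic_compliance(tidal_volume_ml: float,
                       peak_insp_pressure_cmh2o: float,
                       peep_cmh2o: float) -> float:
    """Dynamic compliance in mL/cmH2O."""
    return tidal_volume_ml / (peak_insp_pressure_cmh2o - peep_cmh2o)

# Example: 450 mL tidal volume, 20 cmH2O peak pressure, 5 cmH2O PEEP -> 30 mL/cmH2O
print(f"Cdyn = {dynamic_compliance(450, 20, 5):.1f} mL/cmH2O")
```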
Statistical evaluation was performed using SPSS v25 software (IBM Statistics, NY, USA). Means and standard deviations were calculated for continuous variables and proportions for categorical variables. Direct comparisons were made between the dorsal lithotomy and prone position groups with the two-tailed t-test with assumed differences for quantitative outcome variables and with the Chi-square test for categorical outcome variables. Statistical significance was set at P < 0.05.
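A minimal Python sketch of the two kinds of comparison described above (SciPy in place of SPSS). The PaO2 values and the 2 x 2 table are invented placeholders and do not reproduce the study data.

```python
# Group comparison sketch: t-test for a quantitative outcome, chi-square for a
# categorical one. All values below are invented.
from scipy import stats

supine_pao2 = [92.1, 95.4, 90.8, 93.6, 96.2]
prone_pao2 = [104.5, 109.8, 107.2, 111.0, 106.4]

# Two-tailed independent-samples t-test for a quantitative outcome variable
t_stat, p_t = stats.ttest_ind(supine_pao2, prone_pao2)
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")

# Chi-square test for a categorical outcome (rows: position; columns: e.g. diabetes yes/no)
table = [[4, 26], [1, 29]]
chi2, p_chi, dof, _expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi:.4f}")
```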
RESULTS
In the selected timeframe 86 patients underwent either PCNL, RIRS, or URS in our department.After applying the exclusion criteria, 60 patients were included in the study, 30 patients in prone position and 30 patients in dorsal lithotomy position.All patients in prone position underwent PCNL, and all patients in supine/lithotomy underwent RIRS or URS for renal and ureteral stones, respectively.
The mean age of the patients within the supine position and prone position groups were 54 years ± 9 and 52 years ± 8, respectively (P = 0.644).The mean BMI of the patients within the supine position and prone position groups were 28 Kg/m 2 ± 3 and 29 Kg/m 2 ± 4, respectively (P = 0.093).
The mean stone size was significantly greater in the prone position group (3.2 cm ± 0.2 in prone vs. 1.3 cm ± 0.8 in supine; P < 0.01). Of the 30 cases in the supine position, 17 had stones located in the ureter and 13 had stones located in the kidney. Of the 30 cases performed in the prone position, 2 had stones in the ureter and 28 had renal stones (P < 0.001 for the variable "stone location", indicating a significant difference in stone location between the groups).
The distribution of patients by FiO 2 and ASA score was uniform.Forty-six patients had ASA score = 1, 23 in each group, and 14 patients had ASA score = 2, 7 in each group.One patient in the supine position group and 4 in the prone position group had diabetes mellitus (P = 0.161).
Statistically significant differences between the two cohorts, prone and supine position, were found in PEEP, procedure duration, PaO 2 at the beginning of the procedure, SaO 2 at the beginning and at the end of the procedure, PaCO 2 at the beginning of the procedure and at min 5 and pH at the beginning of the surgery [Table 1].
PaO2 increased significantly over the course of surgery in the prone position group and was statistically significantly higher at the end of the surgery.
DISCUSSION
Endourology is the preferred option for most renal and ureteral calculi [1,2]. Endourological procedures such as PCNL can be performed in different positions [5,9,10]. Choosing the optimal position in endourology is essential for achieving beneficial outcomes without any additional harm [3]. Modifications in position are related to variations in pulmonary ventilation and perfusion [11]. Although it is generally recommended that the proper positioning of the patient be selected according to the surgeon's experience [6], it is debated whether there is a clinically significant difference in anesthesiologic risk during endourological procedures in the supine position compared with the prone position. The objective of our study was to compare supine and prone positions in terms of arterial blood gases during PCNL, RIRS, and URS at the beginning, 5 min later, 15 min later, and at the end of the procedure, to assess whether one position is safer than the other in terms of anesthesiologic risk.
According to our data from 60 patients, statistically significant differences between the two cohorts, prone and supine position, were found in PEEP, PaO2 at the beginning of the procedure, SaO2 at the beginning and at the end of the procedure, PaCO2 at the beginning of the procedure and at min 5, and pH at the beginning of the surgery. However, despite these statistically significant differences, all values were within accepted normal ranges [14]. PaO2 at the beginning and at minute 5 was greater in the supine position group, but at min 15 and at the end it was greater in the prone position group; nevertheless, all figures were within accepted normal values. Similarly, SaO2 at min 15 and at the end, and PaCO2 at the beginning of the procedure, were higher in the prone position group, but all figures were within accepted normal values. pH at minute 5 of the procedure was higher in the supine position group (7.41) than in the prone group (7.39), but all numbers were within accepted normal values.
Nevertheless, when studying the data dynamically, from the beginning to 5 min later, to 15 min later and to the end of the surgery, it is clear that prone position was related to an increase in PaO 2, from roughly 84 mmHg to around 108 mmHg, and a drop in PaCO 2, from approximately 42 mmHg to about 40 mmHg.These changes cannot be noticed for supine/lithotomy position.
Our findings are in accordance with the data presented by Karami et al. who conducted a similar clinical trial including a total of 90 patients, 30 per group like in our study, where they compared arterial blood gas analysis of patients undergoing PCNL in supine, lateral and prone position.
The weakness of their study was that they only measured the variables at the beginning of the procedure and 20 min after setting the position [12] so they did not record the full trend from beginning to end of the procedure.We addressed this issue by adding measurements until the surgery is over.
Likewise, Yadav et al. demonstrated a decrease in PaCO2 when the patient was turned from the supine to the prone position during cervical spine surgery, with no significant change in the arterial to end-tidal CO2 gradient [15]. In contrast, Kwee et al. reported that one of the most important complications of prone PCNL was cardiovascular compromise [16] due to the increased pressure on anterior structures, which results in complications associated with reduced abdominal and respiratory compliance, compression of vital organs, and hemodynamic changes that require close monitoring [16,17]. Prone surgeries may pose a risk for position-related complications. However, this information should be interpreted with caution: as such, there is no evidence of anesthesiologic complications specific to prone PCNL, and the data come mostly from studies of neurosurgical or traumatological procedures, which involve longer operative times than PCNL.
In endourology, current enhanced technological equipment and the growing experience of surgeons have led to a significant decline in the duration of endourological procedures [4]. This is probably one of the most determinant factors in minimizing the position-related anesthesiologic risks described for prone surgeries in other fields [16,18]. In our study, the fact that all patients in the prone position underwent PCNL, and all patients in the supine position underwent RIRS or URS, could be considered a limitation. The patients were not subjected to the same procedure because, in our department, the standardized approach for PCNL is the prone position, and all PCNL procedures, without exception, are performed in that position. However, since the aim of the study was to evaluate whether the prone and supine positions affect anesthesiologic safety in terms of arterial blood gases, we argue that the differences in surgical techniques did not affect our results and did not introduce any additional bias.
Exclusion criteria included surgeries lasting more than 1 h. However, there was no need to exclude any case for this reason, because in our department the vast majority of endourology surgeries are limited to 1 h to prevent postoperative septic and bleeding complications.
Results of patients with known cardiovascular and respiratory comorbidities might be different from our data since cases of known cardiovascular or respiratory dysfunction were excluded.
CONCLUSIONS
In our study, both prone and supine positions were safe with regard to anesthesiologic risk and showed no clinically relevant differences in individual comparisons of arterial blood gas parameters. However, the prone position was related to a gradual increase in PaO2 and a drop in PaCO2 from the beginning to the end of the surgery, leading to improved oxygenation.
Financial support and sponsorship
Nil.
Table 1 : Mean and standard deviation together with P value are displayed for all parameters for supine and prone positions
PEEP: Positive end-expiratory pressure, PaO 2 : Pressure of oxygen in arterial blood, SaO 2 : Oxygen saturation, PaCO 2 : Pressure of carbon dioxide in arterial blood, SD: Standard deviation | 2023-07-11T18:25:43.529Z | 2023-06-16T00:00:00.000 | {
"year": 2023,
"sha1": "73702b7803411057deec5896fb8e3bf823f4f4b2",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ua.ua_113_22",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "25eb7ab11b933159b366f56fc459f30b8bc19255",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11187621 | pes2o/s2orc | v3-fos-license | Diagnosis of Leishmaniasis
Leishmaniasis is a clinically heterogeneous syndrome caused by intracellular protozoan parasites of the genus Leishmania. The clinical spectrum of leishmaniasis encompasses subclinical (not apparent), localized (skin lesion), and disseminated (cutaneous, mucocutaneous, and visceral) infection. This spectrum of manifestations depends on the immune status of the host, on the parasite, and on immunoinflammatory responses. Visceral leishmaniasis causes high morbidity and mortality in the developing world. Reliable laboratory methods become mandatory for accurate diagnosis, especially in immunocompromised patients such as those infected with HIV. In this article, we review the current state of the diagnostic tools for leishmaniasis, especially the serological tests.
Introduction
The term leishmaniasis refers to a group of vector-borne diseases caused by obligate intracellular protozoan parasites of the genus Leishmania, belonging to the family Trypanosomatidae, order Kinetoplastida. Natural transmission of the parasite occurs primarily via the bite of infected female sandflies of the genus Phlebotomus in the Old World and Lutzomyia in the New World. The disease can also be transmitted by shared syringes among intravenous drug abusers [1], by blood transfusion [2], by venereal infection [3,4], and congenitally from mother to infant [5]. However, these modes of transmission are very rare compared with vector-borne transmission. In a domestic environment, dogs are the most important reservoirs, maintaining the parasite endemic focus. Rodents also play a role as Leishmania reservoirs [6-8].
Leishmaniasis occurs in three main clinical forms: (i) cutaneous leishmaniasis (CL), which is caused by L. major, L. tropica, and L. aethiopica (Old World CL), L. infantum and L. chagasi (Mediterranean and Caspian Sea region CL), and L. amazonensis, L. mexicana, L. braziliensis, L. panamensis, L. peruviana, and L. guyanensis (New World CL); (ii) mucocutaneous leishmaniasis (MCL) or espundia, which is caused by L. braziliensis, L. panamensis, and L. guyanensis in the New World and is occasionally encountered in the Old World, caused by L. infantum and L. donovani; and (iii) visceral leishmaniasis (VL), which is caused by species of the L. donovani complex, mainly L. infantum, L. donovani, and L. chagasi. VL is also known as kala-azar, black fever, and Dumdum fever. There is a fourth form, known as diffuse cutaneous leishmaniasis (DCL), which is caused by L. amazonensis and L. aethiopica [9].
CL is the most prevalent form and manifests as skin lesions that can heal spontaneously. VL is the most severe form of the disease and, left untreated, is usually fatal. Post-kala-azar dermal leishmaniasis (PKDL) is a complication of VL, characterized by a macular, maculo-papular, or nodular rash on the face, torso, or other parts of the body. It occurs mainly in East Africa and the Indian subcontinent, and affected individuals act as a major reservoir of the parasite in areas of anthroponotic transmission [10]. The mucocutaneous form causes partial or total destruction of the mucous membranes of the nose, mouth, and throat.
Leishmania infections are worldwide in distribution; they have been reported in 98 countries on all continents.The disease is endemic in tropical and subtropical regions.There are an estimated 1.3 million new cases worldwide annually, of which 1 million are CL or MCL and another 300,000 cases are VL, with approximately 200 million people at risk of infection [11].Of the 1.3 million estimated cases, only 600,000 are actually reported [12].
VL occurs mainly (90% of cases) in Bangladesh, Brazil, Ethiopia, India, Nepal, South Sudan, and Sudan.The governments of these countries have committed to eliminate VL by 2015 and aim to reduce the incidence of VL to 1 per 10,000 population in endemic districts [13].VL is an important opportunistic infection in AIDS patients; in countries where there is poor access to antiretroviral therapy, the prevalence of visceral disease is rising [14].These alarming facts have been attributed in part to the absence of an effective VL vaccine [15].
CL is found primarily in Afghanistan, Algeria, Brazil, Colombia, the Islamic Republic of Iran, Pakistan, Peru, Saudi Arabia, the Syrian Arab Republic, and Tunisia.Almost 90% of MCL cases occur in Brazil, Peru, and Bolivia.The health burden of CL in these countries remains largely unknown, partly because those who are most affected live in remote areas and often do not seek medical attention [12,15].
Control of leishmaniasis requires a combination of intervention strategies; early diagnosis and treatment is an important aspect.In VL, diagnosis is made by combining clinical signs with parasitological or serological tests (rapid diagnostic tests and others).In addition, an accurate diagnostic test that can identify active VL versus asymptomatic disease remains a key component of measures that aim to control this serious disease [16].In CL and MCL, serological tests have limited value.Table 1 presents a summary of all the serological and antigen detection techniques commonly used for leishmaniasis diagnosis.
Diagnosis of cutaneous and mucocutaneous leishmaniasis
In the laboratory, diagnosis is made microscopically by direct identification of amastigotes in Giemsa-stained lesion smears of biopsies, scrapings, or impression smears.Amastigotes are observed as round or oval bodies, 2-4 μm in diameter, with characteristic nuclei and kinetoplasts.The material from the ulcer base usually has the highest yield; however, other authors have not found significant differences in the diagnostic outcomes when smears or culture samples are taken from the center or the border of the ulcer or from an incision made tangential from the ulcer [17,18].Conventionally, three to five aspirates from different lesions or portions of lesions are dissected.The first samples should be used for microscopy and the last for culture to minimize the risk of contamination [19].A combination of microscopy and culture increases diagnostic sensitivity to more than 85% [18,20].
Antileishmanial antibodies can be detected by serological tests; however, these are not the usual methods, because antibodies tend to be undetectable or present in low titers due to poor humoral response [21][22][23].
Culture and DNA detection by PCR are sensitive but are not currently practical in developing countries.
Diagnosis of visceral leishmaniasis
In developing countries where the disease is not prevalent, the existence of laboratory facilities enables an adequate and efficient follow-up of the disease.However, in developing countries with large numbers of patients in rural areas, simple diagnostic tools are necessary for field use [24].Laboratory diagnosis of VL includes microscopic observation and culture from adequate samples, antigen detection, serological tests, and detection of parasite DNA.
Culture and microscopic observation
Definitive VL diagnosis is supported by direct demonstration of parasites in clinical specimens and specific molecular methods [25][26][27].The commonly used samples are splenic or bone marrow aspirates.The presence of amastigotes can also be determined in other samples such as liver biopsies, lymph nodes, and buffy coats of peripheral blood.The sensitivity of the bone marrow stained with Giemsa is about 60% to 85%.In splenic aspirates, the sensitivity is higher (93%) [28], but sampling is associated with a risk of fatal hemorrhage in inexperienced hands.To increase the sensitivity, fluorescent dye-conjugated antibodies can be used [29].The sensitivity in peripheral blood smears is low, especially in individuals with low parasitemia.In addition, results are dependent on technical expertise and the quality of prepared slides.
Culture of the parasite can improve diagnostic sensitivity, but is tedious, time-consuming, and expensive, and thus seldom used for clinical diagnosis.There are new culture methods that improve sensitivity, such as the micro-culture method (MCM); recent modifications of this method involve using the buffy coat and peripheral blood mononuclear cells [30,31].
The culture media used may be biphasic, such as Novy-MacNeal-Nicolle medium and Tobie's medium (for conversion of amastigotes to promastigotes), or monophasic, such as Schneider's insect medium, M199, and Grace's insect medium (for amplifying parasite numbers) [29].
Antigen detection in urine
Several studies have demonstrated leishmanial antigen in the urine of VL patients.In 1995, De Colmenares et al. reported two polypeptide fractions of 72-75 kDa and 123 kDa in the urine of kala-azar patients [32].In 2002, Sarkari et al. described a urinary 5-20 kDa carbohydrate-based, heat stable antigen of VL patients [33].A latex agglutination test (KAtex, Kalon Biological, UK) for detection of this antigen in urine samples was evaluated using samples from confirmed cases and controls from endemic and non-endemic regions.This test showed good specificity (82% to 100%), but had low to moderate sensitivity that ranged from 47% to 95% [34][35][36][37][38]. Nowadays, this method is useful for the diagnosis of disease in cases with deficient antibody production.In this respect, this method has reported 100% sensitivity and 96% specificity in immunocompromised patients [39].In another study, 87% specificity and 85% sensitivity were obtained for primary VL in HIV-coinfected patients, and the method had predictive capabilities in the follow-up of treatment and detection of subclinical infection in Leishmania/HIV co-infected cases [40].Another urinary leishmanial antigen, a lowmolecular weight, heat-stable carbohydrate has been detected in the urine of VL patients by an agglutination test with 60% to 71% sensitivity and 79% to 94% specificity [41].In summary, the latex agglutination test is simple, easy to perform, inexpensive, rapid, and can be used as a screening test.
Efforts are being made to improve the performance of this technique, because it promises to be a test of cure in populations of developing areas [28].
Serological diagnosis
Specific serological diagnosis is based on the presence of a specific humoral response.Current serological tests are based on four formats: indirect fluorescent antibody (IFA), enzyme-linked immunosorbent assay (ELISA), western blot, and direct agglutination test (DAT).The sensitivity depends upon the assay and its methodology, but the specificity depends on the antigen rather than the serological format used.
It has been noted that all antibody detection tests share the same drawbacks; the antibodies remain positive for many months after the patient has been cured and do not differentiate between current and past infection.In endemic regions, asymptomatically infected persons can also be positive in these tests.
Indirect fluorescent antibody (IFA) test
The IFA test shows acceptable sensitivity (87%-100%) and specificity (77%-100%) [41,42].Promastigote forms should be the antigens of choice for diagnosis of visceral leishmaniasis by the IFA test because they minimize cross-reactivity with trypanosomal sera [43].The antibody response is detectable very early in infection and becomes undetectable six to nine months after cure; hence, if the antibodies persist in low titers, it is a good indication of a probable relapse [44].The need for a sophisticated laboratory with a fluorescence microscope restricts use of the IFA test to reference laboratories.
Enzyme-linked immunosorbent assay (ELISA)
ELISA is the preferred laboratory test for serodiagnosis of VL.The technique is highly sensitive, but its specificity depends upon the antigen used.Moreover, this assay can be performed easily and is adaptable for use with several antigenic molecules.
One of the antigens used in the ELISA test is a crude soluble promastigote antigen (CSA) that is obtained by freezing and thawing live promastigotes.
A conserved portion of a kinesin-related protein recombinant antigen from a cloned protein of L. chagasi, called rK39, has been reported to be highly reactive to sera from human and canine VL cases when run in an ELISA format [58,59].Using this recombinant antigen, 99% specificity and sensitivity were reported in immunocompetent patients with clinical VL.In India, this antigen was reported with a sensitivity of 98% and a specificity of 89% [60]; however, a report from Sudan and other countries revealed that this antigen showed low sensitivity (75%) and specificity (70%) [61].In HIV-positive patients, rK39-ELISA showed higher sensitivity (82%) than the IFA test (54%), with higher predictive value for detecting VL [62].With successful therapy, antibody titers declined steeply at the end of treatment and during follow-up; in contrast, patients who relapsed showed increased titers of antibodies to rK39.This suggests the possible application of rK39-ELISA in monitoring drug therapy and detecting relapse of VL [51].
In another study, two hydrophilic antigens of Leishmania chagasi (rK9 and rK26) were used, adding to the list of available antigens for serodiagnosis of VL [63]. Another recombinant kinesin-related protein used in the ELISA assay is rKE16; for VL diagnosis, this antigen has proved as sensitive and specific as rK39 when tested in patients from China, Pakistan, and Turkey [64].
Another new assay, based on the detection of the K28 fusion protein in studies performed in Sudan and Bangladesh with 96% sensitivity in Sudan and 98% in Bangladesh, has been developed [65].Moreover, heat shock proteins HPS70 or histones proteins H2A, H2B, H3, and H4 may have potential use for serodiagnosis of VL [66,67]; furthermore, lipid-binding proteins (LBPs) as antigens have shown high levels of sensitivity and an absence of cross-reactions with the sera of patients with other diseases [69].The ELISA test, due to the requirement of skilled personnel, sophisticated equipment, and electricity, is not used in endemic regions for the diagnosis of VL.
Immunoblotting (western blot)
For this type of testing, promastigotes are cultured to log phase, lysed, and the proteins are separated on SDS-PAGE.
The separated proteins are electrotransferred onto a nitrocellulose membrane and probed with serum from the patient.The western blot technique provides detailed antibody responses to various leishmanial antigens [69,70], and has been found to be more sensitive than the IFA test and ELISA, especially in co-infected HIV patients with VL [71][72][73], but the drawbacks of the technique (equipment and time requirements, cumbersomeness, and cost) limit its use to research laboratories.
Direct agglutination test (DAT)
The DAT is based on direct agglutination of Leishmania promastigotes that react specifically with anti-Leishmania antibodies in the serum specimen, resulting in agglutination of the promastigotes.Whole, trypsinized, coomassie-stained promastigotes can be used either as a suspension or in freeze-dried form that can be stored at room temperature for at least two years, facilitating its use in the field [74,75].
Chappuis et al. [76], in a meta-analysis that included thirty studies evaluating DAT, found that the DAT had sensitivity and specificity estimates of 94.8% (95% confidence intervals [CI], 92.7-96.4) and 97.1% (95% CI, 93.9-98.7),respectively.Moreover, in settings where parasitological confirmation is not feasible, the freeze-dried DAT together with classical clinical features of VL can be used for diagnosis at a cut-off for positive DAT, which is 1:12,800 as in endemic areas.The DAT is simpler than many other tests but presents severe problems in terms of reproducibility of results, which depends on antigen elaboration [77].A new antigen elaboration method, the EasyDAT described in 2003, shows the same sensitivity, specificity, and durability as the traditional DAT antigen method but offers the additional advantages of cost reduction and standardization [78].
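As an illustration of how sensitivity, specificity and confidence intervals such as those quoted above are obtained from a comparison against a reference standard, the sketch below uses invented counts and a simple normal-approximation interval; the cited meta-analysis pooled many studies and used more elaborate methods.

```python
# Illustrative sketch (not the cited meta-analysis): sensitivity and specificity with
# approximate 95% confidence intervals from a 2x2 table. Counts are invented.
import math

def proportion_ci(successes: int, total: int, z: float = 1.96):
    """Point estimate and normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical DAT results: 190 of 200 confirmed VL cases test positive,
# 291 of 300 controls test negative.
sens, sens_lo, sens_hi = proportion_ci(190, 200)
spec, spec_lo, spec_hi = proportion_ci(291, 300)
print(f"Sensitivity: {sens:.1%} (95% CI {sens_lo:.1%}-{sens_hi:.1%})")
print(f"Specificity: {spec:.1%} (95% CI {spec_lo:.1%}-{spec_hi:.1%})")
```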
Although the DAT for the serodiagnosis of visceral leishmaniasis has high sensitivity and specificity, it still has some limitations; among these are the relative long incubation time (18 hours) and the serial dilutions of the samples that must be made.In order to circumvent these problems, Schoone et al. in 2001 [79] developed a fast agglutination screening test (FAST).The FAST utilizes only one serum dilution (qualitative result) and requires three hours of incubation.This makes the test very suitable for the screening of large populations.The sensitivity and specificity of FAST were found to be 91.1%-95.4% and 70.5%-88.5%,respectively [80,81].
Anti-Leishmania antibodies may persist for years as a result of previous VL infection, so titers measured by DAT may remain positive for up to five years after recovery in > 50% of VL patients, which may limit the DAT's widespread applicability in regions of endemicity.Although the DAT is the first real field test, it remains the serological test of choice as well as the first antibody detection test for VL used in field settings, particularly in many developing countries and in Leishmania/HIV co-infections [73,82,83].
Immunochromatographic assay (IC)
The immunochromatographic test using rK39 antigen (39 amino-acid-repeat recombinant leishmanial antigen from L. chagasi) has become popular in recent years.It is a qualitative membranebased immunoassay with nitrocellulose strips impregnated with recombinant K39 Leishmania antigen.A drop of blood or serum is smeared over the pad of the strips and dipped in a small amount of buffer; the results are ready within a few minutes.
In clinical cases of VL, the rK39 IC showed variation in the sensitivity and specificity among different populations.The rK39 IC showed 100% sensitivity and 93%-98% specificity in India [84,85], 90% sensitivity and 100% specificity in Brazil [86], and 100% sensitivity and specificity in the Mediterranean area [87].In other reports in southern Europe, the rK39 IC test was positive in only 71.4% of the cases of VL [88]; in Sudan, rK39 IC showed a sensitivity of 67% [61].These differences in sensitivity may be due to differences in the antibody responses observed in different ethnic groups.The rK39 IC assay has proven to be versatile in predicting acute infection, and it is the only available format for diagnosis of VL with acceptable sensitivity and specificity levels.It is also easy to use in the field, rapid (15-20 minutes), cheap, and gives reproducible results.Like the DAT assay, IC is positive in a significant proportion of healthy individuals in endemic regions and for long periods after cure; hence, this limits its usefulness in persons with a previous history of VL who present with recurrence of fever and splenomegaly, as these tests cannot discriminate between a case of VL relapse and other pathologies.
Latex agglutination test (LAT)
The LAT is one of the recently developed rapid diagnostic tests for the rapid detection of anti-Leishmania antibodies against the A2 antigen derived from the amastigote form as well as those against crude antigens derived from the promastigote form of an Iranian strain of L. infantum.In a comparative study with the DAT, the sensitivity of tested human sera from DAT-confirmed patients yielded 88.4% sensitivity, while the specificity was 93.5% on A2-LAT amastigote, with a higher degree of similarity in accuracy to the DAT [89,90].
Many PCR-based methods for diagnosis of VL have been described with different specificities and sensitivities; PCR assay sensitivity depends on the sample used.Sensitivity is highest (near 100%) in spleen or bone marrow [93,97,102] samples.The ideal sample is peripheral blood due to its non-invasive character.Using peripheral blood, the sensitivity ranges described vary from 70% to 100% [93,102,103].
A comparative clinical study between conventional microbiologic techniques and a leishmania speciesspecific PCR assay in HIV-co-infected and HIVuninfected patients has shown the sensitivity of the leishmania species-specific PCR to be 95.7% for bone marrow and 98.5% for peripheral blood samples; the sensitivity in HIV-co-infected and non-HIV-coinfected adults was 100% [94].
A PCR-ELISA was used to diagnose VL in HIV-negative patients; peripheral blood was used and yielded a high sensitivity [104]. A similar PCR-based technique was applied in the diagnosis of VL in HIV patients, with good results [28].
Another interesting approach is a rapid fluorogenic PCR technique.Wortmann et al. used a fluorescent DNA probe for a conserved rRNA gene that is amplified using flanking primers; this technique using clinical samples showed great sensitivity and specificity [98].The real-time PCR has the advantage of being quantitative, which could be useful in the follow-up of treatment, allowing for the assessment of the parasite burden [106,107].
However, PCR techniques remain complex and expensive, and in most VL-endemic countries, they are restricted to a few teaching hospitals and research centers.
Diagnosis of VL-HIV co-infection
According to the World Health Organization (WHO) [108], an estimated 35 million people worldwide are living with HIV. Immunosuppression may reactivate latent Leishmania infection in asymptomatic individuals, and Leishmania has emerged as an opportunistic disease among HIV patients in endemic areas [109-112]. Moreover, it has been noted that the risk of clinical VL in HIV patients is increased by 100 to 1,000 times [111]. Although cases of co-infection have so far been reported in 33 countries worldwide, most of the cases have been found in sub-Saharan African countries, especially in East Africa. In Humera (in northwest Ethiopia), the proportion of VL patients co-infected with HIV increased from 18.5% in 1998-99 to 40% in 2006 [113].
VL in HIV patients often has an atypical clinical presentation; only 75% of HIV-infected patients, as opposed to 95% of non-HIV-infected patients, exhibit the characteristic clinical pattern, namely fever, splenomegaly, hepatomegaly, and gastrointestinal involvement [114][115][116][117]. The diagnostic principles remain the same as those for non-HIV-infected patients. Leishmania amastigotes can often be demonstrated in the bone marrow, in buffy coat preparations, and in unusual locations (stomach, colon, or lungs) [118], but microscopic detection has lower sensitivity in VL-HIV patients.
For HIV patients, the sensitivity of antibody-based immunological tests such as the IFA test and ELISA is low. Serological tests have limited diagnostic value because over 40% of co-infected individuals have no detectable specific antibody levels against Leishmania [119]. In their meta-analysis, Cota et al. [73] summarized the accuracy of different serological techniques used for diagnosing HIV-co-infected persons. The estimated sensitivities, using random effects models with their respective 95% confidence intervals, were: IFA test, 51%; ELISA, 66% (40% to 88%); DAT, 81% (61% to 95%); and immunoblotting, 84% (75% to 91%). The estimated specificities were: immunoblotting, 82% (65% to 94%); ELISA, 90% (77% to 98%); IFA test, 93% (81% to 99%); and DAT, 90% (66% to 100%). Because of the low sensitivity of serological tests for VL diagnosis in HIV-infected patients, at least two different serological tests should be used for each patient to increase the sensitivity of antibody detection [120]. The detection of 72-75 kDa and 123 kDa polypeptide fractions of Leishmania antigen in the urine of patients with VL was 96% sensitive and 100% specific; furthermore, these antigens were no longer detectable after three weeks of treatment, suggesting good prognostic value [32]. In conclusion, serology should not be used to rule out a diagnosis of VL among HIV-infected patients; an additional serological test and/or molecular or parasitological methods may be necessary if the results of serological tests are negative.
Conclusions
Leishmaniasis is a global health problem and a major killer in endemic countries. Recent years have witnessed considerable progress in the diagnosis of Leishmania infection. The main challenge has been to establish a gold standard test on which effective programs to control and eradicate the disease can be built. Several methods are available for the laboratory diagnosis of leishmaniasis, including parasite detection by microscopic examination, culture, and molecular assays that detect parasite DNA (PCR). The molecular tests are more sensitive than microscopic examination and parasite culture, but they remain restricted to referral hospitals and research centers. Previous studies indicate that serological diagnosis is an alternative to invasive parasitological methods for large-scale and decentralized diagnosis of leishmaniasis, especially VL. These serodiagnostic techniques are, however, of limited use for CL and MCL because of low sensitivity and variable specificity. Serologic diagnosis of VL can be accomplished with many techniques, such as IFA, ELISA, western blot, rapid strip testing for the rK39 antigen (IC), DAT, or detection of Leishmania antigen in urine by latex agglutination (KAtex).
Selection of the serological test should be based on parameters such as region, cost, sensitivity, specificity, feasibility, sustainability, and field applicability, especially in problematic endemic areas. Detection of antibody responses to parasite antigens has inherent limitations. First, high serum antibody levels are present in both asymptomatic and active VL. Second, serum anti-Leishmania antibodies remain detectable for several years after cure and complicate the diagnosis of relapse. Third, the specificity of these tests in VL-endemic areas is variable, because a number of individuals without clinical VL carry anti-leishmanial antibodies. Additional challenges arise from Leishmania/HIV co-infection, which may present without the typical clinical signs; the immunosuppression associated with co-infection makes the serological approach to diagnosis especially challenging. However, DAT, immunoblotting, and the latex agglutination test (KAtex) have proven superior for the diagnosis of these co-infection cases. Negative serological results cannot reliably exclude a diagnosis of VL among HIV-infected patients; an additional molecular, serological, or parasitological test may be necessary to reach an accurate diagnosis if the results of serological tests are negative.
Table 1 .
Serological and antigen detection techniques commonly used for leishmaniasis diagnosis | 2017-04-08T03:14:15.265Z | 2014-08-13T00:00:00.000 | {
"year": 2014,
"sha1": "1a210e6afd2bb2db0d56c3c5ab3839331b0b83d8",
"oa_license": "CCBY",
"oa_url": "https://jidc.org/index.php/journal/article/download/25116660/1120",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1a210e6afd2bb2db0d56c3c5ab3839331b0b83d8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
215814319 | pes2o/s2orc | v3-fos-license | Event reconstruction for KM3NeT/ORCA using convolutional neural networks
The KM3NeT research infrastructure is currently under construction at two locations in the Mediterranean Sea. The KM3NeT/ORCA water-Cherenkov neutrino detector off the French coast will instrument several megatons of seawater with photosensors. Its main objective is the determination of the neutrino mass ordering. This work aims at demonstrating the general applicability of deep convolutional neural networks to neutrino telescopes, using simulated datasets for the KM3NeT/ORCA detector as an example. To this end, the networks are employed to achieve reconstruction and classification tasks that constitute an alternative to the analysis pipeline presented for KM3NeT/ORCA in the KM3NeT Letter of Intent. They are used to infer event reconstruction estimates for the energy, the direction, and the interaction point of incident neutrinos. The spatial distribution of Cherenkov light generated by charged particles induced in neutrino interactions is classified as shower- or track-like, and the main background processes associated with the detection of atmospheric neutrinos are recognized. Performance comparisons to machine-learning classification and maximum-likelihood reconstruction algorithms previously developed for KM3NeT/ORCA are provided. It is shown that this application of deep convolutional neural networks to simulated datasets for a large-volume neutrino telescope yields competitive reconstruction results and performance improvements with respect to classical approaches.
Introduction
Precision measurements of the fundamental properties of neutrinos are one of the opportunities that might allow us to discover and understand the physics that exists beyond the established Standard Model of particle physics.
The detection of neutrinos, both for fundamental particle physics and high-energy astrophysics, can be achieved with the deep-sea and photon-detection technology that has been developed by the ANTARES [1] and KM3NeT [2] Collaborations for very-large-volume water-Cherenkov detectors.
KM3NeT/ORCA, the low-energy detector of KM3NeT, addresses the determination of a still unknown, but fundamental parameter of neutrino physics: the neutrino mass ordering.The experiment focuses on the measurement of the energy-and zenith-angle-dependent oscillation patterns of cosmic-ray-induced neutrinos with a few-GeV energy that originate in the atmosphere and traverse the Earth [3].
The power to distinguish between the two different mass orderings is linked to the detection of an excess or deficit of neutrino events in different regions of these oscillation patterns.This sensitivity increases with better energy and zenith-angle resolution and flavour identification for the interacting neutrinos, and finer control of systematic effects that influence the measurement.Therefore, one of the most important goals in the analysis of KM3NeT/ORCA data is the development and characterisation of neutrino event reconstruction and classification algorithms that improve these resolutions.
The neutrino detection principle of water-or ice-based large-scale Cherenkov detectors relies on the detection of Cherenkov photons induced by charged secondary particles created in a neutrino interaction with the target material.All neutrino flavours can interact through the weak neutral current (NC) mediated by the exchange of a Z 0 boson.This interaction results in a particle shower composed mainly of hadrons, generically referred to as a hadronic system, while the scattered neutrino escapes undetected.An interaction via the weak charged current (CC), with the exchange of a W + or W − boson, also often results in a hadronic shower at the interaction vertex.Additionally, a lepton of the same flavour as the interacting neutrino is created, which carries a fraction of the incoming neutrino energy.
A muon neutrino or muon antineutrino CC interaction, ν CC µ, results in an outgoing muon in the final state. (From now on, the term 'neutrino' always refers to both neutrinos and antineutrinos, if not stated otherwise.) The muon appears as a track-like light source in the detector and can therefore be identified with good confidence, depending on its track length. The visible trajectory of the muon is determined by its energy loss; in water it amounts to roughly 4 m per GeV of muon energy in the relevant energy regime of a few GeV.
At the energy ranges considered in KM3NeT/ORCA, all neutrino-nucleon NC, ν CC e , and ν CC τ interactions, with the exception of roughly 18 % of tau leptons decaying into muons, create a particle shower of a few meters length, that appears as an elongated, but localised, light source compared to the typical distance scales between the detector elements (9-20 m, cf.Sec.2.1).This event type is referred to as shower-like.The outgoing electron from a ν CC e event initiates an electromagnetic shower, a cascade of e ± -pairs, while the hadronic system, typically at the neutrino interaction vertex, develops into a hadronic shower with large event-to-event fluctuations and a possibly complex structure of hadronic or electromagnetic sub-showers, depending on the decay modes of individual particles in the shower.
Although an electromagnetic shower consists of many e ± -pairs with rather short path lengths (about 36 cm radiation length in water, see Sec. 33.4.2. in Ref. [4]) and overlapping Cherenkov cones, the small pair opening angle preserves the Cherenkov angle peak of the total angular light distribution.This results in a single Cherenkov ring projected onto the plane perpendicular to the shower axis.Similarly, each hadronic shower particle with energy above the Cherenkov threshold will produce a Cherenkov ring.Therefore, hadronic showers show a variety of different signatures due to the various possible combinations of initial hadron types, their momenta and the diversity of their hadronic interactions in the shower evolution.
While electromagnetic cascades show only negligible fluctuations in the number of emitted Cherenkov photons and in the angular light distribution, hadronic cascades show significant intrinsic fluctuations in the relevant few-GeV energy range.These intrinsic fluctuations of hadronic cascades and the resulting limitations for the energy and angular resolutions have been studied in detail in Ref. [5].
Dedicated reconstruction algorithms for track-like and shower-like events have been developed for KM3NeT/ORCA based on maximum-likelihood methods.Additionally, a machine-learning algorithm, based on Random Forests [6], has been employed successfully to classify track-like, shower-like, and background events.These algorithms, their implementation and performance are described in the KM3NeT Letter of Intent [2].
In the last few years, significant progress has been made in the machine-learning community due to the advent of deep-learning techniques.A particularly successful deep-learning concept is that of a deep neural network.Specialised neural network model architectures have been designed for individual use cases.In the field of computer vision, Convolutional Neural Networks (CNNs) have led to a strong increase in image recognition performance.From 2010 to 2016, the error rates in e.g. the popular ImageNet image classification challenge improved by a factor of 10 [7] [8].
Since the data of many high-energy physics experiments can be interpreted in a way similar to typical images in the computer vision domain, these techniques have already been exploited by several experiments [9][10][11][12].As an example, the classification performance of neutrino interactions in the NOvA experiment has been significantly improved by employing CNNs compared to classical reconstruction tools [13].
In this paper, we present for the first time the application of CNNs to detailed Monte Carlo simulations of a large water-Cherenkov neutrino detector with the goal to provide a comprehensive reconstruction pipeline for KM3NeT/ORCA, starting from data at the level of the data-acquisition system.For this purpose, a Keras-based [14] software framework, called OrcaNet [15], has been developed that simplifies the usage of neural networks for neutrino telescopes.Here, we apply this framework to neutrino event reconstruction and classification in KM3NeT/ORCA, and compare the results achieved with the algorithms described in Ref. [2].We note that also these algorithms continue to be developed and improved in KM3NeT.
The paper is organised as follows.Section 2 introduces the KM3NeT/ORCA detector, the Monte Carlo simulation chain used to generate training and validation data for the CNNs and introduces relevant aspects of the trigger algorithms.Section 3 gives a brief introduction to the main functional features of the employed CNNs, while Section 4 details the developed pre-processing chain that creates suitable input images from the Monte Carlo simulation data.Section 5 provides an overview of the general network architecture that is shared by all CNNs that have been designed for the reconstruction and classification tasks, which together define the analysis pipeline for KM3NeT/ORCA.The concepts and performance of these specific CNNs, as well as exemplary comparisons to their counterpart algorithms, are explained in the next sections.Section 6 explains the background classifier, Section 7 the event topology classifier used to distinguish track-like and shower-like events, while Section 8 introduces event regression and its respective uncertainties, i.e. the reconstruction of the direction, energy, and vertex of the incident neutrinos.Section 9 summarises and concludes the paper.
The KM3NeT/ORCA experiment
The KM3NeT research infrastructure is under construction at two sites in the Mediterranean Sea.The KM3NeT/ORCA detector is located about 40 km off-shore of Toulon in the south of France.Its main goal is to detect atmospheric neutrinos with GeV energies (3-40 GeV), while KM3NeT/ARCA, located south-east of Sicily, aims to investigate astrophysical neutrinos.The main design principles and scientific goals of the experiment can be found in the KM3NeT Letter of Intent [2].
Layout of the detector
The detector volume of KM3NeT/ORCA will be instrumented with 115 Detection Units (DUs), which are vertical, string-like structures anchored to the seabed and held upright by a buoy at the top of the DU.Currently, the first six DUs have been installed and are operational.Each DU holds 18 Digital Optical Modules (DOMs).The DOMs contain 31 photomultiplier tubes (PMTs) with a diameter of 3" each.The PMTs are used to measure two quantities, the arrival time and the time range that the anode output signal remains above a tunable threshold (time-over-threshold, ToT) with a time resolution on the nanosecond scale.The ToT can be used as a proxy for the amount of light registered by the PMT.The vertical spacing between the DOMs on a single DU is on average 9 m, while the average horizontal distance between the DUs is about 20 m.This results in a total instrumented volume of about six megatons of seawater.
There are two main sources of background in KM3NeT/ORCA, namely atmospheric muons reaching the detector from above and random optical background due to beta decays of 40 K in seawater and bioluminescence.Optical background, which is dominated by decays of 40 K, accounts for about 7 kHz of uncorrelated single photon noise per PMT with a rate of two-fold coincidences of about 500 Hz per DOM [16].
The atmospheric muon background can be reduced significantly by requiring the reconstructed vertex position to be inside or close to the instrumented detector volume. In addition, atmospheric muons predominantly enter the detector from above, so the background can be further reduced by discarding events for which the direction of the emerging particle trajectory is reconstructed as downward-going.
Monte Carlo simulations and trigger algorithms
Detailed Monte Carlo (MC) simulations of the detector response have been produced for three distinct types of triggered data, namely atmospheric muons, random noise and neutrinos.A detailed introduction to the KM3NeT simulation package and the trigger algorithms can be found in Ref. [2].
For neutrinos, ν CC e and ν CC µ interactions on nucleons and nuclei in seawater have been simulated, while all NC interactions are represented by ν NC e , since the detector signature is identical for all flavours.Charged-current interactions of ν τ are neglected for simplicity, as the resulting detector signatures for the different decay modes of the tau lepton are very similar to either ν CC e or ν CC µ .The distance between DUs for the simulations employed in this work is on average 23 m and hence 3 m larger than for the simulations used in Ref. [2].The vertical inter-DOM spacing is set to 9 m on average, which was identified as optimal for the determination of the neutrino mass ordering in Ref. [2].The highest level of simulated data consists of a list of hits, i.e. time stamp, ToT, and identifier, for all PMTs in the detector.In addition to the signal hits induced by the interactions of neutrinos and by atmospheric muons in the sensitive detector volume, simulated background hits due to random noise are added such that the simulated triggered data matches the real conditions as closely as possible.
After hits have been simulated, several trigger algorithms that rely on causality conditions are applied.Once a trigger has fired to define an event, all hits that have fired the trigger, including signal and background hits, are labelled as triggered hits.Since the trigger algorithm is not fully efficient in identifying all signal hits, a larger time window than the one defined for the triggered hits is saved for further analysis.Assuming a triggered event with t first as the time of the first triggered hit and t last as the time of the last triggered hit, all photon hits in each PMT in a time window [t first − t marg , t last + t marg ] are recorded, where t marg is defined by the maximum amount of time that a photon propagating in water would need to traverse the whole detector.As a result, the total time window of triggered neutrino events in KM3NeT/ORCA is about 3 µs.
A summary with detailed information about the simulated data for KM3NeT/ORCA is shown in Tab. 1.
Convolutional neural networks
This section introduces the concepts and nomenclature used in the description of the networks that have been developed and used in this work.
Convolutional neural networks [19,20] form a specialised class of deep neural networks. Generally, neural networks are used to approximate a function f(x), which maps a number of inputs x_i ∈ X to some outputs y_i ∈ Y. The goal is to find an approximation f̂(x) to the function f(x) that describes the relationship between the inputs x_i and the outputs y_i. Neural networks are based on the concept of artificial neurons that are arranged in layers. For a fully connected neural network, each neuron in a layer is connected to all neurons in the previous layer.
Stacking multiple layers of neurons can be interpreted as multiple functions acting on the input X in a chain. For a two-layer network and thus two functions f^(1) and f^(2) (the hat symbol of f̂ is omitted from this point on) this is
f(x) = f^(2)(f^(1)(x)).
Here, f^(1) refers to the first layer in the network and f^(2) to the second. The first layer of a neural network is called the input layer, the intermediate layers are called the hidden layers and the last layer is called the output layer.
Table 1 (caption; columns: event type, E_gen spectrum, N_trig [10^6]) [17]. N_trig is the number of events that remain after triggering. Atmospheric muons have been simulated with the MUPAGE package [18]. Random noise events have been simulated conservatively with a 10 kHz single rate per PMT with additional n-fold coincidences (600 Hz two-fold, 60 Hz three-fold, 7 Hz four-fold, 0.8 Hz five-fold and 0.08 Hz six-fold [16]). Time-varying increases of the hit rate due to bioluminescence in seawater have not been simulated.
In order to learn the relationship between (X, Y), learnable weights are used for each neuron. If a single neuron has inputs x_i, each x_i has a weight w_i associated with it. Additionally, a single learnable bias parameter b is added to increase the flexibility of the model to fit the data. The transfer function of a neuron, built from the weights w_i and the bias b, is linear:
z = Σ_i w_i x_i + b.
However, many physical processes in nature are inherently nonlinear. To account for this, the output of the transfer function can be wrapped in another, nonlinear function. Additionally, it can be shown that a nonlinear, two-layer neural network can approximate any function (Chap. 6.4.1 in Ref. [20]). The most commonly used nonlinear function is the rectified linear unit (ReLU):
ReLU(z) = max(0, z).
Such functions are called activation functions or layers. The weights of each neuron get updated iteratively during a training process. For this purpose, one needs to define a so-called cost or loss function, which measures the distance between the output of the neural network f(x) = y_reco and the ground truth y_true. This can, for example, be done by measuring the mean squared error and minimising (y_true − y_reco)^2.
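For illustration, the following minimal NumPy sketch (variable names are illustrative and not taken from OrcaNet) implements a two-layer fully connected network with a ReLU activation and a mean-squared-error loss:

import numpy as np

def relu(z):
    # Rectified linear unit: max(0, z), applied element-wise.
    return np.maximum(0.0, z)

def two_layer_forward(x, W1, b1, W2, b2):
    # First layer: linear transfer function followed by the ReLU activation.
    h = relu(W1 @ x + b1)
    # Second (output) layer: linear response only.
    return W2 @ h + b2

# Toy example with 3 inputs, 4 hidden neurons and 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

x, y_true = rng.normal(size=3), np.array([0.5])
y_reco = two_layer_forward(x, W1, b1, W2, b2)

# Mean squared error between the network output and the ground truth.
loss = np.mean((y_true - y_reco) ** 2)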
Typically, iterative gradient descent (Chap.4.3. in Ref. [20]) based optimisation algorithms are used that minimise the cost function until a low value is achieved.During this training process, the cost error is back-propagated using a back-propagation algorithm (Chap.6.5. in Ref. [20]), which allows for the tuning of the neural network's weights.
Convolutional neural networks are frequently used in domains, where the input can be expected to be image-like, i.e. in image or video classification.Therefore, several changes are made in the architecture of CNNs compared to fully connected neural networks.The main concepts of convolutional neural networks are based on only locally and not fully connected networks and on parameter sharing between certain neurons in the network.
Typical input images to convolutional neural networks are two-dimensional (2D).However, since most images are coloured, they in fact are encoded by three dimensions (3D): width, height and channel.Here, the channel dimension specifies the brightness for each colour channel (red, green and blue) of the image.This three dimensional array is then used as input to the first convolutional layer.
Similar to the input layer, the neurons in a convolutional layer are also arranged in three dimensions, called width, height and depth.As already mentioned, one of the main differences between convolutional layers and fully connected layers is that the neurons inside a convolutional layer are only connected to a local region of the input volume.This is often called the receptive field of the neuron.The connections of this local area to the neuron are local in space (width, height), but they are always full along the depth of the input volume.Hence, for a [32 × 32 × 3] image (width, height, channel) and receptive field size of 5 pixels, each neuron in the first convolutional layer is connected to a local [5 × 5 × 3] (width, height, depth) patch of the input.To each of these connections, a weight is assigned, such that each neuron has a [5 × 5 × 3] weight matrix, which is often called the kernel or filter.These weights are used in performing a dot product between the receptive field of the neuron and its associated kernel, also called the convolution process.Here, the total number of parameters for the single neuron would be 5 • 5 • 3 + 1 (bias) = 76.Additionally, each neuron at the same depth level covers a different part of the image with its receptive field.For more information the reader is referred to Refs.[19,20].
For CNNs, an important assumption is that abstract image structures, such as edges, occur multiple times in the image.Under this assumption, the neurons at a certain depth can share their weights, which significantly reduces the number of parameters in the network.The number of parameters in CNNs can be further reduced with the aid of pooling layers, which reduce the dimensionality of the layer outputs by selecting or combining the neuron outputs [19,20].
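A short Keras sketch (for illustration only) makes the parameter counting and weight sharing explicit: a single filter with a 5 × 5 receptive field applied to a [32 × 32 × 3] input has 5 · 5 · 3 + 1 = 76 parameters, independent of the image size:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),            # width, height, channels
    # One filter with a 5x5 receptive field; the kernel is shared across all
    # spatial positions, so the layer has 5*5*3 weights + 1 bias = 76 parameters.
    layers.Conv2D(filters=1, kernel_size=5, activation="relu"),
    # Pooling reduces the spatial dimensions without adding any parameters.
    layers.MaxPooling2D(pool_size=2),
])
model.summary()  # reports 76 trainable parameters for the Conv2D layer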
For the training of a neural network, the data are split up into a training, validation, and test dataset. The network is trained on the training dataset and validated by applying the network to the validation dataset. Once the training is finished, the network is applied to the test dataset to determine its performance. If the value of the loss function is significantly smaller for the training dataset than for the validation dataset, this is called overfitting with respect to the training dataset. The smaller the size of the training dataset, the higher the probability that the network will focus on the peculiarities of individual input images instead of generalising generic image features. In order to avoid overfitting, so-called regularisation techniques such as dropout layers have been developed [21]. In a dropout layer, inputs are randomly set to zero with a probability defined by the dropout rate δ. In order to set up a basic convolutional neural network, the convolutional layers are stacked. After the last convolutional layer, the multi-dimensional output array is reshaped, without reordering, into a one-dimensional array by a flattening layer. A small fully-connected network can then be added, in order to connect the outputs of the last convolutional layer to the output neurons of the full network.
Data pre-processing
For each hit in a simulated event, the PMT identifier, i.e. the relative coordinate of the hit PMT in a DOM, is recorded.Additionally, the time at which the PMT signal crosses the discriminator threshold, and the measured ToT, are stored.However, the ToT value itself is not used as input for the CNNs.In order to feed this four-dimensional event data to a CNN, the hits can be binned into rectangular pixels, such that each image encodes three spatial dimensions (XYZ) and the time dimension T.
Spatial binning
The number of pixels required to resolve the spatial coordinates of the individual PMTs inside the DOMs would be very large, and most bins would be empty due to the sparsely instrumented detector volume.Therefore, the pixelation is defined such that exactly one DOM fits into one bin, while some bins remain empty due to the detector geometry.In the case of the full KM3NeT/ORCA detector, this results in a 11 × 13 × 18 (XYZ) pixel grid.The information regarding which PMT in a DOM has been hit is discarded.This is corrected for by adding a PMT identifier dimension to the pixel grid, resulting in a XYZP grid.Since one DOM holds 31 PMTs, the final spatial shape of such an image is 11 × 13 × 18 × 31 (XYZP).Such image types can also be found in classical computer vision tasks, e.g. as coloured videos.The only difference is that in a conventional video the Z-coordinate is replaced by the time and the PMT identifier is replaced by the red-green-blue colour information.
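A possible implementation of this spatial binning is sketched below (the hit-array fields are hypothetical placeholders, not the actual KM3NeT data format); it fills a four-dimensional histogram over the DOM grid indices and the PMT identifier:

import numpy as np

def make_xyzp_image(dom_ix, dom_iy, dom_iz, pmt_id,
                    n_x=11, n_y=13, n_z=18, n_pmt=31):
    """Bin hits into an XYZP image; one spatial bin corresponds to one DOM."""
    hits = np.stack([dom_ix, dom_iy, dom_iz, pmt_id], axis=1)
    image, _ = np.histogramdd(
        hits,
        bins=(n_x, n_y, n_z, n_pmt),
        range=((0, n_x), (0, n_y), (0, n_z), (0, n_pmt)),
    )
    return image  # shape (11, 13, 18, 31): hit counts per DOM and PMT

# Example with random hit indices standing in for real simulated hits.
rng = np.random.default_rng(1)
n_hits = 200
img = make_xyzp_image(rng.integers(0, 11, n_hits), rng.integers(0, 13, n_hits),
                      rng.integers(0, 18, n_hits), rng.integers(0, 31, n_hits))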
Temporal Binning
The indispensable piece of information still missing in these images is the time at which a hit has been recorded.This information can be added as an additional dimension, such that the final image of an event is five-dimensional: XYZTP.
The time resolution in KM3NeT/ORCA is of the order of nanoseconds [22].As explained in section 2.2, the time length of an event is about 3 µs, implying 3000 bins for the time dimension to reach nanosecond resolution.However, with each additional bin, the size of the event image gets larger and this leads to additional computations in the first layer of a CNN.Hence, the number of bins of the final image should be as low as possible, while still containing the relevant timing information.
Most hits that lie outside the time range of the triggered hits are background hits and not signal hits.Therefore, background hits can be discriminated against to some extent by selecting the time range in which most signal hits are found.Investigating the distribution of the time of the signal hits relative to the mean of the triggered hit times in individual events, as depicted in Fig. 1 for ν CC µ events, shows that it is asymmetric and that the relevant time range can be reduced significantly for the image generation binning.
Since the time range covered by the triggered hits is different for each event, the time range selection is defined relative to the mean time of the triggered hits for each event.As can be seen in Fig. 1, only a small fraction of the signal hits are removed, as indicated by the timecut defined by the dashed, black lines.A compromise needs to be found between the width of the timecut window and the number of time bins, implying a certain time resolution available to the network.The specific values of the timecuts used in this work are reported in the respective image generation sections of the presented CNNs.The timecut window is a parameter that can be further optimised with respect to the final performance of a trained neural network.In this work, no such parameter optimisation studies have been carried out for any of the presented CNNs, which implies that their performance for specific use cases can likely be improved further.
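The event-relative hit selection and time binning described above can be sketched as follows (illustrative code; the window boundaries and the number of bins are chosen per classifier in the later sections):

import numpy as np

def apply_timecut(hit_times, triggered, t_low=-450.0, t_high=500.0, n_bins=100):
    """Select hits in a window around the mean triggered hit time and bin them.

    hit_times: times of all recorded hits of one event, in ns
    triggered: boolean mask marking the triggered hits of the event
    """
    t_mean = hit_times[triggered].mean()
    dt = hit_times - t_mean
    keep = (dt >= t_low) & (dt <= t_high)
    # Histogram of the selected hits in the event-relative time window.
    counts, edges = np.histogram(dt[keep], bins=n_bins, range=(t_low, t_high))
    return keep, counts, edges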
Multi-image Convolutional Neural Networks
After binning, the resulting images are five-dimensional: XYZTP.In order to train the neural networks presented in this paper, the deep learning framework TensorFlow [23] has been used in conjunction with the Keras [14] high-level neural network programming library.However, TensorFlow does not support convolutional layers which accept more than four dimensions as input, since five dimensional inputs are not a usual case in computer vision.Hence, the five dimensions of the XYZTP images need to be reduced to four dimensions, such that one can use three-dimensional convolutional layers.To this end, one image dimension is summed up, i.e. the information of individual PMTs in a DOM is discarded, such that the resulting image is only four-dimensional (XYZT).However, a second image of the same event is then fed to the network (XYZP) that recovers the information regarding which PMT in a DOM has been hit, but discards its hit time.Since these images only differ in the 4th dimension, i.e. the depth dimension of a convolutional layer, the images can be stacked in this dimension.For example, an XYZP image of dimension 11 × 13 × 18 × 31 can be combined with an XYZT image of dimension 11 × 13 × 18 × N T into a single, stacked XYZ-T/P image of dimension 11 × 13 × 18 × (N T + 31).These images lack the information about the hit time for a specific PMT, if more than one hit has occurred on a DOM in an event.
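The stacking itself amounts to a single concatenation along the last image dimension, as in this sketch (shapes follow the numbers quoted above):

import numpy as np

# Placeholder images: XYZT with N_T = 100 time bins and XYZP with 31 PMT bins.
xyzt = np.zeros((11, 13, 18, 100))
xyzp = np.zeros((11, 13, 18, 31))

# The stacked XYZ-T/P image has shape (11, 13, 18, 131); the last dimension
# is what the three-dimensional convolutional layers see as their depth.
xyz_tp = np.concatenate([xyzt, xyzp], axis=-1)
assert xyz_tp.shape == (11, 13, 18, 131)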
Significant gains in performance for all CNN applications in this work were observed when using this stacking method, as compared to just supplying a single XYZT image.Furthermore, it will be demonstrated that networks with such input limitations can still match or outperform the KM3NeT/ORCA reconstruction algorithms as presented in Ref. [2].
Main network architecture
Four-dimensional images that have been created from simulated events are fed as input to a CNN.All networks that have been designed for a specific task in this work share a common architecture.The CNNs consist of two main components: the convolutional part with the convolutional layers and a small fully-connected network in the end.The convolutional part consists of convolutional blocks, each of them containing a convolutional layer, a batch normalisation layer [24], an activation layer, and, optionally, a dropout [21] or a pooling layer.The batch normalisation layer usually enables a faster and more robust training process of deep neural networks by normalising, scaling and shifting the output of the convolutional layer.The scaling and shifting transform is controlled by learnable parameters during the training process.Recent studies indicate that the batch normalisation method smoothes the optimisation landscape and induces a more predictive and stable behaviour of the gradients, allowing for faster training [25].
In the three-dimensional convolutional layer, the weights are initialised based on a uniform distribution whose variance is calculated according to Ref. [26], while the biases are set to zero.Additionally, the kernel size is three (3×3×3), the stride, i.e. the step size in shifting the convolutional kernel, is one (1 × 1 × 1) and zero-padding (Chap.9 in Ref. [20]) is used.For the batch normalisation layers, the standard parameters from Ref. [24] are used.After this, a ReLU activation layer is added.These three layers are found in all convolutional blocks that are used in this work.Additionally, optional maximum pooling and dropout layers are added.In the case of maximum pooling layers, zero-padding is not applied.A scheme of these convolutional blocks is shown in Tab. 2.
Table 2. Scheme of a convolutional block (layer type: properties):
Convolution: kernel size (3 × 3 × 3), uniform initialisation [26], zero-padding
Batch normalisation: parameters as in Ref. [24]
Activation: ReLU
Maximum pooling: optional, no zero-padding
Dropout: optional

Furthermore, all models use the Adam gradient descent optimiser [27] with standard parameter values, except for the parameter ε, which is increased from its default value of 10^-8 to 10^-1. A larger value of ε results in smaller weight updates after each training step. In our case, it has been observed that the network occasionally did not start to learn, depending on the random initialisation of the parameters. This could be fixed by changing the value of ε to 10^-1 as suggested in Ref. [28], while significant drawbacks, such as a slower training convergence due to smaller weight updates, have not been observed. The weights of the neural network are updated after one batch of images is passed through the network. This is known as mini-batch gradient descent. The batch size is defined as the number of images contained in one batch. The batch size in the training for all presented CNNs is generally set to 64 and the learning rate, i.e. the step size in the Adam algorithm for the update of the weights, is annealed exponentially.
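For illustration, the convolutional block and the optimiser settings described above could be written in Keras as follows (a sketch, not the OrcaNet implementation; the filter number and the 'he_uniform' initialiser are assumptions consistent with the description):

from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters, pool=False, dropout=None):
    """Conv3D + batch normalisation + ReLU, with optional pooling and dropout."""
    x = layers.Conv3D(filters, kernel_size=3, strides=1, padding="same",   # zero-padding
                      kernel_initializer="he_uniform")(x)                  # uniform init, cf. [26]
    x = layers.BatchNormalization()(x)                                     # defaults as in [24]
    x = layers.Activation("relu")(x)
    if pool:
        x = layers.MaxPooling3D(pool_size=2, padding="valid")(x)           # no zero-padding
    if dropout:
        x = layers.Dropout(dropout)(x)
    return x

# Example: one block applied to a stacked XYZ-T/P input image.
inputs = keras.Input(shape=(11, 13, 18, 131))
x = conv_block(inputs, filters=64, pool=True)

# Adam optimiser with epsilon raised from 1e-8 to 1e-1, as described above.
optimizer = keras.optimizers.Adam(epsilon=1e-1)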
The training of all presented networks has been executed at the TinyGPU cluster at the RRZE computing centre. It consists of 32 nodes with 4 GPUs each. The GPUs are either Nvidia GTX1080, GTX1080Ti, RTX2080 Ti, or Tesla V100. All CNNs in this work have been trained with CUDA 10 [29]. In order to train the networks, an open-source software framework called OrcaNet [15] has been developed, which is intended as a high-level application programming interface on top of Keras [14], specifically suited to the large datasets that frequently occur in astroparticle physics.
Background classifier
An essential part of the KM3NeT/ORCA reconstruction pipeline is the background classifier, which discriminates atmospheric muons and random noise from neutrino-induced events.For this purpose, the employed classification algorithm is based on a Random Forest (RF) [6] method.The inputs of the RF are high-level observables (features), mainly determined from likelihood-based track and shower reconstruction algorithms.In this section, an alternative classifier based on CNNs is presented and its performance is compared to the RF classifier.
Image generation
As outlined in Sec.4.3, XYZ-T/P images are used as input to the network.For both event images, i.e. the XYZT and XYZP components of the stacked XYZ-T/P image, a timecut has been defined, as introduced in Sec.4.2.The signal hit time distribution of atmospheric muons, relative to the mean time of all triggered hits, is shown in Fig. 2.This distribution has a larger variance than for neutrino events shown in Fig. 1.The reason is that, on average, atmospheric muons traverse larger parts of the detector compared to the secondary particles of the GeV-scale neutrino interactions of interest.Hence, the timecut window for all event classes has been set conservatively based on atmospheric muon events, resulting in a width of 950 ns.
The timecut for this distribution, as indicated in Fig. 2, has been set to an interval of [−450 ns, +500 ns], keeping more early signal hits than late hits.The reason for this is that events which produce late hits are typically energetic and leave a longer trace in the detector.Hence, they can be better reconstructed than less energetic atmospheric muons that only produce a few hits at the edge of the detector.Consequently, cutting away a few late hits has a small effect compared to discarding early hits from a low-energy atmospheric muon event, which already produces a low number of hits.
The number of time bins is set to 100, such that the XYZT images have dimensions of 11 × 13 × 18 × 100. The resulting time resolution of each time bin is 9.5 ns.
Network architecture
The CNN network architecture for the background classifier is based on the three-dimensional convolutional blocks introduced in Sec. 5, with two additional fully-connected layers, also called dense layers, at the end.The output layer of the CNN is composed of two neurons, such that the network only distinguishes between neutrino and non-neutrino events.An overview of the final network structure is shown in Tab. 3.
Initially, a CNN with three output neurons was tested, so that neutrinos, atmospheric muons and random noise events could be classified separately.However, it was observed that the three-class CNN performed slightly worse than the two-class CNN that distinguishes only neutrino events from all others.This is due to the fact that the network cannot prioritise neutrino versus non-neutrino classification in the three-class case.A mistakenly classified atmospheric muon, e.g., classified as a random noise event, has the same effect on the total loss as an atmospheric muon classified as a neutrino.
No regularisation techniques, such as dropout, are added to the network.The training dataset is large enough, cf.Sec.6.3, so that virtually no overfitting occurs, i.e. the training-phase loss is of the same order as the loss during the validation phase.
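Put together, the two-class background classifier can be sketched as follows (the number of convolutional blocks, the filter counts and the sizes of the two dense layers are illustrative placeholders; only the input shape and the two-neuron softmax output follow the description above):

from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv3D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

inputs = keras.Input(shape=(11, 13, 18, 131))    # stacked XYZ-T/P image, 100 + 31 bins
x = inputs
for filters in (32, 32, 64, 64):                 # illustrative block/filter numbers
    x = conv_block(x, filters)
x = layers.MaxPooling3D(pool_size=2)(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)      # two fully-connected layers
x = layers.Dense(32, activation="relu")(x)
# Two output neurons: neutrino vs. non-neutrino, softmax probabilities.
outputs = layers.Dense(2, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer=keras.optimizers.Adam(epsilon=1e-1),
              loss="categorical_crossentropy", metrics=["accuracy"])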
Preparation of training, validation and test data
For the training of the background classifier, the simulated data from Tab. 1 are used. In order to balance the datasets with respect to their class frequency, one could split the data into 50% neutrino and 50% non-neutrino events (25% atmospheric muons + 25% random noise). Considering that the neutrino sample has the lowest number of events (about 22.8 × 10^6), one would have to remove a significant fraction of the 23.3 × 10^6 generated random noise events and of the 65.1 × 10^6 atmospheric muon events for a class-balanced data splitting. On the other hand, based on the RF background classifier, it can be expected that the final accuracy of the classifier should be close to 99%. Therefore, a balanced splitting of the data into 50% for each class is not necessary in order to avoid a local minimum during the training process. The following data splitting is used: 1/3 neutrino events, 1/3 random noise events and 1/3 atmospheric muon events, and hence the final class balance is 1/3 neutrino events and 2/3 non-neutrino events. Using this data splitting and considering the number of MC events summarised in Tab. 1, the size of the training dataset is larger compared to a 50/50 split. The fractions of different neutrino flavours and interaction types are kept as indicated in Tab. 1.
This rebalanced dataset is then split into 75% training, 2.5% validation, and 22.5% test events, which is a trade-off between maximizing the training dataset and retaining sufficient statistics for performance evaluations.Additionally, the events that have been removed to balance the dataset (mostly atmospheric muons) are added to the test dataset.In total, the training data contains about 42.6 × 10 6 events.
Using a Nvidia Tesla V100 GPU, it takes about a week to fully train this CNN background classifier.The time needed for the training scales more weakly than linear with the number of time bins, which can be increased to improve the time resolution of the input images.
Performance and comparison to Random Forest classifier
The training behaviour of the CNN background classifier is monitored using the training and validation cross-entropy loss [20] of the softmax classifier [20] as a function of the number of epochs, shown in Fig. 3. In order to compare the CNN performance to the RF background classifier, the same test dataset is used. A pre-selection of these events is carried out to reduce the fraction of atmospheric muon and random noise events to a few percent of all triggered events. For atmospheric muons this is achieved by selecting only events for which the reconstructed particle direction is below the horizon, i.e. up-going events in the detector. Furthermore, the events must have been reconstructed with high quality by the KM3NeT/ORCA maximum-likelihood reconstruction algorithms for either track-like or shower-like events. Finally, events reconstructed by the maximum-likelihood-based algorithms as originating from outside the instrumented volume of the detector are removed.
In total, the pre-selected test dataset consists of about 3.3 × 10 6 neutrino, about 6 × 10 4 atmospheric muon and about 4 × 10 4 random noise events.This selection is used for all of the following performance evaluations.
In order to get a first impression of the CNN-based background classifier, the distribution of the neutrino class probability is investigated for all three event classes (neutrinos, atmospheric muons, random noise), cf. Fig. 4. Based on the results shown in Fig. 4, it can be seen that the rate of random noise events misclassified as neutrino-induced events is significantly lower than that of atmospheric muons. Using the predicted probability for an event to be classified as neutrino-induced, a threshold value p can be set to remove background events.
In order to quantify the performance of the CNN background classifier, the metrics shown below are used to investigate the fraction of remaining atmospheric muon and random noise events for a given threshold value p. The atmospheric muon or random noise contamination, and the neutrino efficiency, are defined as
C_µ/RN(p) = N_µ/RN(p) / N_total(p),    ν_eff(p) = N_ν(p) / N_ν,total.
Here, N_µ/RN(p) is the number of atmospheric muon or random noise events whose probability to be a neutrino-induced event is higher than p, while N_total(p) is the total number of events after the same cut on p. Regarding the neutrino efficiency, N_ν(p) is the number of neutrinos in the dataset whose neutrino probability is greater than the threshold value p, while N_ν,total is the number of neutrinos in the dataset without applying any threshold. Based on the results shown in Fig. 5 for the neutrino efficiency ν_eff as a function of the residual atmospheric muon contamination C_µ, it can be concluded that the CNN background classifier yields a higher neutrino efficiency, of the order of a few percent, for the same muon contamination compared to the RF background classifier.
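These quantities can be computed directly from the predicted neutrino-class probabilities, for example as in the following sketch (array names are placeholders):

import numpy as np

def contamination_and_efficiency(p_nu, is_nu, is_muon, threshold):
    """Muon contamination and neutrino efficiency for a cut p_nu > threshold.

    p_nu:    predicted probability of each event to be neutrino-induced
    is_nu:   boolean array, True for true neutrino events
    is_muon: boolean array, True for true atmospheric muon events
    """
    selected = p_nu > threshold
    n_total = selected.sum()
    # C_mu(p) = N_mu(p) / N_total(p)
    c_mu = (selected & is_muon).sum() / n_total if n_total else 0.0
    # nu_eff(p) = N_nu(p) / N_nu,total
    nu_eff = (selected & is_nu).sum() / is_nu.sum()
    return c_mu, nu_eff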
Comparing different neutrino energy ranges, 1 GeV to 5 GeV in Fig. 6 and 10 GeV to 20 GeV in Fig. 7, it can be seen that the performance gap between the CNN and the RF classifier widens with increasing neutrino energy. A possible explanation may be that small details in the distribution of the measured hits increase in importance if the neutrino events are less energetic and thus produce fewer hits. Then, the limitations of the input images, which do not contain the full information about an event, may become more relevant compared to events with higher energies and many signal hits.
For random noise events, the performance of the CNN and the RF classifier are comparable.In particular, both methods achieve about 99% neutrino efficiency at 1% random noise event contamination.As expected, the suppression of atmospheric muon events is significantly more difficult.
Event topology classifier
Similar to the background classifier, a RF is used in the current KM3NeT/ORCA analysis pipeline to separate track-like from shower-like neutrino events.This section introduces a CNN-based event topology classifier that distinguishes between these two event types.
Image generation
The input event images for the CNN-based track-shower classifier are similar to the ones of the background classifier introduced in Sec.6.1, i.e. the input also consists of XYZ-T/P images.The timecut for the hit selection is tighter with respect to the background classifier.Since background events have already been rejected by the background classifier, the data presented to the track-shower classifier are mostly neutrino interactions which produce secondary particles that, on average, traverse smaller parts of the detector compared to atmospheric muon events.The event class that shows the broadest signal hit time distribution is the class of ν CC µ events due to the outgoing muon.Therefore, the timecut of the track-shower classifier is set based on these events.The time distribution of signal hits relative to the mean time of the triggered hits for ν CC µ events is shown in Fig. 1.Based on this distribution the timecut is set to an interval of [−250 ns, +500 ns], as indicated by the dashed, black lines in Fig. 1.
Since the timecut interval is smaller than the one used for the background classifier, fewer time bins (60) are used for the time dimension. This implies a reduction of the time resolution of about 30% with respect to the background classifier, i.e. 12.5 ns per time bin. The light emission profile of hadronic and electromagnetic showers in the GeV range has an extension of at most a few metres, see Fig. 70 in Ref. [2]. Since a muon of comparable energy induces the emission of Cherenkov radiation along a significantly greater path length, the reduced time binning still provides a sufficient resolution to distinguish a shower-like from a track-like event topology, while significantly speeding up the training of the CNN. Consequently, the XYZT images now have 11 × 13 × 18 × 60 pixels.
Network architecture
The network architecture, as depicted in Tab. 4, is the same as in Sec.6.2, except for additional dropout layers.Since the size of the training dataset is significantly smaller than that for the background classifier, overfitting can be observed without any regularisation.Thus, dropout layers with a rate of δ = 0.1 are added in every convolutional block and also in between the last two fully connected layers.
Preparation of training, validation and test data
In order to train the CNN track-shower classifier, only simulated neutrino events are used.The total neutrino dataset is rebalanced such that 50% of the events are track-like (ν CC µ ) and 50% are shower-like.The shower class consists of 50% ν CC e and 50% ν NC e .Additionally, the dataset has been balanced in such a way that the ratio of track-like to shower-like events is always one, independent of neutrino energy.
The rebalanced dataset is then split into three datasets with 70% training, 6% validation, and 24% test events.In total, the training dataset contains about 1.3 × 10 7 events.
Using a Nvidia Tesla V100 GPU, it takes about one and a half weeks to fully train this CNN track-shower classifier.
Performance and comparison to Random Forest classifier
The evolution of the cross-entropy loss [20] during the training is shown in Fig. 8.Even though the validation cross-entropy loss is within the fluctuations of the training loss curve at the end of the training, some minor overfitting occurs.The reason is that during the application of the trained network on the validation dataset, no neurons are dropped by the dropout layers, contrary to the training phase.Therefore, the validation loss should be lower than the average training loss, if no overfitting is observed.This can be seen by investigating the training and validation loss curves in the earlier stages of the training process, e.g. between epoch 0 and 3. Here, the validation loss is typically found at the bottom of the training loss curve.
The binned probability distribution for all used neutrino events with energies in the range of 1 GeV to 40 GeV to be classified as track-like is shown in Fig. 9. This energy range has been chosen since the classification performance saturates at about 40 GeV, as can be seen in Fig. 10 and further discussed below. The classified neutrino events have been selected according to the criteria described in Sec. 6.4. About 25% of ν CC µ and ν̄ CC µ events are identified as track-like events with a probability close to one. The correct identification of muon tracks increases with their length and hence with their energy. As the outgoing muon carries on average more energy in antineutrino than in neutrino CC interactions, ν̄ CC µ events have a higher probability to be identified as track-like. The top panel of Fig. 10 shows the fraction of events classified as track-like as a function of the neutrino energy in the range of 1 GeV to 40 GeV. An event is accepted if its CNN probability to be a track-like event is 0.5 or above. The fraction of correctly classified ν CC µ and ν̄ CC µ events rises strongly with neutrino energy. The reason is that the spatial and temporal distribution of hits induced by low-energy ν CC µ events is similar to that of shower-like events, since the outgoing muon stops at these energies after propagating only a few meters in the detector. Comparing ν CC e and ν̄ CC e events with energies below 10 GeV, a difference in the fraction classified as track-like by the CNN is visible; this is again due to the fact that the outgoing charged lepton carries on average more energy in antineutrino CC than in neutrino CC interactions. On the other hand, for energies below 10 GeV, NC interactions have a lower misclassification rate compared to ν CC e interactions, since there is no charged secondary lepton that could mimic the signature of a low-energy muon as induced in ν CC µ events. For energies above 10 GeV the opposite is true, since a localised and bright light source disfavours a ν CC µ event. A measure for the separation power of the classifier is the difference between the fractions of recognised track-like events, f_track, for ν CC µ and for the shower-like ν CC e / ν NC e channels, shown as a function of neutrino energy in the bottom panel of Fig. 10.
(Caption of Fig. 10: Fraction of events classified as track-like, f_track, with a CNN track probability P_i,track > 0.5, for different interaction channels (top panel) versus the true MC neutrino energy. The relative improvement ∆f, cf. Eq. 7.1, of the CNN with respect to the RF classifier in discriminating between ν CC µ events and the shower-like ν CC e / ν NC e channels is shown in the bottom panel. The contributions of neutrinos and antineutrinos are averaged for each flavour and interaction. The discrimination power is defined as the difference between the fractions of events classified as track-like and shower-like in the top panel.)
The CNN performs better than the RF in the energy range of roughly 3 GeV to 10 GeV, while the RF performs better than the CNN for energies below 2 GeV. As explained at the end of Sec. 6.4, this could again be due to limitations in the input images that become more relevant for events with only a few hits. Above 15 GeV the separation power of the two classifiers is roughly equal.
Since the performance comparison shown in Fig. 10 depends on the threshold value that is set for an event to be classified as track-like, another comparison metric independent of the threshold value is introduced. P^νµ_track is defined as the probability that a classifier recognises a ν CC µ event as track-like, and similarly for ν CC e. Based on Fig. 9, the ability of a classifier to separate track-like and shower-like events can be estimated by calculating the energy-dependent correlation factor c(E) of the two binned probability distributions P^νµ_i,track(E) and P^νe_i,track(E). The separability S(E) is then given by
S(E) = 1 − c(E),
where the index i in P^νµ/νe_i,track denotes the i-th bin of the distribution P^νµ/νe_track as shown in Fig. 9. The separability S(E) as a function of binned neutrino energy for the CNN and RF classifier is shown in Fig. 11. It is evident that the CNN-based classifier performs better by an absolute value of up to about 20% for energies below 10 GeV. For these energies, both the RF- and the CNN-based classifier show a better separability of ν CC µ from ν NC e than from ν CC e events, although this loss in separability is less pronounced for the RF classifier and reverses above 7 GeV, i.e. at slightly lower energies than for the CNN classifier.
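Assuming that the separability is one minus the correlation factor of the two binned distributions, as reconstructed above, it can be computed per energy bin along the following lines (a sketch; the Pearson correlation coefficient is used as one possible choice of correlation factor, and the binning is illustrative):

import numpy as np

def separability(p_track_numu, p_track_nue, n_bins=40):
    """Separability S = 1 - c from binned track-probability distributions.

    p_track_numu / p_track_nue: CNN track probabilities of nu_mu CC and
    nu_e CC events falling into one true-energy bin.
    """
    h_mu, edges = np.histogram(p_track_numu, bins=n_bins, range=(0, 1), density=True)
    h_e, _ = np.histogram(p_track_nue, bins=edges, density=True)
    # Correlation factor c of the two binned distributions.
    c = np.corrcoef(h_mu, h_e)[0, 1]
    return 1.0 - c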
Event regression
This section introduces the CNN-based event regressor.The designed CNN now reconstructs continuous properties of the interacting neutrinos such as the direction, energy, the position of the interaction, i.e. the vertex, and an estimate for the uncertainty of each of these inferred observables.In the reconstruction pipeline for KM3NeT/ORCA as described in Ref. [2], these reconstruction tasks are handled by two maximum-likelihood-based algorithms optimised for track-like and shower-like events, respectively.
The major difference of the CNN for the regression task with respect to the classifiers in the previous sections is that its output is a continuous instead of a categorical variable.
Image generation
For the event regressor, the event images are created in a similar way to the image generation for the background and the track-shower classifier.Thus, both XYZT and XYZP images are produced and then stacked in the last dimension.Since the background classifier has already been applied to the data, at this final stage of the reconstruction pipeline neutrinos remain predominantly as input to the event regressor.Hence, exactly the same images as specified in Sec.7 are used as input for the CNN event regressor.
Network architecture and loss functions
The network architecture of the CNN for the event regressor is similar to the one used for the background and track-shower classifier. However, for this work, it has been found that dropout layers in the CNN event regressor increase fluctuations in the training loss, even with a small dropout rate of δ = 0.05. Furthermore, the validation loss of the event regressor CNNs with dropout layers has proven to be significantly worse than for CNNs without dropout. Thus, no dropout layers are included. The fully-connected layers after the convolutional layers are also different. The properties that should be reconstructed by the network are the energy, the direction, and the vertex position of the initial neutrino interaction. For the direction and the vertex, the CNN output consists of two arrays with three elements each, containing the components of the vertex position vector and the direction cosines, respectively. Consequently, the output of the network consists of seven reconstructed floating-point numbers, denoted as the vector ξ_reco: one component for the energy, three for the direction and three for the vertex position. The final fully-connected output layer therefore consists of seven neurons. The CNN architecture up to this point is shown in Tab. 5.
Since the aim of this network is to reconstruct continuous variables, a categorical cross-entropy loss [20], as used for the background and track-shower classifier, cannot be employed. Instead, regression loss functions such as the mean squared error (MSE) or the mean absolute error (MAE) [20] can be used, defined as
MSE = (1/n) Σ_i (y_true,i − y_reco,i)^2,    MAE = (1/n) Σ_i |y_true,i − y_reco,i|.
(Caption of Tab. 5: Network structure of the CNN event regressor with XYZ-T/P input, but without the additional fully-connected network for the uncertainty estimation; the final layer is a dense layer with linear activation and 7 output neurons. In the convolutional blocks and the last fully-connected layers no dropout layer is used.)
The main difference between the MSE and the MAE loss is that the MSE loss penalises outlier events that are badly reconstructed with a significantly larger loss value compared to the MAE loss. Since many detectable events are not fully contained in the KM3NeT/ORCA detector, these events are on average reconstructed more poorly. Thus, the MAE loss is used for the energy, direction and vertex estimation, so that the network focuses less on outlier events. Using a neural network, it is also possible to estimate the uncertainties on the seven components of ξ_reco. One possibility is to let the network predict the absolute residual of the MAE loss. Thus, one needs an additional neuron that yields the estimated uncertainty, σ_reco, for each of the reconstructed components of ξ_reco. Consequently, the loss function that measures the quality of the estimated uncertainty value is chosen such that the network learns to estimate the average absolute residual. Assuming that the residuals are normally distributed, the estimated uncertainty value σ_reco can be converted to a standard deviation by multiplying by √(π/2) [30]. In general, it can be expected that features that are learned by the convolutional part of the network and that are useful for the reconstruction are also crucial for the inference of the uncertainty on the reconstruction. Therefore, both inference tasks are handled by using the same CNN. The CNN, however, must be prevented from focusing too much on the uncertainty estimation in the learning process, since its main goal is the prediction of ξ_reco and not of the uncertainties. For this purpose, a second fully-connected network with three layers is added after the convolutional output, but the gradient update during the back-propagation stage is stopped after the first fully-connected layer of this sub-network, as shown in Fig. 12. The specific structure of the fully-connected network for the uncertainty estimation is shown in Tab. 6.
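A possible TensorFlow implementation of this uncertainty training scheme is sketched below. The exact form of the loss is not written out in the text; the sketch assumes the common choice of an MAE between the predicted uncertainty and the absolute residual of the main regression output, together with a stopped gradient so that the uncertainty branch cannot alter ξ_reco.

```python
import tensorflow as tf

def uncertainty_loss(xi_true, xi_reco, sigma_reco):
    """Assumed loss for the uncertainty outputs: |sigma_reco - |xi_reco - xi_true||.

    The residual is detached from the graph (stop_gradient), mirroring the idea
    that the uncertainty estimation must not distort the main reconstruction.
    """
    residual = tf.stop_gradient(tf.abs(xi_reco - xi_true))
    return tf.reduce_mean(tf.abs(sigma_reco - residual))

# With this loss the network learns sigma_reco ~ E[|xi_reco - xi_true|]; for
# normally distributed residuals this converts to a standard deviation via
# sigma_std = sigma_reco * sqrt(pi / 2).
```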
Preparation of training, validation and test data
In order to train the CNN event regressor, again only simulated neutrino events are used. Similar to the track-shower classifier, the dataset consists of 50% track-like events (ν_μ^CC) and 50% shower-like events (25% ν_e^CC, 25% ν_e^NC). Additionally, the total dataset has been split again into a training, validation and test dataset. As for the track-shower classifier, the training data makes up 70%, the validation dataset 6% and the test dataset 24% of the total rebalanced dataset. The track and shower classes have not been balanced to be independent of neutrino energy, cf. Sec. 7.3, in order to get a larger training dataset compared to the track-shower classifier. In total, the training dataset then contains about 14 × 10^6 events. The MC energy of ν_e^NC events is set to the visible energy during the training, since the network cannot infer how much energy is carried away by the outgoing neutrino in the neutral current interaction.
Using a Nvidia Tesla V100 GPU, it takes about one and a half weeks to fully train this CNN event regressor.
Loss functions and loss weights
Before training the CNN event regressor, suitable loss weights need to be defined in order to scale the losses to the same magnitude.For example, the neutrino energy in the dataset ranges from 1 GeV to 100 GeV, while the components of the direction vector range from −1 to 1. Thus, the impact of the energy loss would be overwhelmingly large compared to the loss that is contributed by the direction reconstruction.Consequently, one needs to scale the individual losses of the components of ì ξ reco and their estimated uncertainties with a loss weight, such that all single losses are of the same order of magnitude at the start of the training.However, it has been observed that the CNN does not start to learn during the first few epochs of the training, if the individual losses of the energy, direction and vertex are of the same scale.The reason is that direction and vertex contribute six out of seven variables to be learned so that their impact on the total loss is dominant.Hence, any improvement with respect to the energy reconstruction, i.e. in the discrimination of background and signal hits, is marginalised by random statistical fluctuations in the loss for the direction or vertex components.
It is expected that the first concept the neural network will learn during the training is to distinguish signal and background hits.At the same time, the recognition of signal hits is a prerequisite for the direction and vertex reconstruction.Thus, the loss weights have been set in such a way that the scale of the energy loss at the start of the training is about two times larger than the scale of the three combined directional losses.For the same reason, the loss weights of the vertex reconstruction are set in such a way that they have the same scale as the directional losses.Indeed, it could be observed that the energy loss always converges significantly earlier during the training than the loss for the direction or vertex.Since the scale of the individual losses changes dramatically after the network has started to learn a property such as the energy or the direction, the loss weights need to be tuned during the training.
As a general strategy, it has proven useful to do this after the energy loss has sufficiently converged.How often and by how much the individual loss weights are retuned is a parameter optimisation process.For this work, the loss weights of the direction and vertex components have been increased by a factor of three after the energy loss convergence, and, at the end of the training, these loss weights have been doubled again to investigate if further improvements would occur, which was not the case.Increasing the weights for the direction and vertex losses did not significantly worsen the energy loss.
For the loss weights of the uncertainty outputs none of this matters, since their gradients are stopped after the first fully-connected layer of the uncertainty reconstruction sub-network.Thus, the loss weights for these variables just need to be set such that the scale of the individual losses is about the same.
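As a purely illustrative sketch of such a weighting scheme, a multi-output Keras model (with named outputs for energy, direction and vertex, and suitably structured targets) could be compiled with per-output loss weights and then re-compiled with rescaled weights once the energy loss has converged. The numerical values below are placeholders, not the weights used for the actual KM3NeT/ORCA training.

```python
# Initial weights (placeholder values): the energy term starts at roughly twice
# the scale of the three combined direction terms; the vertex terms are set to
# the same scale as the direction terms.
loss_weights = {"energy": 1.0, "direction": 0.17, "vertex": 0.17}

model.compile(optimizer="adam",
              loss={"energy": "mae", "direction": "mae", "vertex": "mae"},
              loss_weights=loss_weights)
# ... train until the energy loss has converged ...

# Then increase the direction/vertex weights (e.g. by a factor of three),
# re-compile and resume training.
loss_weights = {"energy": 1.0, "direction": 0.5, "vertex": 0.5}
model.compile(optimizer="adam",
              loss={"energy": "mae", "direction": "mae", "vertex": "mae"},
              loss_weights=loss_weights)
```

The uncertainty outputs would be weighted separately; as stated above, only the relative scale of their individual losses matters, since their gradients do not propagate into the convolutional part of the network.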
Energy reconstruction performance
In this section the neutrino energy reconstruction performance of the CNN event regressor is investigated.The study and the comparisons to the maximum-likelihood-based reconstruction algorithm, described in Ref. [2] and called SOR in the following, focus on shower-like events.
Events have been selected according to the criteria detailed in Sec.6.4.In particular, only events reconstructed as up-going by the SOR algorithm are selected for the comparison and, additionally, the vertex position, direction, and energy of true ν CC e events must have been reconstructed with high confidence by SOR.
The reconstructed energy versus the true MC energy for reconstructed ν CC e events is shown in Fig. 13.As can be seen in the figure, the energy reconstruction tends to underestimate the true MC energy of the neutrino, in particular for energies above 30 GeV.This effect has been found to be even more evident for SOR, so that a correction function has been applied to the reconstructed energy.This correction function has been derived from MC simulations and depends on several reconstructed quantities, such as the energy, the zenith angle, and the interaction inelasticity (Sec.D.1.in Ref. [31]).
So as to allow for a comparison between the SOR algorithm and the CNN event regressor, a similar but simpler correction function, only using the reconstructed energy, has been derived and applied to the energy reconstruction of the CNN event regressor for ν_e^CC events. The resulting distributions can be compared in Fig. 14. In order to allow for quantitative comparisons, the following two performance metrics are used: the median relative error (MRE) and the relative standard deviation (RSD). Both metrics are calculated for each MC energy bin. The MRE is defined as the median of the relative error |E_reco − E_true| / E_true, where E_reco (E_true) stands for the reconstructed (true MC) energy.
For the RSD, the standard deviation σ of the reconstructed energy distribution for each MC true energy bin is calculated and then divided by the energy of the bin. For shower-like events, the energy reconstruction performance is intrinsically limited by light yield fluctuations of the hadronic particle cascade [5]. Assuming that both the CNN event regressor and the SOR algorithm come close to this intrinsic resolution limit, it is to be expected that the MRE and RSD performances for both methods agree very well, as can be seen in Fig. 15 (left) and Fig. 15 (right), respectively. For the MRE, the CNN shows lower values than the SOR algorithm for energies close to the boundaries of the investigated energy range. This is due to the fact that the CNN learns the limits of the simulated neutrino energy spectrum, i.e. 1 GeV and 100 GeV. Therefore, the CNN, contrary to the SOR algorithm, never reconstructs energies outside of this energy range, which biases the MRE for the CNN towards lower values. For the high-energy boundary, this effect can be easily avoided by extending the simulated energy range. For the low-energy boundary, this is not easily possible and biases the comparison with the SOR algorithm, albeit only in an energy range that is close to the detection energy threshold of KM3NeT/ORCA. On the other hand, it is notable that the RSD for the CNN event regressor improves by a few percent with respect to the SOR algorithm also for energies far from the boundaries. In the case of ν_e^NC events, the energy reconstruction performance is limited by the non-detectable energy that is carried away by the outgoing neutrino. For ν_e^NC events, as for charged-current events, the energy reconstruction performance of the CNN event regressor and SOR is similar.
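The two metrics can be computed from the reconstructed and true energies as in the following sketch; using the bin centre for the RSD normalisation and the absolute relative error for the MRE are assumptions, since the text only describes the metrics in words.

```python
import numpy as np

def mre_rsd_per_bin(e_reco, e_true, bin_edges):
    """Median relative error and relative standard deviation per true-energy bin."""
    mre, rsd = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (e_true >= lo) & (e_true < hi)
        if not np.any(sel):
            mre.append(np.nan)
            rsd.append(np.nan)
            continue
        rel_err = np.abs(e_reco[sel] - e_true[sel]) / e_true[sel]
        mre.append(np.median(rel_err))
        rsd.append(np.std(e_reco[sel]) / (0.5 * (lo + hi)))  # sigma / bin energy
    return np.array(mre), np.array(rsd)
```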
Direction reconstruction performance
The direction reconstruction performance of the CNN event regressor is investigated in the same way as for the energy reconstruction.The X, Y and Z components of the reconstructed direction vector are converted to spherical azimuth and zenith coordinates.Since the DUs, cf.Sec.2.1, are approximately vertical structures, the azimuth angle is defined by the projection of the direction vector onto a plane roughly perpendicular to the DUs, while the zenith roughly corresponds to the angle enclosed by the direction vector pointing back to the particle's source and the DUs.
The median absolute error (ME) is defined as the median of the distribution of |ξ_reco − ξ_true|. In Fig. 16 the ME for the azimuth and the zenith angle of the reconstructed direction is compared for the CNN event regressor and SOR. The comparison is done for ν_e^CC events and as a function of the true MC neutrino energy. For energies below 3 GeV, the ME is very similar for both reconstructions, while the difference in performance increases in favour of the SOR algorithm with increasing energy. At 20 GeV, the ME for the reconstruction of the neutrino zenith (azimuth) angle is about 15% (25%) larger for the CNN event regressor than for SOR. The reason for this worse performance of the CNN might be connected to the rather coarse time binning chosen for the image generation. The corresponding comparison for ν_e^NC events shows the same general trend, but the performance gap is smaller since the reconstruction is intrinsically limited by the interaction kinematics.
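The conversion from the three reconstructed direction cosines to the two angles, and the per-sample median absolute error, could look as follows. The sign and axis conventions are assumptions made for illustration; only the general geometry (azimuth in the horizontal plane, zenith with respect to the vertical DU axis) follows from the text.

```python
import numpy as np

def to_zenith_azimuth(direction):
    """direction: array of shape (n_events, 3) with direction cosines (dx, dy, dz)."""
    dx, dy, dz = direction[:, 0], direction[:, 1], direction[:, 2]
    zenith = np.arccos(np.clip(dz, -1.0, 1.0))   # angle with respect to the vertical axis
    azimuth = np.arctan2(dy, dx)                 # angle in the horizontal plane
    return zenith, azimuth

def median_absolute_error(angle_reco, angle_true):
    return np.median(np.abs(angle_reco - angle_true))
```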
Vertex reconstruction performance
For the comparison of the vertex reconstruction performance, the distance between the reconstructed and the true MC vertex point is investigated based on the longitudinal and perpendicular components of the residual vector with respect to the direction of the incoming neutrino. The CNN event regressor is trained such that it reconstructs the neutrino interaction vertex, while the SOR algorithm reconstructs the brightest point of the Cherenkov light emission, which is shifted by an energy-dependent displacement of the order of meters. The perpendicular versus longitudinal distance of the reconstructed vertex with respect to the true ν_e^CC interaction point is shown in Fig. 17 (left) for the CNN event regressor, while the right plot shows the same distribution for the SOR algorithm. The above-mentioned displaced reconstruction of the brightest emission point can be seen for the SOR algorithm in Fig. 17 (right). The CNN-based vertex resolution is of the order of one meter, with a small offset in the perpendicular direction, in the investigated energy range up to 100 GeV. For the SOR algorithm, the resolution in the longitudinal direction is similar, while it is better in the perpendicular direction. A correlation of longitudinal and perpendicular errors can be seen for the SOR algorithm, which is absent for the CNN reconstruction.
Error estimation
The CNN is used to reconstruct the regression uncertainty of all reconstructed quantities, as explained in Sec. 8.2. For the following performance investigation the event pre-selection, based on the current KM3NeT/ORCA reconstruction pipeline, is not applied, i.e. all triggered events of the simulated test dataset are used. In order to evaluate the goodness of the uncertainty reconstruction for any event quantity, e.g. energy or direction, the events are binned according to their estimated uncertainty value, σ_reco. For the event distribution in each bin, the standard deviation, σ_true, of the residuals of the reconstructed and true MC values |ξ_reco − ξ_true| is calculated.
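This validation procedure amounts to binning the events by their predicted uncertainty and comparing the spread of the true residuals in each bin, for example:

```python
import numpy as np

def sigma_true_per_sigma_reco_bin(xi_reco, xi_true, sigma_reco, sigma_bins):
    """For events binned in the estimated uncertainty sigma_reco, return the
    standard deviation of the residuals (sigma_true) in each bin."""
    residuals = xi_reco - xi_true
    sigma_true = []
    for lo, hi in zip(sigma_bins[:-1], sigma_bins[1:]):
        sel = (sigma_reco >= lo) & (sigma_reco < hi)
        sigma_true.append(np.std(residuals[sel]) if np.any(sel) else np.nan)
    return np.array(sigma_true)
```

A well-calibrated estimator yields points close to the diagonal sigma_true ≈ sigma_reco, once the conversion factor discussed above is taken into account.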
The distribution of σ true versus σ reco is shown in Fig. 18 (left) for the energy reconstruction of ν CC e events.Even though σ reco is estimated with a significant fraction of events that were discarded previously by the criteria applied for the energy reconstruction in Sec.8.5, the CNN event regressor estimation of the reconstruction uncertainty is clearly correlated to the true uncertainty.As can be seen, the energy uncertainty is slightly underestimated for events that are difficult to reconstruct and therefore show large estimated uncertainties.
The distribution of σ_true versus σ_reco is shown in Fig. 18 (right) for the Z component of the reconstructed direction vector in ν_e^CC events. In the range of 0.15 < σ_reco < 0.30 a slight underestimation in the uncertainty reconstruction can be seen. Since there is a clear correlation between the predicted and the true uncertainty, it is possible to discriminate badly reconstructed events based on the estimated uncertainty value σ_reco. In order to demonstrate this, the zenith-angle reconstruction for shower-like events is investigated. The standard deviation of the residual distribution of the reconstructed cosine of the zenith angle for ν_e^CC events versus the fraction of events discarded by the CNN uncertainty estimator is depicted in Fig. 19. For all three energy intervals depicted, one can see that the zenith-angle resolution improves significantly when the events that have the largest reconstruction errors, as predicted by the CNN, are discarded.
Figure 1.Time distribution of ν CCµ signal hits relative to the mean time of the triggered hits calculated for each individual event.For this distribution, about 3000 ν CC µ events in the energy range from 3 GeV to 100 GeV have been used, cf.Tab. 1.The dashed line in black indicates a possible timecut for the time range used to generate the CNN input images.
Figure 2. Time distribution of signal hits in atmospheric muon events relative to the mean time of the triggered hits calculated for each individual event.For this distribution, about 3000 atmospheric muon events have been used.The dashed black line indicates the timecut, set to an interval of [−450 ns, +500 ns], that has been applied for the generation of the background classifier images.
Figure 3. Training and validation cross-entropy loss of the background classifier during the training.Each data point of the training loss curve is averaged over 250 batches, i.e. 250 × 64 event images.
Figure 4. Distribution of the CNN neutrino probability for pre-selected atmospheric muon (blue), random noise (red), and neutrino (brown) events.All three distributions have been normalised to the area under each histogram.
Figure 5. Neutrino efficiency, ν eff , versus atmospheric muon event contamination, C µ , weighted with the Honda atmospheric neutrino flux [17] model.The CNN (RF) performance is depicted in blue (orange).All neutrinos from the pre-selected test dataset are included (1 GeV to 100 GeV).
Figure 6.Neutrino efficiency, ν eff , versus atmospheric muon event contamination, C µ , weighted with the Honda atmospheric neutrino flux [17] model.The CNN (RF) performance is depicted in blue (orange).Only neutrinos with a MC energy in the range of 1 GeV to 5 GeV have been used.
Figure 8. Training (grey lines) and validation (black circles) cross-entropy loss of the track-shower classifier during the training process.Each line element of the training loss represents an average over 250 batches, i.e. 250 × 64 event images.
Figure10.Fraction of events classified as track-like, f track , with a CNN track probability P i,track > 0.5, for different interaction channels (top panel) versus the true MC neutrino energy.The relative improvement ∆f, cf.Eq. 7.1, of the CNN with respect to the RF classifier in discriminating between ν CC µ events and shower-like ν CC e / ν NC e channels is shown in the bottom panel.Here, the contributions of neutrinos and antineutrinos are averaged for each flavour and interaction.The discrimination power is defined as the difference between the fractions of events classified as track-like and shower-like in the top panel.
Figure 11.Separability S(E) = 1 − c(E) based on ν CC µ and ν CC e (ν CC µ and ν NC e ) events as a function of true MC neutrino energy for the CNN represented by blue (brown) markers and the RF represented by red (green) markers (top panel).Only ν (not ν) have been used for the comparison.The relative improvement of the CNN with respect to the RF classifier in discriminating between ν CC µ events and shower-like ν CC/NC e channels is shown in the bottom panel.The error bars indicate statistical errors only.
Figure 12. Scheme of the CNN architecture used for the event regressor. The properties of the convolutional layers and of the fully-connected network for the reconstruction of ξ_reco are depicted in Tab. 5 and in Tab. 6.
Figure 13.Energy reconstructed by the CNN regressor versus true MC neutrino energy for reconstructed ν CC e events.The distribution has been normalised to unity in each true energy bin.
Figure 14.Reconstructed and corrected energy versus true MC neutrino energy for reconstructed ν CC e events.The distribution has been normalised to unity in each true energy bin.Left: CNN event regressor.Right: Maximum-likelihood-based reconstruction algorithm SOR for shower-like events as described in Ref. [2].
Figure 15.Left: Median relative error (MRE) on the reconstructed energy versus true MC energy for ν CC e events for the CNN event regressor (blue) and SOR (orange).Right: Relative standard deviation (RSD) on the reconstructed energy versus true MC energy for the CNN event regressor (blue) and SOR (orange).
Figure 16.Median absolute error of the zenith and azimuth angle reconstruction for the CNN event regressor and the shower reconstruction algorithm SOR versus true MC energy for ν CC e events.
Figure 17.Reconstructed neutrino interaction point with respect to the true MC vertex for ν CC e events.Shown is the perpendicular component versus the longitudinal component of the distance vector.Left: CNN event regressor; Right: SOR algorithm.
Figure 18.Standard deviation of the difference of true MC and reconstructed values versus the uncertainty estimated by the CNN event regressor.Left: for the energy reconstruction of ν CC e events.Right: for the Z component of the reconstructed direction vector for ν CC e events.
Table 1. List of Monte Carlo simulations for a KM3NeT/ORCA detector composed of 115 DUs. The first column reports the simulated event type. The neutrino simulations comprise neutrino and antineutrino interactions of the indicated type. The second column specifies the power law used to simulate the energy spectrum of the interacting neutrinos. A reweighting scheme is used in this work, where appropriate, to simulate an atmospheric neutrino flux model.
Table 2. Scheme of a convolutional block used for all CNNs defined in this work.
Table 3. Network structure of the background classifier's three-dimensional CNN model with XYZ-T/P input; the dataset of Tab. 1 is split into a training, validation and test dataset. No dropout is used due to the large training dataset of 42.6 × 10^6 training events.
Table 4. Network structure of the track-shower classifier's three-dimensional CNN model with XYZ-T/P input. The symbol δ specifies the dropout rate used in the respective convolutional block, cf. Sec. 3.
Table 6. Layout of the fully-connected uncertainty reconstruction network, which is part of the event regressor CNN with XYZ-T/P input. The input to this dense network is the flattened output of the last convolutional layer of the main CNN. | 2022-12-11T07:20:24.913Z | 2020-04-17T00:00:00.000 | {
"year": 2020,
"sha1": "1e4e8aa1e5fdd2201136bb209e9abeba778c8c79",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1748-0221/15/10/P10005/pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e6fec041d6e32dad5f7a84c3c4c1c4e69c8a9d3b",
"s2fieldsofstudy": [
"Physics",
"Computer Science"
],
"extfieldsofstudy": []
} |
260918508 | pes2o/s2orc | v3-fos-license | Studying Dynamical Characteristics of Oxygen Saturation Variability Signals Using Haar Wavelet
An aim of the analysis of biomedical signals such as heart rate variability signals, brain signals, oxygen saturation variability (OSV) signals, etc., is for the design and development of tools to extract information about the underlying complexity of physiological systems, to detect physiological states, monitor health conditions over time, or predict pathological conditions. Entropy-based complexity measures are commonly used to quantify the complexity of biomedical signals; however novel complexity measures need to be explored in the context of biomedical signal classification. In this work, we present a novel technique that used Haar wavelets to analyze the complexity of OSV signals of subjects during COVID-19 infection and after recovery. The data used to evaluate the performance of the proposed algorithms comprised recordings of OSV signals from 44 COVID-19 patients during illness and after recovery. The performance of the proposed technique was compared with four, scale-based entropy measures: multiscale entropy (MSE); multiscale permutation entropy (MPE); multiscale fuzzy entropy (MFE); multiscale amplitude-aware permutation entropy (MAMPE). Preliminary results of the pilot study revealed that the proposed algorithm outperformed MSE, MPE, MFE, and MMAPE in terms of better accuracy and time efficiency for separating during and after recovery the OSV signals of COVID-19 subjects. Further studies are needed to evaluate the potential of the proposed algorithm for large datasets and in the context of other biomedical signal classifications.
Introduction
The human body is comprised of several complex systems composed of the functions of several organs that work coherently in a healthy human. Different functions or malfunctions of organs caused by disease affect several vital signs, which are very helpful in diagnosing the disease [1,2]. Complexity analysis of medical signals involves the quantitative assessment and characterization of the intricate patterns and dynamics present in physiological signals [3,4]. It aims to unveil hidden information and provide insights into underlying physiological processes and their changes in various health conditions. Complexity analysis offers a framework to investigate medical signal complexity, irregularity, and non-linear dynamics by employing mathematical and computational techniques [5,6]. signal represent well the complexity of the signal. All five sub-signals were normalized to zero-mean for eliminating the bias and the method of counting zero-crossings was used for measuring the variability in each of the zero-mean trend sub-signals. The mean value was taken as the overall score of the structural fidelity of the signal. The accuracy and time efficiency of the proposed technique was compared with well-known complexity measures MSE [14], MPE [22], MFE [23], and MAAPE [24] during COVID-19 and after two months of recovery.
Literature Review
Complexity analysis of physiological signals in the understanding of the functioning of various human body systems has been a popular research area. PhysioBank, PhysioToolkit, and PhysioNet are interconnected resources that provide open-access physiological data, software tools, and educational materials in the field of biomedical engineering [25]. These are very valuable resources of complex physiologic signals which can be analyzed to gain insights into the function of human body systems.
Several works have focused on the analysis of physiological signals in understanding their complex dynamics and their relevance in characterizing physiological processes by considering chaos as an underlying model [26][27][28][29]. Peng et al. [26] examined the mosaic organization of DNA nucleotides, revealing complex patterns within DNA sequences and hidden relationships and complex characteristics in genetic information. Ivanov et al. [27] explored the scaling behavior of heartbeat intervals using wavelet-based time-series analysis, revealing the multifractal nature of heart rate variability. Ivanov et al. [28] investigated the scaling and universality in heart rate variability distributions, uncovering the presence of scaling behavior and universal patterns. Goldberger [29] discussed the application of non-linear dynamics, chaos theory, fractals, and complexity analysis in clinical settings, emphasizing their potential impact on clinical understanding and decision-making.
Entropy is a measure used to quantify the randomness or uncertainty in a time series [10,11]. It provides information about the distribution and predictability of data patterns and gives a measure of the complexity of the data [10][11][12]. In 1991 Pincus [10] proposed approximate entropy to determine the complexity of deterministic chaos and stochastic processes comprising at least 1000 data points. The capability of approximate entropy to discern the complexity of a small number of data points has been explored in numerous fields, especially in medical sciences. Sample entropy [11], a modified version of approximate entropy is used to quantify the complexity and irregularity of a time series by assessing the likelihood of occurring similar patterns, which holds promise for analyzing biomedical signals and physiological data. Fuzzy entropy is a measure used to assess the complexity and irregularity of a time series based on the fuzzy membership function approach [13]. It examines the likelihood that two vectors, which exhibit similarity over a certain number of points (m), will continue to remain similar for the subsequent (m + 1) points.
Bandt and Pompe proposed permutation entropy [12], to analyze the patterns obtained by permuting the order of values and to provide insights into the irregularity and unpredictability of the data. Permutation entropy does not consider the average amplitude values and equal amplitude values in the symbolized time series. Azami and Escudero [30] proposed amplitude-aware permutation entropy to address the issue of amplitude information in the formulation of permutation entropy. Threshold-dependent symbolic entropy was proposed by investigators to estimate the complexity of stride interval time series [31]. Porta et al. [32] furthered understanding by employing entropy and pattern classification techniques to characterize complexity in short heart period variability series, aiding in the classification of cardiac states. Lake and Moorman [33] addressed the challenge of accurately estimating entropy in a very short physiological time series, with a specific focus on the detection of atrial fibrillation in implanted ventricular devices.
The traditional single-scale entropy measures fail to account for multiscale variability inherent in the physiological signals and can yield contradictory results [14,22]. Multiscale entropy (MSE) extends the concept of entropy by considering different time scales. By analyzing the entropy values across different time scales, multiscale entropy provides a more comprehensive understanding of the dynamics and complexity of the time series at various levels of detail [14]. The different versions of scale-based entropy metrics have demonstrated successful applications across various domains. However, it faces a challenge when estimating the statistical reliability of sample entropy for coarse-grained series as the time scale factor increases. Investigators proposed other scale-based entropy measures such as MPE [22], MFE [23], and MAAPE [21] to account for shortcomings of MSE and applied them in numerous fields.
In several recent studies, researchers quantitatively described oxygen saturation dynamics and patterns in COVID-19 patients [34][35][36]. In [34] researchers performed retrospective observational analysis of the clinical information and high-resolution oxygen saturation recordings of 367 hospitalized critical and non-critical COVID-19 patients. The findings revealed that the oximetry-derived digital biomarkers were informative for assessing the disease severity and response to respiratory support of hospitalized COVID-19 patients. Zhang et al. [35] proposed a method to assesses the subject's degree of illness considering the supply chain and Industry 5.0 requirement and calculated the apnea-hypopnea index (AHI) and the subject's disease level. They used oxygen saturation signals to extract features based on the time domain, central tendency measure, approximate entropy, and Lempel-Ziv complexity. Using a random forest classifier, 92% accuracy was obtained in assessing the prevalence of obstructive sleep apnea-hypopnea syndrome. In [36], authors used a mathematical approach to describe the exercise ventilatory responses in patients with dyspnoea and breathing pattern disorder following COVID-19. Aliani et al. [37] conducted a study to investigate heart rate variability (HRV) of healthy subjects and COVID-19 patients and reported significant differences between the groups in several HRV parameters.
Dataset
The data used in the study comprises 44 recordings (26 male and 18 female) of oxygen saturation SpO2 acquired from confirmed COVID-19-infected patients and 44 recordings two months after recovery. Eleven subjects (25%) were diabetic and 18 (40.91%) were hypertensive. Five subjects (11.36%) were non-smokers and 37 (86.36%) were vaccinated. About 93% of the subjects suffered from COVID-19 infection once and about 7.0% twice. A Beurer PO-80 oximeter, which is a noninvasive device, was used for recording SpO2 [38]. The integrated recording function of the Beurer PO-80 provides continuous recording of up to 24 h [38]. The Beurer View and Evaluate SpO2 Assistant free software enables the device to synchronize with personal computers via USB. The necessary protocols were followed before acquiring data from patients during COVID-19 infections. The patients were briefed about the purpose of the study, and informed consent was taken for data collection. The University of Azad Jammu & Kashmir Ethical Committee approved the research. All methods were carried out in accordance with relevant guidelines and regulations. The patients were instructed to wash and dry their hands, put a pulse oximeter on the index finger, and bend the fingers so that their fingernails were pointing towards them. The recordings were initially completed over a 2 h period and patients remained in a rest state. The same subjects were approached two months after their recovery, and the same data collection process was repeated. Recordings that fulfilled the inclusion criteria, such as no previous history of pulmonary or cardiovascular problems, a recording length > 2 h, and availability of recordings both during infection and after recovery, were included in the study. The recordings were scanned for artifacts, which were replaced with mean values using zero-line interpolation, the most stable method compared to other methods of artifact removal [16]. The smallest number of data points in a file was 7421, and this number of points was used from each recording to perform pattern analysis of OSV signals using the Haar wavelet and different scale-based entropy metrics. Boghal and Mani [16] used 1 h recordings of OSV signals to test the integrity of CRCS in young and elderly healthy individuals. Long duration recordings improve the statistical reliability of evaluated metrics. As the subjects had to remain at rest during the data collection, we were able to record data of more than two hours, which is sufficient for performing pattern analysis of OSV signals.
Discrete Wavelet Transform
Consider the following discrete time series containing N equally spaced samples of an analog signal: Wavelet transform converts this signal to two signals of half its length. The first sub-signal is the running average, or approximation of the time series, and shows the trend of data, whereas the second sub-signal is a running difference that shows fluctuations or the details of the local variations that occur in the data. There have been different types of operations proposed for this transformation, but the key function of all of them is to find one smooth version of the original signal and another version that contains the local details. We can describe these transformations as follows: These sub-signals of a part of a typical OSV time-series signal are shown in Figure 1.
The trend signal can be transformed again using the same procedure to obtain the second stage of the trend and fluctuation signals. Depending on the application in hand, this process can be repeated for as many stages as required. We can describe this process of signal decomposition as below, where the subscripts added to the trend and fluctuation signals indicate their stage. Note that the sub-signals shown at any stage can be combined to retrieve the original signal F.
Proposed Method
The Haar wavelet is the simplest of all wavelets, and it simply takes the sum and difference of adjacent samples to calculate the trend and the fluctuation sub-signals. The elements of the trend and fluctuation sub-signals of the input signal F can be represented, respectively, as the pairwise sums and differences of its samples. Note that the Haar transformation divides the output by the square root of 2 to preserve the signal's energy; however, we can carry out our analysis without this division for simplicity, as we are interested only in the analysis of the structure of signals. This can improve computational efficiency.
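A one-stage Haar step implementing this idea is sketched below; following the text, the 1/√2 normalisation is omitted, and an odd trailing sample is simply dropped (how such edge cases are handled in the original implementation is not specified).

```python
import numpy as np

def haar_step(f):
    """One Haar stage: trend = sums, fluctuation = differences of adjacent samples."""
    f = np.asarray(f, dtype=float)
    if len(f) % 2:          # assumption: drop the last sample if the length is odd
        f = f[:-1]
    trend = f[0::2] + f[1::2]        # running average (up to a constant factor)
    fluctuation = f[0::2] - f[1::2]  # local variations
    return trend, fluctuation
```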
It has been hypothesized that the integrity of the cardio-respiratory control system can be established by analyzing the trend and fluctuation signals obtained through wavelet transformation of the OSV time series. Several statistical measures can test the structural complexity of these signals, including amplitude, mean value, number (frequency) of peaks over time, etc. We experimented with several of them and found that the variation in the trend signal represents the complexity of the signal well.
We subtract the mean value of the trend sub-signal from its original samples computed in (4) to eliminate the effect of any bias induced by measurement or a subsequent operation.
The number of times the zero-mean signal changes sign, from negative to positive or vice versa, is counted. This measure is found to reliably distinguish between the signals coming from the patients with COVID-19 disease and those who have recovered. This step finds the zero crossings of the trend signal: first, the sign of each element in the zero-mean trend sub-signal A is calculated, and then the number of sign changes, C, is counted. The value C gives a measure of the structural complexity of signal A.
Complete Proposed Algorithm
The complete proposed algorithm comprising the operations described above is shown in Figure 2 and can be summarized as follows (a code sketch of these steps is given after the list):
1. Apply the Haar transform to the OSV time series and convert it into two sub-signals, trend and fluctuations. Apply this function iteratively five times to obtain the sub-signals up to stage 5.
2. Discard the fluctuation sub-signals and perform the analysis on the trend signals only, i.e., [A1, A2, A3, A4, A5] computed in the previous step.
3. Subtract the mean value of each trend sub-signal to eliminate biases and normalize all five sub-signals to zero-mean form.
4. Measure the variability in each of the zero-mean trend sub-signals using the method of counting zero-crossings mentioned earlier in Equation (5).
5. Step 4 yields five values of the variability measure C computed using Equation (5). Their mean value is taken as the overall score of the structural fidelity of the signal.
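The following sketch strings the steps together using the haar_step function defined earlier; the exact zero-crossing convention (e.g. how exact zeros are treated) is an assumption, as Equation (5) is not reproduced here.

```python
import numpy as np

def structural_fidelity_score(osv, n_stages=5):
    """Average zero-crossing count of the zero-mean Haar trend sub-signals A1..A5."""
    scores = []
    trend = np.asarray(osv, dtype=float)
    for _ in range(n_stages):
        trend, _ = haar_step(trend)          # keep the trend, discard the fluctuation
        z = trend - trend.mean()             # zero-mean form
        signs = np.sign(z)
        signs = signs[signs != 0]            # ignore exact zeros (assumption)
        c = int(np.count_nonzero(np.diff(signs) != 0))  # number of sign changes
        scores.append(c)
    return float(np.mean(scores))
```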
Results and Discussions
Various complexity analysis techniques have been utilized in the past to evaluate the structural complexity of time series data. One widely employed approach for assessing the impact of cardio-respiratory system malfunction is the multi-scale entropy (MSE) analysis of oxygen saturation variability (OSV) signals [14]. In this section, we present the results of the proposed Haar wavelet-based technique and compare them with different versions of scale-based entropy measures, namely MSE [14], multiscale permutation entropy (MPE) [22], multiscale fuzzy entropy (MFE) [23], and multiscale amplitude-aware permutation entropy (MAAPE) [21], at temporal scales ranging from 1 to 20.
To begin the multi-scale entropy analysis, the first step involves constructing a coarsegrained signal by applying a moving average to non-overlapping samples of the OSV signal, with the scale parameter determining the number of samples used in the averaging process. Subsequently, the coarse-grained signal is divided into overlapping vectors of samples, where the vector size is determined by a second parameter known as the order. Once the coarse-grained time series is obtained, we compute the complexity of the OSV signals acquired from COVID-19 subjects during the infection and after recovery using entropy estimation techniques such as sample entropy [11], permutation entropy [12], fuzzy entropy [13], and amplitude-aware permutation entropy [29].
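The coarse-graining step described here is the standard moving-average construction; a small helper for it might look as follows (the subsequent entropy estimators themselves are not reproduced).

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of `scale` consecutive samples (scale = 1 returns x)."""
    x = np.asarray(x, dtype=float)
    n_full = (len(x) // scale) * scale
    return x[:n_full].reshape(-1, scale).mean(axis=1)

# Example: the entropy measures are then evaluated on coarse_grain(osv, s) for s = 1..20.
```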
In Figure 3, we present the mean values of MSE (top left panel) for the 44 COVID-19 patients during infection and after recovery at different scales within the range of 1 to 20. It can be observed that the mean entropy value after recovery is consistently higher than that during illness across all scales. The maximum separation between the subjects during and after recovery was observed at time scale 8. However, at very high values, the moving average tends to reduce the structural details, leading to a decrease in the observed differences. The top right panel of Figure 3 displays the results obtained using MFE at time scales 1 to 20. It is worth noting that the fuzzy entropy values become undefined at scales greater than 10, and they exhibit abrupt changes with varying scales. Moving on to the MPE results shown in the bottom left panel of Figure 3, they aim to differentiate OSV signals during the infection and after two months of recovery from COVID-19 at time scales 1 to 20. It can be observed that the MPE values are consistently smaller during the infection compared to those after two months of recovery. The maximum separation between the subjects during and after recovery was observed at time scale 8. Similarly, the bottom right panel of Figure 3 presents the results obtained using MAAPE, revealing a decrease in complexity during COVID-19 infection and an increase in complexity after recovery. The maximum difference between the subjects during and after COVID-19 infection was observed at time scale 10. Interestingly, unlike other scale-based entropy metrics (MSE, MFE, and MPE), the MAAPE values drop for both infected and recovering classes at higher scales. This behavior can be attributed to the fact that MAAPE considers amplitude information, unlike permutation entropy which focuses on the order of patterns for quantifying signal complexity. In summary, the estimates obtained from all four scale-based entropy measures indicate a loss of complexity during COVID-19 infection and an increase after two months of recovery, which aligns with other studies conducted on various physiological signals [14,21,22], indicating that the loss of complexity is a general characteristic of the disease.
For a more detailed analysis, Figure 4 presents the individual entropy values of the 44 subjects during infection and after recovery. The optimal scale value was selected for each entropy measure based on the analysis of the average scores presented in Figure 4, which exhibited significant differences. For MSE, MFE, MPE, and MAAPE, the entropy estimates showed an increase in 28, 30, 33, and 31 subjects, respectively. Notably, the permutation entropy values of 33 subjects (75% of the total) improved after recovery, while the remaining 11 subjects did not follow this pattern. Although 11 out of 44 may seem like a significant number, it can be argued that these results are promising given the potential data capture errors and the inability to repeat the experiment or further investigate the cases of the minority subjects that did not conform to the observed pattern. Similar trends were observed using other entropy measures, as depicted in Figure 5. Furthermore, we repeated the experiment using the proposed Haar wavelet-based technique and plotted the individual scores assigned by this method to the subjects during infection and after recovery. In this case, 34 subjects (77.3% of all subjects) showed improvement in their scores after recovery, while 10 subjects did not exhibit this progress, as illustrated in Figure 5. These results surpass the best outcomes achieved by any form of multi-scale entropy analysis. A comparison of the results is presented in Figure 6. It is important to note that multi-scale entropies necessitate the specification of several parameters, including scale, order of permutation, delay, similarity threshold, etc. In contrast, the proposed Haar wavelet-based technique does not require any user-defined parameters. This aspect contributes to the simplicity and ease of use of the proposed method, providing additional advantages over the existing multi-scale entropy-based methods.
Finally, Figure 7 showcases a comparison of the execution speeds of the various methods being evaluated. The proposed technique proves to be the fastest among all, with an average execution time of only 52.9 milliseconds for one subject's data. On the other hand, amplitude-aware permutation entropy is the slowest among the compared methods.
A few studies [34][35][36] quantitatively describe oxygen saturation dynamics and patterns in COVID-19 patients using time-domain and nonlinear techniques such as approximate entropy and Lempel-Ziv complexity measures. These techniques provided valuable insights for assessing the prevalence of obstructive sleep apnea-hypopnea syndrome, breathing pattern disorder, and the disease severity and response to respiratory support of hospitalized COVID-19 patients. The traditional entropy measures such as approximate entropy and Lempel-Ziv complexity are single scale. Like other physiological signals, OSV signals are the output of numerous interacting sub-components of the cardio-respiratory control systems (CRCS) operating on multiple time scales. Thus, single-scale, traditional entropy measures are unable to accurately yield dynamical information about interacting components of complex biological systems operating on multiple time scales. The scale-based entropy measures such as MSE [14], MPE [22], MFE [23], and MAAPE [21] have the potential to quantify the complexity of CRCS at multiple time scales using OSV signals. The multi-scaling procedure, however, adds to the computational burden and slows the process of analyzing the complexity of OSV signals [20]. The main contribution of this study was to propose an innovative technique for analyzing the dynamical complexity of OSV signals. The proposed technique using Haar wavelets not only improved the performance for distinguishing OSV signals during and after recovery, but was also time efficient.
The study also highlighted that complexity of OSV signals of COVID-19 patients decreased during COVID-19 compared to those after two months of recovery. The reduced complexity during COVID-19 reveals the lesser adaptive capability of CRCS to external stresses. After recovery, the adaptive capability of the CRCS started to increase revealing an increase in its dynamical complexity. The proposed technique can have applications in evaluating dynamical models of CRCS and clinical monitoring during disease and after recovery.
One of the major limitations of the study is that the cohort size is modest. In the pilot study data were used for analysis of the dynamical characteristics of OSV signals during COVID-19 and after two months of recovery. However, further studies with a large data set can be useful for accurate clinical decision-making.
Conclusions
Past research has established that the complexity of the human body's cardiac-respiratory signals decreases if the related body organs are not in their best health. Since these signals can be measured quickly with simple wearable devices available on the market, such analysis can help take preventive measures and monitor recovery. In this paper, we presented a simple measure that can determine the structural fidelity of the signals using wavelet transform. In addition, a dataset of oxygen saturation levels of 44 COVID-19 patients comprising more than two hours of recording of each subject was collected when they were infected and being treated and then again after two months of their recovery.
The proposed technique was tested on this dataset to validate its accuracy and time efficiency. It was found that 34 out of 44 subjects improved their score regarding the proposed measure. Improvement was not observed for the remaining ten subjects, or their scores were found to have dropped after recovery. Further investigation could help in determining the cause. The subjects might still not be fully recovered, or some noise might have affected the data collection process. Nevertheless, 34 out of 44 is a significant number which makes the proposed technique a candidate to be seriously considered for further investigations on larger data and as a tool for observing the recovery of patients.

Informed Consent Statement: Written informed consent has been obtained from the patient(s) to publish this paper.
Data Availability Statement: Not applicable. | 2023-08-16T15:18:54.382Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "2c97fe88a5aa1c40f48c4ed589e25ddc4bd92adf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9032/11/16/2280/pdf?version=1691901248",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "56fd73507c121047b0b3b2a1058bffc2337f2666",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234458580 | pes2o/s2orc | v3-fos-license | Searching for the optimal Transcranial Direct Current Stimulation ( TDCS ) target combined with peripheral electrical stimulation in chronic low back pain : a protocol for a randomized controlled trial
Backgroud: Low back pain (LBP) has been associated with severe impairments, primarily related to activities of daily living, functional ability and quality of life. A multimodal approach to pain management, such as transcranial direct current stimulation (tDCS) and peripheral electrical stimulation (PES), may improve outcomes in chronic LBP. However, the optimal cerebral target for stimulation still remains controversial. This pilot trial aims to investigate whether active stimulation could promote additional gains to the PES results in LBP patients. Our secondary objective is to investigate whether the stimulation of primary motor cortex and dorsolateral prefrontal cortex results in distinct clinical effects for the patients involved. Methods: Sixty patients with chronic low back pain will be randomized into one of three tDCS groups associated with PES: motor primary cortex, dorsolateral prefrontal cortex and sham stimulation. Each group will receive transcranial direct current stimulation at an intensity of 2 mA for 30 minutes daily for 10 consecutive days. Patients will be assessed with a Brief Pain Inventory (BPI), Roland Morris Disability Questionnaire (RMDQ), Medical Outcomes Study 36-item Short Form Health Survey (SF-36) and electromyography at baseline, endpoint (after 10 sessions) and 1-month follow up. Discussion: This study will help to clarify the additive effects of tDCS combined with peripheral electrical stimulation on pain relief, muscle function and improvement in quality of life. Additionally, we will provide data to identify optimal targets for management of chronic low back pain.
Recruitment
The study will be conducted in a Public Neuromodulation Unit, which provides specialized assistance to patients with chronic pain (cLBP). Participants will be recruited from hospitals and clinics, as well as through social media and support groups. Written informed consent will be obtained from all participants, and those interested in participating will have their records reviewed and will be contacted for inclusion in the study according to the eligibility criteria.
Randomization and Blinding
Participants will be randomly allocated via an online generator (www.random.org) to one of three groups (1:1:1): anodal tDCS of M1 + PES; anodal tDCS of DLPFC + PES; sham tDCS of M1 + PES. This sequence will be generated independently and remotely by a blinded investigator who will have no contact with the other research procedures, and the allocation will remain concealed until group assignment.
The hidden allocation process will be performed using sequential, numbered, opaque and sealed envelopes. The outcome assessors, trialists and participants will be blinded to the performed procedures.
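For illustration only — the protocol itself relies on www.random.org — the concealed 1:1:1 sequence described above could be produced offline with a small script like the sketch below. The block size of 6 and the helper name `blocked_allocation` are assumptions, not part of the protocol.

```python
# Illustrative sketch only (the protocol uses www.random.org): generate a blocked
# 1:1:1 allocation list that an independent investigator could seal into
# sequentially numbered opaque envelopes. Block size of 6 is an assumption.
import random

GROUPS = ["anodal tDCS M1 + PES", "anodal tDCS DLPFC + PES", "sham tDCS M1 + PES"]

def blocked_allocation(n_participants: int = 60, block_size: int = 6, seed: int = 42):
    assert block_size % len(GROUPS) == 0, "block must balance the three arms"
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = GROUPS * (block_size // len(GROUPS))
        rng.shuffle(block)                 # random order within each balanced block
        sequence.extend(block)
    return sequence[:n_participants]

for envelope_number, arm in enumerate(blocked_allocation(), start=1):
    print(f"Envelope {envelope_number:02d}: {arm}")
```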
Attrition and Adherence
Attrition will be considered under the following conditions: (a) two consecutive or three alternating absences during treatment; (b) the inability to complete the post-test and follow-up; and (c) the development of any disabling condition preventing the participant's participation in the study. Regarding adherence strategies, up to two non-consecutive absences can be compensated for during the following week. Flexible therapy hours will also be offered, and the relatives of the patients will be contacted directly by telephone to confirm the dates of evaluation, thereby reinforcing treatment adherence (Brunoni et al., 2011). Additional measures to avoid dropouts will also be applied, including periodic assessments of treatment satisfaction, discussion of difficulties in continuing treatment (for example, the logistics of trips to the laboratory) and attempts to resolve and avoid possible problems that may affect adherence to and continued participation in the study. tDCS will be delivered through 4x4 cm sponge electrodes moistened with 0.9% saline solution. The anodic electrode will be placed on C3 to stimulate the left primary motor cortex and on F3 to stimulate the left dorsolateral prefrontal cortex, according to the international 10-20 EEG system (Homan, Herman & Purdy, 1987).
For sham stimulation, the anodic electrode will be positioned over the left primary motor cortex, but the current will be turned off automatically after 30 seconds. The reference electrode will be positioned over the contralateral supraorbital region for all groups.
To verify the chosen electrode configuration, we will perform a simulation of the current distribution and flow for this tDCS configuration (Figure 2).
PES
The participants will be positioned in ventral decubitus position and submitted to continuous application of PES for 30 minutes, using a strong but comfortable intensity adjusted according to the sensitivity of each volunteer. Four self-adhesive electrodes with dimensions of 5 x 5 cm will be placed at the height between the T12 and S1 vertebrae in order to cover the entire lumbar area. A 20Hz frequency and pulse duration of 330ms will be | 2020-12-31T09:01:02.979Z | 2020-12-27T00:00:00.000 | {
"year": 2020,
"sha1": "3a84511ac9525073e51cf3578badbb37af895d6c",
"oa_license": "CCBY",
"oa_url": "https://rsdjournal.org/index.php/rsd/article/download/11318/10034",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "59b314760b1126170ee04c9ffde0389071b47189",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
103649610 | pes2o/s2orc | v3-fos-license | Correlating the synthesis protocol of aromatic polyimide film with the properties of polyamic acid precursor
Thousands of different copolyimide combinations render it technically impossible to have a single universal synthesis method to produce aromatic polyimide film. This study aimed to outline the selection of synthesis protocol, either through the casting of chemically imidized polyimide solution or thermal imidization of polyamic acid (PAA), to produce the polyimide film. The rheological behaviour, molecular weight, and solubility of five structurally different PAA were analysed and correlated to both imidization methods. In this work, a tough polyimide film was successfully synthesized by casting the chemically imidized polyimide derived from high viscosity (> 81 cP) and high molecular weight (≥ 1.35 x 10^6 g/mol) PAA. On the contrary, both low viscosity (< 13 cP) and high viscosity (> 81 cP) PAA demonstrated the possibility to produce polyimide film via the thermal imidization route. The longer molecular chain of ODPA-6FpDA:DABA (3:2) polyimide produced from thermal imidization had restricted the passage of CO2 across the polyimide film when it was applied in the gas separation application. The outcome from this work serves as a guideline for the selection of a suitable polyimide film synthesis protocol, which will minimize the time and chemical consumption in future exploration of new polyimide structures.
Introduction
With the advancement of technology, global demand for high-performance materials for industrial applications is increasing. In view of that, aromatic polyimide, with attractive features such as high dimensional stability, excellent thermal and thermo-oxidative stability, and good dielectric and optical properties, is found to be a material with great potential for application in many fields [1][2][3]. At present, polyimides are extensively used as microelectronic parts, high temperature insulators for aircraft, substrates for flexible printed circuit boards, structural adhesives and gas separation membranes [4][5][6][7].
In general, polyimide can be processed into film, mold, fiber and foam. Among the polyimide forms, polyimide film constitutes the largest end user market [8]. It is evidenced by the wide application of Kapton polyimide film produced by Du Pont. Besides, Apical introduced by Kaneka as well as Upilex fabricated by Ube are among the reputable commercially available polyimide films [9]. The properties of polyimide film can be easily tailored through molecular design, in which the dianhydrides and/or diamines monomers used in polyimide synthesis are systematically selected to fulfill the specific application requirement. However, the diversity of dianhydride-diamine combinations makes it impractical to have a single universal synthesis protocol of polyimide film attributed to different processability.
Pioneering works have proposed different routes to produce polyimide film. One of the common synthesis routes is the casting of a polyimide dope solution [10][11][12][13]. In this approach, polyimide powder is first obtained by performing the conventional two-step reaction, in which the polyamic acid (PAA) precursor is formed through the polycondensation of a dianhydride and a diamine, followed by chemical imidization of the PAA to produce the polyimide. Typically, chemical imidization is carried out by adding a basic catalyst (triethylamine or pyridine) and a dehydrating agent (acetic anhydride) into the PAA solution to promote the ring closure process (conversion of the amide to the imide functional group with the loss of a water molecule). From the aspect of energy efficiency, the chemical imidization route is preferable as the ring closure process is readily achieved at ambient temperature [2].
However, owing to the insolubility of certain polyimides, the previous route is not recommended for polyimide film synthesis [14]. In this context, the PAA solution is directly cast onto a substrate. Thereafter, thermal imidization is carried out by heating the thin PAA layer gradually up to 250-350 °C to convert the PAA precursor to polyimide, hence producing the polyimide film [8,15]. This method is especially useful when the final product is desired in the form of a film. However, a controlled heating protocol is needed, as too rapid heating will cause the formation of bubbles in the cast PAA layer [2]. This will subsequently cause failure of the film production. Basically, this approach is relatively simple and time saving since it omits the extra step of producing polyimide powder as compared to the previous chemical imidization route. Besides, no dangerous reagents are involved in this method.
The present study aimed to outline the selection of synthesis protocol, either through the casting of chemically imidized polyimide solution or thermal imidization of the casted PAA thin layer, to produce the polyimide film. Since PAA acts as the precursor of polyimide, its rheological behavior and molecular weight were correlated to the synthesis protocols. By taking membrane gas separation as the final application, permeation tests were performed to assess the properties of the polyimide films that were produced through the two different imidization approaches.
Synthesis of PAA precursor
The PAA precursor was synthesized via polycondensation of dianhydride and diamine. Owing to the water-sensitive nature of the reaction, all monomers were dried overnight at 40 °C. Throughout the reaction, the set-up was purged continuously with purified nitrogen to remove trapped moisture. Two diamines (diamine 1 = 6FpDA or DAM and diamine 2 = DABA) with a molar ratio of 3:2 were first dissolved in 1-methyl-2-pyrrolidinone (NMP). A stoichiometric amount of dianhydride (6FDA, BTDA, ODPA or PMDA) was then added slowly into the diamine solution. The solution was stirred for 24 h at ambient temperature (25 °C) to yield a 20 wt.% PAA solution. The PAA precursor was then converted to polyimide through either chemical or thermal imidization. A few combinations of dianhydride and diamine were studied, where the polyimide structures are displayed in figure 1. In the discussion, the polyimide is named A-B:C, where A is the dianhydride, B is diamine 1, C is diamine 2, and the B:C ratio was 3:2.
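As a worked example of the stoichiometric bookkeeping described above (two diamines in a 3:2 molar ratio, a stoichiometric amount of dianhydride, and enough NMP for a 20 wt.% solution), a minimal sketch is given below. The molar masses in the example call are placeholders and the helper `paa_batch` is an illustration, not a recipe taken from the paper.

```python
# Minimal bookkeeping sketch for the batch described above: a 3:2 mixture of two
# diamines, a stoichiometric amount of dianhydride, and enough NMP to give a
# 20 wt.% polyamic acid (PAA) solution. Molar masses are inputs, not values from
# the paper; solids fraction is taken as monomer mass / (monomer + solvent mass).
def paa_batch(dianhydride_mw: float, diamine1_mw: float, diamine2_mw: float,
              total_diamine_mol: float = 0.05, solids_wt_frac: float = 0.20):
    diamine1_mol = total_diamine_mol * 3 / 5          # 3:2 molar ratio
    diamine2_mol = total_diamine_mol * 2 / 5
    dianhydride_mol = total_diamine_mol               # stoichiometric 1:1 overall
    monomer_mass = (diamine1_mol * diamine1_mw
                    + diamine2_mol * diamine2_mw
                    + dianhydride_mol * dianhydride_mw)
    solvent_mass = monomer_mass * (1 - solids_wt_frac) / solids_wt_frac
    return {"diamine1_g": diamine1_mol * diamine1_mw,
            "diamine2_g": diamine2_mol * diamine2_mw,
            "dianhydride_g": dianhydride_mol * dianhydride_mw,
            "NMP_g": solvent_mass}

# Example with placeholder molar masses (g/mol) — substitute the actual monomers.
print(paa_batch(dianhydride_mw=444.2, diamine1_mw=334.3, diamine2_mw=152.2))
```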
Film preparation using chemically imidized polyimide
The PAA was chemically imidized through the addition of equimolar triethylamine (catalyst) and acetic anhydride (dehydrating agent) into the PAA solution. The solution was stirred for 24 h under nitrogen purging at ambient temperature (25 °C) to produce the polyimide solution. Thereafter, the polyimide was precipitated and washed several times with methanol. The precipitated polyimide was then air-dried in a fume hood for 16 h, followed by drying at 200 °C for 24 h under vacuum.
A 10 wt.% dope solution was prepared by dissolving the polyimide in THF (6FDA- and ODPA-based polyimides) or NMP (PMDA- and BTDA-based polyimides). The dope solution, with a volume corresponding to an initial film thickness of 1300 µm, was poured onto a glass petri dish placed inside a glove box with a controlled environment. After two days of phase inversion, the film was dried in a vacuum oven at 180 °C for 24 h to eliminate the residual solvent. Afterward, the film was thermally treated in a dual-zone split tube furnace (MTI-OTF-1200X-80-II, USA) under nitrogen purging to promote decarboxylation crosslinking. The thermal treatment was performed in three steps: heating from room temperature to 330 °C at a fast ramping rate of 5 °C/min, then at a slow ramping rate of 1 °C/min to 370 °C, and dwelling at 370 °C for 1 h.
Film preparation via thermal imidization of PAA precursor
The 20 wt.% PAA solution was diluted to a concentration of 10 wt.% using NMP. A specific volume of the 10 wt.% PAA solution, corresponding to an initial film thickness of 1300 µm, was poured onto a glass petri dish placed inside a vacuum oven. The cast PAA thin layer was then thermally imidized to a polyimide film by successive heating at 60 °C, 150 °C and 200 °C for 1 h each. The produced polyimide film was further thermally treated in a dual-zone split tube furnace with the thermal treatment protocol described in section 2.3. However, an additional 30 min heating step at 300 °C was included to ensure complete imidization of the PAA precursor.
Characterization
The viscosity of the 10 wt.% PAA solution was measured with a Brookfield viscometer (model: DV2TLV, USA). The measurement was carried out at room temperature (25 °C). A multiple-point averaging method was selected to obtain the average viscosity at a rotation speed of 10 rpm.
Gel permeation chromatography (GPC, Agilent Technology 1260 Infinity, USA) equipped with a PLgel 5 µm MIXED-C column (300 mm x 7.5 mm, 5 µm particle size) was employed to determine the average molecular weight (Mw) of the PAA precursors and polyimides. Samples (PAA precursors and polyimides) were prepared in NMP or THF at 0.05 wt.% concentration. Throughout the analysis, the flow rate of the mobile phase was controlled at 1 mL/min. The elution time of the samples was detected using the variable wavelength detector (VWD), which was then correlated to the Mw of the PAA and polyimide.
The gas separation performance of the synthesized polyimide films was analyzed using a membrane cell with an effective area of 7.07 cm² at 25 °C. The whole system was thoroughly purged with purified nitrogen prior to the permeation test. After reaching a steady condition, the permeate flow rate was measured with a bubble flow meter. The gas permeability was calculated from the steady permeate flow rate according to equation (1), where P is the gas permeability in Barrer (1 Barrer = 1 x 10^-10 cm³(STP)·cm/(cm²·s·cmHg)), Q indicates the permeate flow rate (cm³/s), p refers to the downstream pressure (Pa), l signifies the membrane thickness (µm), T denotes the operating temperature (K), A represents the effective membrane area (cm²) and Δp stands for the differential pressure (cmHg).
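A small sketch of how equation (1) could be evaluated from a bubble-flow-meter reading is given below. The STP correction (273.15 K, 76 cmHg) follows the standard constant-pressure/variable-volume permeation formula; since the extracted text omits the equation itself, the exact form and the helper `permeability_barrer` are assumptions rather than the authors' exact expression.

```python
# Sketch of evaluating equation (1): permeability in Barrer from a bubble-flow-meter
# reading. The STP correction (273.15 K, 76 cmHg) follows the standard
# constant-pressure/variable-volume formula; treat it as an assumption where the
# extracted text omitted symbols. Input units follow the variable list in the text.
def permeability_barrer(flow_cm3_per_s: float, downstream_p_Pa: float,
                        thickness_um: float, temperature_K: float,
                        area_cm2: float, delta_p_cmHg: float) -> float:
    downstream_p_cmHg = downstream_p_Pa / 1333.22      # Pa -> cmHg
    thickness_cm = thickness_um * 1e-4                  # µm -> cm
    # volumetric flow corrected to STP
    q_stp = flow_cm3_per_s * (downstream_p_cmHg / 76.0) * (273.15 / temperature_K)
    permeability = q_stp * thickness_cm / (area_cm2 * delta_p_cmHg)  # cm3(STP)·cm/(cm2·s·cmHg)
    return permeability / 1e-10                                      # 1 Barrer = 1e-10 of those units

# Example: 0.01 cm3/s permeate at atmospheric downstream pressure, 50 µm film,
# 25 °C, 7.07 cm2 membrane area, ~2 bar (150 cmHg) differential pressure.
print(f"{permeability_barrer(0.01, 101325.0, 50.0, 298.15, 7.07, 150.0):.1f} Barrer")
```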
Results and discussion
The five PAA precursors studied in this work demonstrated two extremely different rheological behaviors, which can be categorized into two groups: high viscosity (> 81 cP) and low viscosity (< 13 cP). Principally, the viscosity of a PAA solution is closely related to the PAA molecular weight, which can be qualitatively interpreted from its precipitated form. Figure 2 displays the big clusters of PAA precipitate formed when the high viscosity PAA precursors (> 81 cP) were precipitated in deionized (DI) water. On the contrary, the precipitation of the low viscosity PAA precursors (< 13 cP) yielded a fine, powdery deposit. By comparing both precipitated forms, it can be deduced that the bigger lump of PAA (left image in figure 2) visually indicated a higher molecular weight than the powdery deposit (right image in figure 2). It is suggested that PAA of higher Mw should have more restricted molecular mobility owing to the higher internal friction between molecules. The greater flow resistance hence increased the viscosity of the solution. The result was in good agreement with the findings of other researchers, where a more viscous polymer solution always reflects a higher molecular weight of the polymer [16,17].
Quantitatively, the molecular weight of the PAA was analyzed by GPC, as summarized in table 1. The high viscosity PAA (> 81 cP) consistently revealed a higher molecular weight than the low viscosity PAA (< 13 cP). As for the less viscous BTDA-DAM:DABA and PMDA-DAM:DABA, the molecular weights of the PAA were an order of magnitude smaller than those of the high viscosity PAA. Indeed, the GPC results supported the previous discussion of figure 2, where the higher molecular weight PAA was present in the more viscous solution. In this work, the solubility of the PAA precipitate was tested using THF, the preferred solvent for synthesizing polyimide film owing to its high volatility. It is hypothesized that if a PAA does not dissolve in THF, THF will also be unable to dissolve the subsequently chemically imidized polyimide, since their chemical properties are essentially identical (same monomer combinations). In such a case, the chemically imidized polyimide approach would not be feasible for producing the polyimide film; instead, the thermal imidization route is preferable. After 24 h of stirring, the PAA precipitates of 6FDA-DAM:DABA, 6FDA-6FpDA:DABA and ODPA-6FpDA:DABA were shown to be fully dissolved in THF, where clear and homogeneous solutions were observed (data not shown). Meanwhile, suspensions were observed when dissolving the PAA of BTDA-DAM:DABA and PMDA-DAM:DABA, which indicated the poor solubility of both PAA precursors in THF. Hence, it was deduced that further chemical imidization of the BTDA-DAM:DABA and PMDA-DAM:DABA PAA precursors was not recommended. Both PAA precursors should proceed to membrane casting followed by thermal imidization.
For a fair comparison between the different synthesis routes, all five PAA combinations were subjected to both chemical and thermal imidization to produce polyimide films. NMP was used as the casting solvent for those polyimides that did not dissolve in THF. As illustrated in figure 3, defect-free polyimide thin films were obtained for the chemically imidized 6FDA-DAM:DABA, 6FDA-6FpDA:DABA and ODPA-6FpDA:DABA polyimides (figures 3a, 3c and 3e). The successful formation of defect-free polyimide films was directly related to the Mw of the polyimide. It is reasonable to assume that the trend in polyimide Mw behaves similarly to that of the PAA Mw, as the polyimide is formed from the PAA precursor with only the loss of a water molecule [18]. Therefore, the high Mw of the 6FDA-DAM:DABA, 6FDA-6FpDA:DABA and ODPA-6FpDA:DABA PAA precursors in table 1 was reflected in the high Mw of the corresponding polyimides. During film formation, mass exchange occurred between the solvent (THF/NMP in the casting dope solution) and the non-solvent (water vapor in the atmosphere), whereby polyimide molecules in the casting dope tended to attract each other. The higher Mw of the polyimide promoted better interaction between the molecules, hence forming a tough and defect-free polyimide thin film [19]. The significant contribution of polymer Mw to film-forming was further proven as the polyimides with lower Mw failed to produce tough thin films (figures 3g and 3i). The BTDA-DAM:DABA polyimide film showed serious shrinkage and cracking (figure 3g), while a fragile, cracked film was also observed for the PMDA-DAM:DABA polyimide (figure 3i). It was believed that the interaction between the low molecular weight polyimide chains was insufficient to ensure proper film formation. Up to this point, the experimental observations stressed the necessity of a high viscosity and high molecular weight PAA precursor for successful film fabrication via the chemically imidized polyimide approach.
Unfortunately, polyimide film-forming via thermal imidization of the cast PAA thin layer did not correlate well with the viscosity and/or molecular weight of the PAA precursors. Either high viscosity-high molecular weight PAA (ODPA-6FpDA:DABA, figure 3f) or low viscosity-low molecular weight PAA (BTDA-DAM:DABA, figure 3h) showed the possibility to form an intact polyimide film. The results hinted that another factor might be governing the film formation during thermal imidization. Hence, the stiffness of the thermally imidized polyimides was tested by dissolving the thermally imidized polyimide films (figures 3b, 3d, 3f, 3h and 3j) in THF. The well-formed polyimide films in figures 3f and 3h (ODPA-6FpDA:DABA and BTDA-DAM:DABA) were insoluble in THF, while the other, cracked films (figures 3b, 3d and 3j) were soluble in THF. This indicated that stiffer polyimide chains were formed by the ODPA-6FpDA:DABA and BTDA-DAM:DABA PAA throughout the thermal imidization process [20]. It is well accepted that the stiffness of the polyimide chain increases with the degree of imidization [2]; hence it can be inferred that both the ODPA-6FpDA:DABA and BTDA-DAM:DABA polyimide films had achieved a sufficient imidization degree. Owing to their stronger mechanical strength, the stiffer ODPA-6FpDA:DABA and BTDA-DAM:DABA polyimide chains were thus able to produce intact polyimide films (figures 3f and 3h).
It is of interest to analyze the properties of the polyimides obtained from the two different imidization approaches. Figure 4 presents the molecular weight of the polyimides synthesized by chemical imidization and thermal imidization. Generally, the thermally imidized polyimides consistently demonstrated higher molecular weights than the chemically imidized polyimides. This phenomenon can be explained from the aspect of PAA hydrolytic stability. The PAA precursor is well known to be hydrolytically unstable: the PAA chain cleaves under humid conditions, which lowers the molecular weight of the resulting polyimide [21]. In chemical imidization, the water liberated during the ring closure process (imidization) was removed by acetic anhydride. However, the removal of water molecules occurred by chance, depending on the effective contact between acetic anhydride and water molecules during stirring. This means that the possibility of chain cleavage by water molecules during chemical imidization still existed; thus, relatively shorter polyimide chains were observed (figure 4). Since the thermally imidized polyimides possessed the higher molecular weight, it could be postulated that thermal heating was more effective in removing the water by-product from the system, hence minimizing the chain scission that reduces the molecular weight of the polyimide. The molecular weight of the polyimide will not only affect the film formation but also its effectiveness in gas separation. In view of the increasing interest in carbon dioxide removal, and taking ODPA-6FpDA:DABA as an example, the gas transport behavior of the polyimide films prepared by the two approaches was evaluated. As shown in figure 5, the chemically imidized ODPA-6FpDA:DABA film demonstrated higher CO2 permeability over the tested pressure range (up to 6 bar). As discussed earlier, the chemically imidized polyimide exhibited a lower molecular weight than the thermally imidized polyimide (figure 4). In this regard, more polymer chain ends existed within the chemically imidized polyimide film matrix [22]. In fact, the higher number of polymer chain ends contributed towards a higher free volume of the polyimide film [23]. Hence, more free space was available for the transport of CO2 across the chemically imidized ODPA-6FpDA:DABA film, which subsequently increased the CO2 permeability.
Conclusions
The present work provided guidance on the selection of the synthesis protocol to produce polyimide film, by correlating the synthesis protocol to the molecular weight and rheological behavior of the PAA precursor. A high viscosity (> 81 cP) and high molecular weight (≥ 1.35 x 10^6 g/mol) PAA precursor was necessary to ensure the formation of a defect-free polyimide film through the casting of the chemically imidized polyimide solution. On the contrary, the production of polyimide film via thermal imidization of the cast PAA layer was feasible using both low viscosity (< 13 cP) and high viscosity (> 81 cP) PAA precursors. It was speculated that a high imidization degree was the prerequisite for obtaining a defect-free polyimide film via the thermal imidization route. In comparison, the chemically imidized polyimide consistently showed a lower molecular weight than the thermally imidized polyimide. The shorter chain length of the chemically imidized ODPA-6FpDA:DABA (3:2) film resulted in more free space for the transport of CO2 across the film, which subsequently increased the CO2 permeability. | 2019-04-09T13:02:10.940Z | 2017-06-01T00:00:00.000 | {
"year": 2017,
"sha1": "1da2fea60e20a2ed9a7127a1671b9847f282fb14",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/206/1/012049",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b1107fd840c9ce1817dd3849193428efde68d838",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
252943165 | pes2o/s2orc | v3-fos-license | Revitalizing an Urban Forest Park in Surakarta
Balekambang City Park is one of the tourist attractions in Surakarta City. The park is unique because it is a tourism forest located in the middle of the city. It was revitalized in 2007 after some parts, such as the urban forest, were damaged due to neglect from the local government and park managers. Now, in 2022, another revitalization process is needed, as the park has once again fallen into poor condition and the urban forest has been closed. The park has considerable tourism potential and also plays an essential role in environmental sustainability. The present study aimed to describe, design, and analyze a revitalization effort for Balekambang City Park to maximize its potential. Qualitative descriptive methods were used.
Apart from its cultural potential, Balekambang City Park also has educational potential. This park has an urban forest with various plants, such as weeping paperbark, pine, and other plants [2]. In addition, there are also some interesting animals such as muntjac, deer, geese, and many others. If used and managed properly, Balekambang City Park can be a place and a means of educating the public about several types of flora and fauna and their functions. Based on the explanation above, this study aimed to maximize the potential of Balekambang City Park to add some positive impacts for the government, managers, and community.
Preservation of the urban forest in Balekambang City Park, Manahan Village, Banjarsari District, Surakarta City is one of the efforts to revitalize natural resources, especially soil, water, and vegetation. Efforts to revitalize urban forests need to be carried out because urban forests provide many benefits for the environment and the community.
Based on the background, the research problem is: "What efforts must be made to revitalize the urban forest in Balekambang City Park?" We would explore the inhibiting and supporting factors of urban forest revitalization efforts as a foundation in planning and implementing the revitalization effort. After the revitalization, the government, related parties, and the community must take maintenance measures to support the sustainability of the urban forest and avoid it being neglected again.
According to Danisworo, revitalization is an effort to revitalize an area or part of a city that was once vital but then experienced a decline (degradation). Revitalization comes in two scales, macro and micro. The process of area revitalization includes the improvement of physical aspects, economic aspects, and social aspects. The revitalization approach must recognize and utilize the potential of the environment (history, meaning, uniqueness of the location, and image of the place). Ministry of Settlement and Regional Infrastructure of the Republic of Indonesia states that revitalization is a series of efforts to revive areas that tend to stop developing, increase the strategic and significant vitality values of areas with high potential, and/or control disorganized areas.
From the above definition, it can be concluded that revitalization is an effort to recycle to revitalize the primary function, or in other words, returning a place to the vitality and the primary function [3]. The Minister of Public Works and Public Housing, Basuki Hadimoeljono, said that his ministry would start working on revitalizing Balekambang City Park, Surakarta, Central Java, to become the center of Javanese culture in 2022.
The main objective of this research was to direct and restore the function of recreation and cultural tourism in Balekambang City Park through the revitalization of urban forests.
The other purpose was to develop and formulate a planning program for the park's utilization as an arts and cultural center and as a botanical, educational, and recreational park. Meanwhile, this research would be useful in providing input and direction to restore the functions of recreation, education, and cultural arts in Balekambang City Park, which could later be used as planning recommendations for the regional government and interested parties and contribute to tourism sector development.
Method
This research was conducted at one of the tourist attractions in Surakarta City, namely Balekambang City Park, located on Jl. Balekambang Number 1, Manahan, Banjarsari District, Surakarta City, Central Java 57139. This location was chosen because the park was poorly managed and its potential was used suboptimally, in terms of land use, urban forest management, and cultural and tourism potential. These factors made Balekambang City Park less attractive to tourists and the people of Surakarta City.
Research Types and Approach
This research aimed to describe, analyze, and design an update or revitalization for Balekambang City Park to maximize the existing potential. This study employed a qualitative descriptive method. According to [4], qualitative descriptive analysis is used to analyze, describe, and summarize various conditions and situations from the various data collected, from interview or observation results, about the studied problems. This method helps research an object, with the researcher acting as the key instrument.
Data are collected through triangulation (combined) techniques, which will be analyzed inductively or qualitatively [5]. Instead of generalization, the results of qualitative research put an emphasis more on meaning. Meanwhile, according to [6], qualitative descriptive analysis is an analytical method based on post-positivist philosophy, which is used to examine the condition of natural objects, where the researcher is the key instrument. Qualitative research emphasizes meaning rather than generalization [7].
Primary Data
Primary data is a source of research data that directly provides data to data collectors and not through intermediary media [6]. The primary data in this research were the results of observations and interviews with the Department of Culture and Tourism managers.
Secondary Data
Secondary data is a source of research data that does not directly provide data to data collectors [8]. The secondary data in this research was an overview of several literature studies, such as journals and other literacy sources.
Data Collection Technique
Data collection methods used in this research were: 1. Unstructured interviews: data are collected by asking questions directly to respondents, and researchers do not use interview guides.
2. Observation: data are collected by making direct observations of existing objects, not limited to human behavior [9].
3. Documentation: a document study complements to observations and interviews in qualitative research. It helps to improve the credibility of qualitative research results [10].
Result and Discussion
This research took place in Balekambang City Park, one of the tourism attractions in Surakarta City. Surakarta is a small city in Central Java Province. Now the city is increasingly attracting tourists with its cultural and historical heritage. In addition, Surakarta is unique compared to other cities. One of its unique features is Balekambang City Park, an urban forest located in the middle of the town. This park has an area of about five hectares, with many trees growing there. This tourism attraction was built in 1921 by KGPAA Mangkunegara. At that time, the park's name was Partini Tuin, taken from the name of the king's favorite daughter, Partini. The park was once not well-maintained, but in 2007 the Surakarta City Government revitalized it and turned it into a new tourist spot. As previously explained, the park was built as a gift for the king's two daughters, namely GRAy Partini Husein Djayaningrat and GRAy Partinah Sukanta. The two daughters' names were used to name parks in Balekambang, namely Partini Tuin Park and Partinah Bosch Park.
The last revitalization process took place in 2007, more than 13 years ago. Since then, there has been no other revitalization process. The park has become poorly maintained, and many places have been abandoned or closed, such as the urban forest park. This urban forest park has considerable tourism potential and an essential role in environmental sustainability. Urban forests are important because they help clean the air, maintain groundwater availability, moderate high temperatures (sun protection), protect animals, and provide space for recreation. Urban forests can also reduce the impact of harsh weather by reducing wind speed, reducing flooding, providing shade, and mitigating the effects of global warming. Here are some benefits of urban forests:
Aesthetics
Concrete and skyscrapers can indeed form a beautiful urban landscape. However, this beauty becomes meaningless if it is not interspersed with green trees. The combination of natural beauty and buildings can form a more aesthetic city. Often, big cities around the world look more beautiful because they have green and lush gardens [11].
Hydrology
Urban forest soil and trees can regulate water management. During the rainy season, the forest soil and trees collect rainwater to not flow directly to lower places, thereby reducing the risk of flooding. Meanwhile, the forest soil and trees can provide groundwater for the townspeople in the dry season.
Climatology
Urban forests can affect the surrounding microclimate, such as lowering the ground surface temperature so the temperature is cooler. It will be beneficial, especially for cities with tropical climates like those in Indonesia.
Animal Habitat
Urban forests are places for plants and also various types of animals. The living space of animals in urban areas is increasingly decreasing, and urban forests can protect these animals.
Pollution Reduction
Big cities are usually full of pollution, both air and water pollution. Trees can reduce harmful pollution as leaves can filter out dust, dirt, and other toxic gases.
Carbon Storage
CO2 is one of the greenhouse gases that cause global warming. Forests or trees, and biomass such as wood and leaves, are effective CO2 absorbers from the air.
Education
Urban forests can be a place for environmental education, especially for children.
Children may learn many things from a natural ecosystem, especially those related to life sciences. In addition, urban forests can raise public awareness of the importance of conservation.
Recreation
Urban forests are good to relieve stress from the hustle and bustle of city life.
People can also use it as a place for sports activities, such as jogging or cycling.
Economics
If the management is good, the urban forest can become a tourism attraction.
Many big cities in the world sell the beauty of their urban forests to travelers. The direct economic impact of tourism comes from collecting entrance tickets, and the indirect benefit comes from the hotel, restaurants, souvenir crafts, and other community businesses.
According to the revitalization plan, Partini Tuin still functions as a water park and water catchment area. The development of Partini Tuin will focus more on its conservation value. One way to keep the water clean is by installing or placing a Joglo, or hall, in the middle of the pool. The Joglo will also add functional value because people can relax there and enjoy the beauty of the lake, and it can also be used for regular Keroncong music performances or other performances. The Tertoyoso swimming pool will be reused, and naming each tree can help to support the educational purpose for tourists. Table 2 presents data on trees in Balekambang City Park. The softscape data give the percentage of softscape elements in Balekambang City Park combined with the classification of plant types and sizes; this percentage is presented in the following table.
Conclusion
Balekambang City Park is one of the tourism attractions in Surakarta City. The park is unique because it is a tourism forest located in the middle of the city. It has an area of about five hectares and is home to various kinds of trees. The park was not well-maintained until the 2007 revitalization by the Surakarta City Government, which turned it into a new tourist attraction. Since then, there has been no other revitalization process.
The park has become poorly maintained, and many places have been abandoned or closed, such as the urban forest, which has considerable tourism potential and an essential role in environmental sustainability. According to the revitalization plan, Partini Tuin still functions as a water park and water catchment area. The development of Partini Tuin will focus more on the conservation value. All facilities and infrastructure that are currently being arranged will be fully used to benefit the public and complement all activities held at Balekambang City Park. During the observation, Balekambang City Park was in quite good condition because it had recently been revitalized. However, the forest area located in the front corner of Balekambang City Park was still closed so that the revitalization could be supported optimally. Through the Minister of Public Works and Public Housing (PUPR), some sources stated that the central government would start working on the revitalization of Balekambang City Park. By 2022, the park is expected to become the center of Javanese culture. | 2022-10-18T17:25:32.743Z | 2022-10-12T00:00:00.000 | {
"year": 2022,
"sha1": "3006663b83f4154ba9254195e7dc999c9b03d58d",
"oa_license": null,
"oa_url": "https://knepublishing.com/index.php/KnE-Social/article/download/12164/19827",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3b166cda2f192d0d833726823a6cccd516e6a66f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
7920004 | pes2o/s2orc | v3-fos-license | Nanoparticle-based hyperthermia distinctly impacts production of ROS, expression of Ki-67, TOP2A, and TPX2, and induction of apoptosis in pancreatic cancer
So far, the therapeutic outcome of hyperthermia has shown heterogeneous responses depending on how thermal stress is applied. We studied whether extrinsic heating (EH, hot air) and intrinsic heating (magnetic heating [MH] mediated by nanoparticles) induce distinct effects on pancreatic cancer cells (PANC-1 and BxPC-3 cells). The impact of MH (100 µg magnetic nanoparticles [MNP]/mL; H=23.9 kA/m; f=410 kHz) was always superior to that of EH. The thermal effects were confirmed by the following observations: 1) a decreased number of vital cells, 2) altered expression of pro-caspases, 3) production of reactive oxygen species, and 4) altered mRNA expression of Ki-67, TOP2A, and TPX2. The MH treatment of tumor xenografts significantly (P≤0.05) reduced tumor volumes. This means that different therapeutic outcomes of hyperthermia are related to the different responses cells mount to thermal stress. In particular, intratumoral MH is a valuable tool for the treatment of pancreatic cancers.
Introduction
With the aim to circumvent the harmful side effects related to most conventional oncologic therapeutic modalities and increase the survival rate after cancer diagnosis, intensive research is being performed to assess the therapeutic potential of raising the tumor temperature (ie, hyperthermia) in order to kill proliferating cancer cells. [1][2][3] In this context, different means to induce hyperthermia have been suggested. Thus, water bath, infrared radiation, focused ultrasound, and micro-or radiowaves are some examples of heat sources placed outside the body (external heating sources). 4,5 Additionally, the progress of nanotechnology has brought a minimally-invasive approach based on the use of biocompatible 6 iron oxide magnetic nanoparticles (MNPs) as heating mediators when subjected to alternating magnetic fields (AMFs). Under these conditions, heat dissipation occurs due to magnetization reversal processes of MNP magnetic moments. [7][8][9][10] Being easily internalized into cells by endocytotic mechanisms, 11 MNPs become efficient local heaters distributed along the cytoplasm. 12 The thermal stress induced by MNPs tightly depends on the nanoparticle load and their heating efficiency. 13 The heat dose applied to cancer cells is known to exhibit a distinct impact on cellular functions, in particular in relation to DNA stability, 14 protein conformation, 15 and/or expression. 16 All these molecular alterations manifest themselves in the cell viability. 17 Furthermore, the formation of reactive oxygen species (ROS) is known to be induced via hyperthermia. 18 ROS can induce apoptosis. [19][20][21] Moreover, the cytotoxic effects of hyperthermia might differ among different tumor cells. 22,23 Besides the promising prospective of hyperthermia, in clinical studies, the therapeutic outcome was revealed to be rather heterogeneous. 5,24 Whereas the extrinsic heating (EH) of organs or tissues benefits tumor regression when their temperature is maintained between 40°C and 42°C for short times, 17 magnetic heating (MH) reveals that their heat dissipation may efficiently reduce cell viability. 25 Therefore, in the present study, we sought to compare the cellular responses caused by distinct heat generation modalities such as by an internal heat source represented by MNPs deposited in the target region (ie, tumor) and exposure to an AMF (magnetic hyperthermia, MH) or via an external heat source (EH) applied from outside the body (such as hot air). Both modalities are expected to exert different effects, 26 which are still poorly understood. We analyzed the "anti-cancer" effects of both hyperthermia modalities in terms of cell viability, apoptosis induction, the formation of ROS, and the expression of proliferation markers, which are expressed in different cell cycle phases, for example, Ki-67 is present in all phases of the cell cycle but not in the resting one (G 0 ); topoisomerase 2-α (TOP2A) controls the topologic states during DNA transcription and its gene was shown to be amplified in cancer cells; 27 and the expression of TPX2 (a microtubule-associated protein) in cancer cells is associated with vessel invasion and metastasis. 28 Additionally, we examined the transferability of the cellular effects of MH observed in vitro to the in vivo conditions.
Material and methods
Cell culture
PANC-1 and BxPC-3 cells (human pancreatic adenocarcinoma) were cultivated at 37°C and 5% CO2 using DMEM and RPMI 1640 media (Thermo Fisher Scientific, Waltham, MA, USA).
Magnetic nanoparticles
Superparamagnetic iron oxide nanoparticles (MF66) were obtained from Liquids Research Limited (Bangor, Gwynedd, UK). MNP synthesis was performed by the co-precipitation technique as described elsewhere. 29 The average core size of the MNPs was 12±3 nm, the hydrodynamic diameter 85 nm, and the specific absorption rate 900 W/g Fe (corresponding to an intrinsic loss power of 8.7 nH·m²·kg⁻¹). The ζ-potential was −41 mV. 29
In vitro extrinsic hyperthermia (EH)
BxPC-3 cells were incubated for 60 minutes at different temperatures (41°C, 43°C, and 47°C). Temperatures were monitored using a fiber optic temperature probe and thermometer (TS5 & FOTEMPMK-19; Optocon AG, Dresden, Germany). MNP-untreated cells at 37°C were used as controls. The recorded temperature data were used to calculate the thermal dose as cumulative equivalent minutes above 43°C (CEM43) as described elsewhere. 30
In vitro (intrinsic) magnetic hyperthermia
PANC-1 and BxPC-3 cells were incubated with MNP (100 µg/mL) for 24 hours at 37°C. Afterwards, cells were exposed to an AMF (H=23.9 kA/m, f=410 kHz) for 60 minutes so as to induce temperatures of 41°C, 43°C, or 47°C. Temperatures were monitored and cells were processed for further analysis as described earlier. Cellular MNP uptake at 24 hours after incubation was quantified by atomic absorption spectroscopy (AAS 5 FL; Analytik Jena AG, Jena, Germany) as described elsewhere and was found to be 22±6 pg Fe/cell (PANC-1) and 67±17 pg Fe/cell (BxPC-3), respectively, in a prototype experiment. 29
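The CEM43 thermal dose referenced above is commonly computed with the Sapareto–Dewey formulation; a minimal sketch is given below. The breakpoint constants (R = 0.25 below 43 °C, R = 0.5 at or above 43 °C) are the conventional values, and the 1 s sampling interval of the fiber optic probe is an assumption.

```python
# Minimal sketch of the cumulative-equivalent-minutes calculation referenced above
# (CEM43, Sapareto-Dewey). R = 0.25 below 43 C and 0.5 at or above 43 C are the
# commonly used constants; the probe is assumed to sample once per second.
import numpy as np

def cem43(temps_celsius, sample_interval_s: float = 1.0) -> float:
    """Thermal dose in equivalent minutes at 43 C for a temperature-time series."""
    temps = np.asarray(temps_celsius, dtype=float)
    dt_min = sample_interval_s / 60.0
    r = np.where(temps >= 43.0, 0.5, 0.25)
    return float(np.sum(dt_min * r ** (43.0 - temps)))

# Example: 60 min held at a constant 41, 43 or 47 C (as in the EH/MH exposures).
for t in (41.0, 43.0, 47.0):
    print(f"{t:.0f} C for 60 min -> CEM43 = {cem43(np.full(3600, t)):.1f} min")
```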
Determination of cell viability
To determine the amounts of vital, apoptotic, and necrotic PANC-1 and BxPC-3 cells after the hyperthermia treatments (ie, 24 or 48 hours after MH or EH), cells were washed with HBSS and incubated with the CellEvent® Caspase-3/7 Green Flow Cytometry Assay Kit (Molecular Probes®) in accordance with the manufacturer's protocol. Cell viability was analyzed using a FACSCalibur (BD Bioscience, San Jose, CA, USA; λexc=488 nm, λem=533 nm/647 nm). Cells declared as vital were stained by neither dye (neither CellEvent Caspase-3/7 Green nor SYTOX AADvanced dead cell stain), early apoptotic cells were positively stained only with CellEvent® Caspase-3/7 Green, late apoptotic cells were positively stained with both CellEvent® Caspase-3/7 Green and SYTOX® AADvanced™ dead cell stain, and necrotic cells were positively stained only with SYTOX® AADvanced™.
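The two-channel gating logic described above can be summarized schematically as in the sketch below. The thresholds and event values are illustrative assumptions, not the gates actually applied on the FACSCalibur; only the decision table mirrors the text.

```python
# Schematic sketch of the two-channel classification described above. The event
# values and thresholds are illustrative assumptions, not the gates actually set
# on the FACSCalibur; only the decision logic mirrors the text.
def classify_event(caspase37_green: float, sytox_aadvanced: float,
                   caspase_threshold: float = 1e3, sytox_threshold: float = 1e3) -> str:
    caspase_pos = caspase37_green > caspase_threshold
    sytox_pos = sytox_aadvanced > sytox_threshold
    if caspase_pos and sytox_pos:
        return "late apoptotic"     # both dyes positive
    if caspase_pos:
        return "early apoptotic"    # caspase-3/7 reporter only
    if sytox_pos:
        return "necrotic"           # membrane-compromised without caspase signal
    return "vital"                  # unstained by either dye

events = [(200, 150), (5e3, 300), (8e3, 4e3), (100, 6e3)]
print([classify_event(g, s) for g, s in events])
```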
Western blot analysis
PANC-1 cells were harvested after treatment, washed, and lysed with RIPA lysis buffer containing protease inhibitors (Hoffman-La Roche Ltd., Basel, Switzerland). Protein concentration was determined using the Bradford protein assay. Protein extracts were separated using sodium dodecyl sulfate polyacrylamide gel electrophoresis. Membranes were blocked and incubated with primary Bax (mouse monoclonal,
Proliferation marker expression after hyperthermia
Total RNA was extracted and reverse RNA transcription was performed. For quantitative real-time reverse transcription polymerase chain reaction analysis of Ki-67, TOP2A, TPX2, and β-2-microglobulin (B2M; reference gene) transcript expression, splice junction crossing primers were designed. 31 The relative expression of genes of interest was calculated by normalization of the mRNA expression after treatment to that of the reference gene B2M and by subtraction of the relative mRNA expression of non-treated cells.
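One common way to implement the normalization-and-subtraction described above is the ΔΔCt convention, sketched below. Whether the study reports the ΔΔCt difference itself or the fold change 2^(−ΔΔCt) is not stated in this excerpt, so the helper returns both; the Ct values in the example are made up.

```python
# Sketch of the normalization-and-subtraction step described above, written in the
# common delta-delta-Ct convention. Whether the study reports the ddCt difference
# itself or the fold change 2**(-ddCt) is not stated here; both are returned.
def relative_expression(ct_gene_treated: float, ct_b2m_treated: float,
                        ct_gene_control: float, ct_b2m_control: float):
    delta_ct_treated = ct_gene_treated - ct_b2m_treated    # normalize to B2M
    delta_ct_control = ct_gene_control - ct_b2m_control
    delta_delta_ct = delta_ct_treated - delta_ct_control   # subtract non-treated cells
    return delta_delta_ct, 2.0 ** (-delta_delta_ct)

# Example: Ki-67 after magnetic heating vs non-treated cells (made-up Ct values).
ddct, fold = relative_expression(26.5, 18.0, 24.0, 18.2)
print(f"ddCt = {ddct:.2f}, fold change = {fold:.2f}")
```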
In vivo magnetic hyperthermia of xenograft models
All experiments were conducted according to national and EU guidelines on the ethical use of animals and were approved by the regional animal care committee (Freistaat Thüringen, Landesamt für Verbraucherschutz). Animals were anesthetized with isoflurane. PANC-1 cells were implanted subcutaneously into female athymic nude mice (Hsd:Athymic Nude-Foxn1nu; Harlan Laboratories, Venray, the Netherlands). Experiments were started when tumors reached volumes between 60 and 470 mm³. Animals were divided into four independent treatment groups: group 1 animals (animals that underwent treatment) were treated with MH (intratumoral injection of 0.2 mg Fe in a 100 mm³ volume of colloidal MNP dispersion and application of AMF for 60 minutes twice, with a time interval of 6 days; parameters of the magnetic field: H=15.4 kA/m, f=435 kHz); group 2 (MNP control group) received an intratumoral injection of 0.2 mg Fe in a 100 mm³ volume of colloidal MNP dispersion (control of MNP influence); group 3 (AMF control group; exposure to AMF two times for 60 minutes each with a time interval of 6 days) and group 4 (tumor control group) received an intratumoral injection of ddH2O in a volume of 100 mm³. Group 3 (AMF control group) was additionally exposed to AMF to assess the impact of AMF on tumor growth. Treatment of PANC-1 xenografts with MH, monitoring of temperature, and calculation of the thermal dose CEM43T90 covering at least 90% (T90) of the tumor surface were performed as described elsewhere. 29 High temperatures were applied during MH so that even the outermost tumor regions were exposed to temperatures of 43°C. To determine treatment outcome, tumor volume (caliper) and body weight were analyzed every 3-4 days. Blood counts of MH-treated animals and controls were analyzed as described elsewhere. 29
Statistics
In order to assess the heating effects (ie, temperature, time, therapeutic modality; see Figure 1), the corresponding groups (cell samples or animals) were compared using the Mann-Whitney U-test as indicated in the figures. P-values ≤0.05-0.001 were considered to be significantly different.
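The group comparison named in the statistics paragraph can be illustrated with scipy.stats, as in the sketch below; the example values are invented and only demonstrate the call, not the study's measurements.

```python
# Sketch of the group comparison named above: a two-sided Mann-Whitney U-test
# between a treated group and its control. Uses scipy.stats; the example values
# are made up and only illustrate the call, not the study's measurements.
from scipy.stats import mannwhitneyu

mh_treated_volumes = [0.4, 0.5, 0.3, 0.6, 0.45]      # relative tumor volumes (illustrative)
tumor_control_volumes = [1.8, 2.1, 1.6, 2.4, 1.9]

stat, p_value = mannwhitneyu(mh_treated_volumes, tumor_control_volumes,
                             alternative="two-sided")
print(f"U = {stat}, P = {p_value:.4f}")   # P <= 0.05 would be reported as significant
```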
Impact of hyperthermia on cell viability and apoptosis induction
Cell viability
As shown for BxPC-3 cells, the heat dose supplied by MH distinctly decreased cell viability at temperatures of 43°C or higher (Figure 1, upper and lower left). The induction of necrotic cells was particularly prominent. BxPC-3 cells responded to EH only at a temperature of 47°C (late apoptosis and sparse amount of necrotic cells, Figure 1, upper and lower right). For the corresponding reference temperature, the exposure to MH or EH led to comparable heat doses as revealed by calculations of CEM43 (Table 1).
Protein expression
Western blot analysis revealed the potential of MH to induce cell death in the investigated pancreatic cells after these cells were exposed to the highest temperature. The expression of the protein cPARP was prominent (highest temperature, 48 hours post-treatment), whereas HSP70 was not (Figure S1A).
Impact of hyperthermia on the formation of ROS
MH exhibited a marked impact on the cellular ROS content of PANC-1 cells. ROS formation increased with increasing temperatures compared to the non-treated cells ( Figure 2B). Exposure of PANC-1 cells to MNP alone (100 µg/mL) slightly increased the formation of ROS. However, EH treatment of PANC-1 had almost no effect on the production of ROS ( Figure 2B). Regarding BxPC-3 cells, MH led to a similar response in the formation of ROS as observed for PANC-1 cells ( Figure S3B). In contrast, the exposure of these cells to EH led to ROS production only at a temperature of 47°C ( Figure S2B).
Impact of hyperthermia on the expression of proliferation markers
Analysis of the mRNA expression of the proliferation markers Ki-67, TOP2A, and TPX2 revealed a marked reduction (Figure 2C, upper left). MNP incubation (100 µg/mL) exerted a comparably lower impact. The highest impact on mRNA expression was observed for MH-treated cells at a temperature of 43°C. In contrast, PANC-1 cells exposed to EH showed an increased mRNA expression of Ki-67, TOP2A, and TPX2 (Figure 2C).
Similar to PANC-1 cells, MH treatment of BxPC-3 cells or MNP controls (100 µg/mL) revealed a significant (P≤0.05) reduction in the mRNA expression of the proliferation markers Ki-67, TOP2A, and TPX2 (Figure S1C, 24 hours post-treatment). MH at a temperature of 43°C showed the highest impact. At later post-exposure times to MH (or MF66 MNP alone), mRNA expression was almost comparable to the non-treated cells (Figure S2C, lower left). In contrast to MH exposure, EH treatment of BxPC-3 cells resulted in an increased mRNA expression of all three markers at 24 hours post-treatment (Figure S1C, upper right). After 48 hours, the expression levels were almost comparable to the non-treated cells (Figure S1C, lower right); however, EH at a temperature of 43°C still distinctly affected TOP2A expression.
In vivo potential of magnetic hyperthermia in pancreatic cancer therapy using PANC-1 xenografts
MH treatment of PANC-1 tumor-bearing animals (thermal dose CEM43T90: 137±118 min) significantly reduced (P≤0.05) tumor volumes from day 8 to day 28 (Figure 3A). At the end of the observation period, animals displayed a distinctly lower relative tumor volume compared to non-treated animals. Intratumoral injection of MNP in combination with AMF led to the application of comparable heat doses achieving similar hyperthermic temperatures. In most cases, a smaller tumor area was affected when subjected to MH treatment for the second time compared to the first time. The first MH treatment often led to variation of the MNP location inside the tumor tissue and/or the loss of MNP with respect to the initial intratumoral injection of MNP. Figure 3B shows the local changes on the surface of tumor xenografts that underwent treatment, such as the formation of eschars, followed by the development of emarginations, the loss of tumor tissue, and the formation of scar tissue. Furthermore, intratumoral MNP injection resulted in an inhomogeneous MNP distribution within the tumor, thus resulting in the formation of local heat spots and temperature gradients within the AMF (Figure 3C). Contrary to the MH treatment, the presence of MNP or the application of AMF (controls) did not result in any detectable temperature increase at the tumor surface of the xenograft models (Figure 3C). Thus, MH treatment progressively and continuously decreased tumor volumes as seen in Figure 3A, whereas the tumor volumes of the control animal groups increased as expected. Interestingly, the presence of MNP attenuated the increase in tumor volumes compared to the AMF and tumor control groups.
Hematological parameters such as the number of white and red blood cells and the hemoglobin concentration of animals exposed to MH displayed no differences compared to the control groups (Figure S2A). The white blood cell count exhibited high variability between the different animal groups. However, the overall hematological parameters of the animal groups were not far from the blood counts reported by the supplier of the animal models (black line in Figure S1A). Furthermore, the body weight of the mice remained stable and was, therefore, not affected by either treatment (Figure S2B). Finally, it is worth mentioning that intratumoral injection of MNP and subsequent MH treatment did not lead to the accumulation of MNP in other organs and/or tissues of group 1 animals 28 days after the first application of AMF (Figure S3).
Discussion
In general, our in vitro data revealed the superiority of MH over EH in killing cancer cells and inhibiting tumor cell proliferation. Intracellular MH mainly increased the number of cells in necrosis. MH triggered ROS formation and reduced the expression of the proliferation markers Ki-67, TOP2A, and TPX2 at the mRNA level. EH prominently increased the number of cells in apoptosis; ROS formation was less pronounced, and the expression of the proliferation markers and cell-protective proteins (HSP70) increased in some cases. Despite the differences between both hyperthermia modalities (MH and EH), we also detected distinct responses depending on the cell line. The MH treatment of tumor-bearing xenografts inhibited tumor growth compared to untreated animals.
In particular, the strong effect of MH treatment on the induction of cell necrosis and ROS formation may be related to the large heat doses in the intracellular MNP surroundings. 32 The fact that a decreased cell viability for different cell lines after MH was reported, though no rise in temperature of the surrounding medium could be detected in other studies, 25 further reinforces our observations. The by-passing of thermotolerance induction due to the inhibition of HSP expression as well as increased cellular stress due to MNP rotation within the AMF could be two further explanations for the higher impact of MH exposure with respect to EH.
The fact that no distinct signs of apoptosis (eg, cell fraction analysis, expression of increased Bax and decreased Bcl-xL and pro-caspases 3 and 8, no cPARP and HSP70) were observed on protein expression after MH in both cell lines is an indication that the impact of the produced intracellular temperatures resulted in an immediate destruction of cells (cell necrosis). This is in agreement with our data from cell population analysis. The effect of EH is comparatively lower since cells do die in a retarded manner via the induction of metabolically programmed process of apoptosis, in particular at higher temperatures. Consequently, this means that MH is superior to EH in inactivating tumor cells. The in vivo situation, particularly the MH treatment, shows that the induction of necrosis or apoptosis depends on the degree of internalization of the nanoparticles after their application to the tumor area and the formation of intracellular heating spots capable of triggering cellular death as our in vitro results show. The fact that magnetic hyperthermia of tumors decreases cell viability (in terms of proliferation) has been emphasized by a decreased presence of cells expressing Ki67. 29 Hence, these complex relationships indicate that the hyperthermia treatment modality, the heat dose, and the phenotype of the cancer cell line determine the regulation of molecular processes associated with cell death.
Interestingly, there was a tendency detectable for a higher sensitivity of PANC-1 compared to BxPC-3 to MH, but not for EH. This means that MH might well have a higher impact on cell-specific differences in metabolism compared to EH. In particular, the pattern of protein expression involved in apoptosis (Bax, Bcl-xL, pro-caspases 3 and 8, PARP; see below) was very different in both cell lines and depended upon whether MH or EH was applied. The same holds true for the expression of Ki-67, TOP2A, and TPX2. Different thermosensitivity could also be due to alterations in the expression of p53 and pRb as well as differential expression of HSP70 and HSP90. [33][34][35] This means that the response of tumor cells to thermal stress is cell-line dependent as well as from the therapeutic strategy. From a clinical translation perspective, there will be a need to characterize the tumor entities for their therapeutic ability of MH. Our data show that the concept of precision medicine seems to be valid for nanotherapies too.
Exposure to MH treatment dramatically increased the formation of cellular ROS in both cell lines, whereas EH treatment showed almost no impact (up to a temperature of 43°C). Therefore, we can assume that the ROS-based response to hyperthermic temperatures does not depend on the cell line. ROS can severely damage cellular structures like DNA, proteins, lipids, and cofactors of enzymes due to oxidation and, therefore, induce apoptosis. Moreover, it is known that high levels of oxidative stress can destroy tumor cells or at least arrest tumor growth. 36 In this context, the increased amount of ROS with increasing temperatures correlated well with the induction of apoptosis and the viability of the cells after either treatment.
Interestingly, the exposure of cells to bare MNP also led to an increased formation of ROS. One reason could be that partial degradation of internalized MNP is known to take place in endolysosomes. The iron released from the particles is then exposed to the cell and can participate in the Fenton reaction, leading to the generation of ROS. 37 Accordingly, ROS formation after MH is, at least in part, a consequence of the presence and degradation of the MNP within the cells. It is conceivable that MNP-associated ROS production acts as a secondary stress factor during and after MH treatment.
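For reference, the iron-catalyzed chemistry invoked here is the classical Fenton cycle. The reactions below are textbook equations rather than measurements from this study, and they assume that the released iron is present as Fe2+/Fe3+ ions:

```latex
\begin{align}
\mathrm{Fe^{2+} + H_2O_2} &\rightarrow \mathrm{Fe^{3+} + OH^- + {}^{\bullet}OH} \\
\mathrm{Fe^{3+} + H_2O_2} &\rightarrow \mathrm{Fe^{2+} + {}^{\bullet}OOH + H^+}
\end{align}
```

The hydroxyl radical generated in the first step is the highly reactive species most commonly linked to oxidative damage of DNA, proteins, and lipids.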
Furthermore, MH led to a reduction in the mRNA expression of the proliferation markers Ki-67, TOP2A, and TPX2 compared to the non-treated cells. This indicates that a higher percentage of cells in the population corresponds to the resting (G0) phase of the cell cycle (Ki-67); moreover, DNA-transcription activity (TOP2A) as well as vessel invasion potential and metastasis (TPX2) are reduced. This observation underlines the effects of MH on the tumor cells. In contrast, EH mainly increased marker expression. This finding might be associated with hormesis, that is, the adaptive response of cells to the comparatively "moderate" heat stress induced by EH. It has been shown that the molecular mechanisms involved in hormesis include the increased production of cytoprotective and restorative proteins and protein chaperones. 38 In this view, one could also explain the expression of the antagonistic proteins regulating apoptosis (Bax and Bcl-xL) in response to EH (ie, at higher temperatures). In consequence, the impact of EH on the expression of the tested proliferation markers is less prominent than that of MH.
The differential expression pattern of Ki-67, TOP2A, and TPX2 in both cell lines (considering MH vs EH) indicates that the short-term response in the expression of the investigated proliferation markers does not seem to be cell-line specific, whereas the period of time over which the impact of hyperthermia remains detectable is a rather cell-specific response. Hence, in view of clinical translation, hyperthermia treatment of tumors derived from the same organ could result in slightly increased or reduced effectiveness of the respective treatment due to the metabolic heterogeneity of the tumor cells. As a result, tumor cells of the same entity exhibit a different response in terms of proliferation marker expression depending on the heating modality, the sensitivity to heat, and the time after therapy.
The in vivo treatment of tumors via MH using a heating dose of 137±118 min CEM43T90 led to a significant (P≤0.05) reduction in tumor volume and inhibition of tumor growth compared to animals treated with AMF or MNP alone, or left untreated. Earlier studies reported a significant inhibition of tumor growth and prolonged survival of xenografts bearing murine pancreatic cancer cells (MPC-83) after treatment with MH using temperatures between 47°C and 51°C for up to 30 minutes. 39 Despite the much lower magnetic material dosing (0.2 mg Fe vs 49.6 mg Fe) used in the present study, the treatment of tumors with physiological amounts of MNP still led to a marked impact on tumor growth. 39 The reason why the presence of MNP per se led to a reduction in tumor volumes in comparison to the untreated controls remains speculative. Our in vitro data showed that the MNPs were not cytotoxic to tumor cells. On the other hand, it is conceivable that the presence of MNP in the extracellular tumor compartments impaired nutrient supply, resulting in retarded tumor cell proliferation.
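As an illustration of how a thermal dose such as CEM43T90 is typically derived, the sketch below implements the standard Sapareto-Dewey cumulative-equivalent-minutes formula. The temperature trace, the sampling interval, and the use of T90 (the temperature exceeded by 90% of intratumoral measurement points) are hypothetical assumptions for illustration; the paper's actual dosimetry pipeline is not described in this excerpt.

```python
import numpy as np

def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 degC (Sapareto-Dewey formulation).

    temps_c: sequence of temperatures (degC) sampled every dt_min minutes.
    R = 0.5 for T >= 43 degC and R = 0.25 for T < 43 degC (standard convention).
    """
    temps = np.asarray(temps_c, dtype=float)
    r = np.where(temps >= 43.0, 0.5, 0.25)
    return float(np.sum(dt_min * r ** (43.0 - temps)))

# Hypothetical example: T90 trace of a 30-minute MH session sampled each minute.
t90_trace = [41.0] * 5 + [44.0] * 20 + [42.0] * 5   # degC
print(f"CEM43T90 ~= {cem43(t90_trace, dt_min=1.0):.1f} min")
```

The large spread reported in the study (137±118 min) is plausible with such a formula, since small temperature differences change the equivalent dose exponentially.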
MH treatment of PANC-1 xenografts led to a reduction in tumor volume. BxPC-3 xenografts treated in the same manner showed regrowth in tumor volume starting at day 14 after the first MH treatment. 29 These differences in tumor regrowth were attributable to the lower magnetic material dosing in BxPC-3 xenografts (0.1 mg Fe per 100 mm³ tumor volume, combined with a high interstitial fluid pressure within the tumors) compared to PANC-1 xenografts (0.2 mg Fe per 100 mm³), as well as to the higher proliferative potential of BxPC-3 cells, indicated by a tumor growth rate of 8.3%±3.4%/day in untreated animals compared to untreated PANC-1 xenografts (4.9%±4.0%/day). Furthermore, a lower thermal dose was applied to BxPC-3 xenografts (CEM43T90: 1±1) compared to PANC-1 xenografts (CEM43T90: 137±118). Nevertheless, a clear tendency toward inhibition of tumor growth caused by MH was visible. Treating BxPC-3 xenografts with magnetic material and heat dosages as high as those applied to PANC-1 xenografts would very likely have resulted in a complete inhibition of tumor relapse, at least during the observation period of 28 days. Nevertheless, as observed in the in vitro experiments, the varying metabolism of cancer cells and individual tumors could slightly influence the outcome of hyperthermia treatment when using MH. In view of the aforementioned results, we also expect that in vivo treatment of tumors by EH exposure would lead to a lower therapeutic outcome than MH. In addition, neither MNP nor AMF exposure alone led to significant anticancer effects on tumor volume growth.
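To put the quoted growth rates in perspective, a back-of-the-envelope conversion to volume doubling time can be made under the assumption of exponential growth; this assumption is for illustration only and is not stated in the paper:

```latex
V(t) = V_0\, e^{kt}, \qquad k = \ln(1+g), \qquad
T_d = \frac{\ln 2}{k}
\;\Rightarrow\;
T_d \approx \frac{0.693}{\ln(1.083)} \approx 8.7\ \text{days (BxPC-3)}, \quad
T_d \approx \frac{0.693}{\ln(1.049)} \approx 14.5\ \text{days (PANC-1)},
```

where g is the daily fractional growth rate (8.3%/day and 4.9%/day, respectively). Under this reading, untreated BxPC-3 tumors double roughly twice as fast as untreated PANC-1 tumors, consistent with the earlier regrowth observed after treatment.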
No damage to organs or interference with their functions was observed after in vivo treatment of tumors via MH. The fact that MH, MNP, or AMF exposure did not influence the blood count or the body weight of the xenograft models further demonstrates the high biocompatibility of MH and the employed MNP. In consequence, our in vivo results provide further evidence that the MH therapeutic modality is highly biocompatible.
Potential limitations of this study are that the observed effects of MH and EH are based not on in situ studies of isolated tumors but on pancreatic tumor cells in culture. Moreover, our data on proliferation markers are based on mRNA expression levels; owing to post-transcriptional and post-translational regulation, expression at the protein level could differ.
Conclusion
In conclusion, our data showed that MH treatment was superior to EH in inducing thermal stress. MH exposure triggers cell death more aggressively (increased cell necrosis) and is more efficient in producing ROS and in reducing the expression of the proliferation markers Ki-67, TOP2A, and TPX2 at the mRNA level compared to EH. Additionally, the presence of MNP in tumor cells might secondarily promote the efficiency of MH, because their core degradation favors oxidative stress reactions (Fenton reactions). In most cases, there was an increased degree of complexity in the variability of the hyperthermia therapeutic outcome when considering the applied temperature, the thermal sensitivity of the tumor cells, and the temporal response after treatment. With consideration of the clinical situation, these findings support the preference for inducing intracellular temperatures via MH, together with the necessity to identify those pancreatic tumor entities with a high ability to respond to MH at defined temperatures, as demonstrated by the different sensitivity of the cell lines used. In this context, MH may also be subjected to the concept of precision medicine in order to optimize its therapeutic efficacy. | 2018-04-03T00:39:13.638Z | 2017-02-07T00:00:00.000 | {
"year": 2017,
"sha1": "b8fbbb4d4e6cf7bdc01bebf3939c03e3092ab1e1",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=34777",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "94981f6508fa0c07f8bd3a26671ccacef4157b3b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
49677559 | pes2o/s2orc | v3-fos-license | Managing Human Factors Related Risks: The Advanced Training Model in Dangerous Goods Transport on Roads
This paper studies the methodological essence of dangerous goods (DG) training courses for drivers and dangerous goods safety advisers (DGSA). The research aims to advance the existing teacher-centered course model in Estonia with learner-centered methods that best suit specific objectives and meet expected learning outcomes, as well as to improve the DG training model with the integrated use of interactive teaching methods. The paper presents a qualitative development research strategy based on studies of ADR regulations training courses in Estonia as well as on the analysis of teaching methods applied in the professional training of adults. The data is collected in two steps: firstly by administering questionnaires to consignors/consignees, freight forwarders, carrier companies and drivers, and secondly during in-depth interviews and a focus group meeting with DG regulations training provider companies. By implementing the methodology of qualitative comparison analysis (QCA), a combination of the most suitable teaching methods is identified. After the in-depth interviews and the focus group, these combinations are used as input for developing the existing course model with the integrated use of blended learning alternatives, where digital media meets traditional classroom methods. The results of this research contribute an interactive methodological approach to ADR regulations training courses that best meets trainees' expectations and fulfills the risk management aim. Keywords—DG training courses, teaching methods, qualitative comparison analysis, blended learning
Introduction
The transportation of DG by road always involves risks. If substances are mishandled, the risks of injury and property damage increase. From the perspective of road transport, this concerns primarily the main parties of a transportation chain, i.e., consignors/consignees and carrier companies (including drivers), but also freight forwarders and third parties. A transport containing DG can have an impact on the environment if an accident occurs, and such accidents often incur a higher cost to society than accidents involving non-dangerous goods. This is one reason why it is essential to focus on improving the efficiency and security of DG transport and on avoiding potential accidents [41].
Training courses for drivers and DGSA involved in dangerous goods transport (DGT) are based on the European Agreement concerning the International Carriage of Dangerous Goods by Road (i.e., ADR, Chapter 8.2) and the European Commission Directive (96/35/EC) on the appointment and qualification of Safety Advisers for the transport of dangerous goods by road, rail and inland waterways [43,45]. In addition to these documents, the Adult Education Act sets additional requirements for adult education in Estonia on a national level [31]. DG training courses have an essential impact on the human factors aspects that emerge during DG handling and transportation processes, as human factors are a crucial reason why accidents occur within a transportation chain.
The role of educational technology in teaching today has grown in importance due to the increasing amount of information and communication technologies [41]. When it comes to in-service training with a focus on practice, it is complicated to implement suitable interactive teaching methods and techniques effectively. In the scope of DGT by road, there is no doubt that adequate training of drivers and DGSA may affect the safety aspects of peculiar transportations, such as those of DG. Training may include not only regulations and technical and procedural elements, but also important psychophysical aspects such as how to manage fatigue [3,33].
The provider of training may differ according to national legislation. It can be the role of the employer (in the US and Canada) to ensure appropriate truck-driver training for the transportation of DG. In Sweden and the Netherlands, as well as in Estonia, a competent national authority must accredit training institutions or trainers and monitor the examination of truck drivers [20]. However, all training system approaches pursue the same goal: to ensure appropriate training and prevent the accidental release of DG during transportation. By implementing specific interactive teaching methods, a remarkable improvement in course participants' learning can be achieved. Moreover, operational risks related to human factors can be reduced within the entire DG transportation chain.
When considering an approach to instruction, teachers aspire to use methods that are most beneficial for all of their students. By using both approaches together, teacher-centered as well as student-centered, learners can experience the positives of both types of education. By implementing interactive teaching methods to support the existing teacher-centered ADR training course model in Estonia, a remarkable improvement in course participants' learning can be achieved. To implement the procedural approach, a designer has to fully understand the contents of the whole system, its structure, the principle of operation and its behaviour [21]. It becomes very difficult to describe complex systems using only procedural techniques. The reason lies in the nature of a modeled object, because any procedural model implies a one-sided, incomplete, and prejudiced glance at the original [27]. In the scope of this paper, the relation between the concepts of a training system, training model, training process and training requirements is visualized in Figure 1. In Estonia, ADR regulations training courses are mainly based on teacher-centered course design. This methodological approach is outdated, as the concept of the learner is changing rapidly. The problem discussed in the scope of this paper is part of a broader study and refers to an outdated methodological approach to carrying out DG training in Estonia, both for drivers and for safety advisers. Based on survey research conducted among representatives of different parties of the DG transportation chain in Estonia, the most suitable interactive teaching methods are studied. From the developed combinations of methods, advanced training course models are created with the implementation of blended learning elements. As this methodological approach is discussed by a focus group with DG training provider companies and a representative of the Estonian Road Administration, it finally represents a comprehensive training model that considers human factor risk-managing elements of all parties. The results present a ready-to-use ADR regulations training course model that could be implemented by DG training provider companies in Estonia in the coming years. All this will contribute to improved human risk management of DGT by road.
Literature review
The global trend of increasing traffic due to globalisation leads to a higher number of DGT [11]. Several studies have focused primarily on the critical analysis of ADR implementation concepts in European countries [Ibid.]. As regards performance indicators for the transport and handling of dangerous goods, the number of DGSA as well as the number of ADR training certificates are critical for controlling the performance of dangerous goods handling in a green transport corridor [36]. The chances and challenges that come with ADR ratification have been illustrated, and a concept with recommended procedures for training the people involved with DG was developed from an in-depth analysis and critique of current training methods.
The broader topic of blended learning within in-service training in general has been studied extensively. These studies focus mainly on training school teachers by implementing different types of blended (mixed) learning scenarios for information and communication technology (ICT) related subjects. When modeling practical scenarios based on a combination of different face-to-face interactive approaches (such as problem-based learning, collaborative and project-based approaches, and a diversity of e-learning activities and resources), it is fundamental to take into account the learner's previous experience and ICT skills. Better results in acquiring the content of the course are strongly related to the learner's previous experience with ICT [22].
Specific models, methods, and technologies have also been studied to support the training of drivers involved in the transport of DG [5]. The Italian-developed online training environment (TIP - Transport Integrated Platform) is addressed to operators in the transport sector and combines classroom-based training with online self-learning possibilities at a distance. The platform has been continuously upgraded with innovative tools and represents a component of a blended learning model where online digital media meets traditional classroom methods [5,39,40]. Implementing blended learning methodology within classes keeps students active, not allowing them to disconnect from the subject. This leads to a better attitude, improves learners' thinking and writing, and motivates them for further study and the development of new thinking skills [13,22].
Training on safety and DG topics is essential for risk and accident minimisation in the handling of DG and their transport. According to previous research on DGT awareness among the different parties of the transportation chain in Estonia, there is a lack of professional knowledge among personnel at the national level [17]. A comparative analysis of teaching methods in the ADR driver training courses of France, the Netherlands, and Estonia identified remarkable differences [18]. In Estonia, there is a significant lack of learning tools, and no ADR-based activities to support training courses and increase the proportion of practice are so far in use [Ibid.].
An efficient means of preventing human-related risk lies in staff training. In the following parts of this paper, the QCA methodology is implemented to analyse specific methods as cases over a set of relations and to assess their consistency. The existing teacher-centered DG training model will be complemented with a blended learning approach and evaluated within a focus group meeting to define its relevance to the management of human factors related risks when transporting DG by road.
Dangerous goods regulations training courses
As DG and their transport need special handling and attention due to their risk for the environment and the health of people, the training of any persons having to deal with those goods is essential for safe processing [15]. The common legal requirements (ADR) state in detail that drivers transporting DG (with small exceptions) shall undergo training in the form of a course approved by the competent authority. Under chapter 1.3 of the ADR, every employee who has to carry out duties related to DG regulations needs to be specially trained [1]. Other parties involved in operations with DG can be: the manufacturer or owner of DG, the owner of tank containers, persons carrying out forwarder duties, persons writing and preparing transport documents, persons working for the DG receiver, persons performing packaging procedures, filling personnel of tanks, vehicle drivers who do not need an ADR certificate, and persons carrying out carrier and vehicle owner duties [2,23].
The persons mentioned above often carry DGSA obligations, as they are involved in operations with DG in road transportation. A DGSA is a consultant or an owner or employee of an organisation appointed by a company that transports, loads or unloads DG in the European Union and other countries [38]. There is no specific classification regarding DGSA courses. However, ADR driver training courses can be classified according to two aspects. Figure 2 visualises the content of training programs and training courses, highlighting common and distinctive elements of ADR driver training courses. Firstly, training programs are identified by the level of the training program (initial or refresher training program), and secondly, training courses within programs are divided according to specificity (basic or specialisation training course). The minimum duration of the theoretical element of each initial training course, or part of the comprehensive training course, is set according to the common legal requirements. The overall length of the comprehensive training course may be determined by the competent authority, which shall maintain the duration of the basic training course and the specialisation training course for tanks, but may supplement it with shortened specialisation training courses for Class 1 (explosives) and Class 7 (radioactive materials) [25]. Refresher training has to be undertaken by drivers (as well as by DGSAs) at regular intervals of five years. As the form of a training program is defined only by compulsory topics and minimum learning hours (according to ADR), the methodological approach to conducting the training itself can be chosen freely [18].
Interactive teaching in adult training
Today's classrooms challenge the traditional, teacher-centered curriculum to meet the increasingly diverse needs of students and to make the required gains in achievement [7]. The fact that adult teaching differs to a great extent from the system in which students of various ages are schooled is felt in the assimilation of knowledge, in the means they put into practice, and in the conceptual understanding of the theories and models proposed in the course program. Moreover, when it comes to in-service training with a focus on practice, it is much more complicated to implement suitable interactive teaching methods and techniques efficiently.
Today, adult learning theories identify a series of characteristics that differentiate adult learners. These determine the teaching methods that will most successfully promote learning in an older population of students [37]. According to theories and practices of adult learning, these characteristics are as follows: 1. Selective learning - adults determine what is meaningful to them. 2. Self-directed learning - adults take responsibility for their education. 3. Previous knowledge and experience of adult learners. 4. Problem-centered approach - adults are interested in content that has a direct application to their lives. 5. Anxiety and low self-esteem due to possible negative previous experiences with school [14,16,32].
The impact of these characteristics on adult learning is not limited to the face-to-face classroom, as they also affect the way adult learners approach learning in the online environment [24]. These characteristics have to be considered when training personnel within ADR regulations training. From the perspective of the diversity of methods in use and the resulting attention to adult learners' peculiarities within ADR training courses, the United Kingdom can be highlighted as a best practice. The existing models of ADR-related training in the UK clearly differentiate learners by their category - drivers and DGSA. When registering for a training course, the learner can select among different approaches to studying. Depending on their preferences, traditional classroom learning, full or partial e-learning, as well as webinar-based learning options are possible. The training model of the existing ADR training provider companies in Estonia is uniform and methodologically outdated, as it takes into account neither learners' individual features nor their preferences.
The rapid development of ICT has facilitated an approach combining traditional face-to-face and technology-mediated learning environments, which is called "blended/hybrid learning." In the scope of this paper, the blended learning methodological approach, where digital media meets traditional classroom methods, is brought into focus as appropriate for starting the methodological development of the existing model of ADR regulations training courses in Estonia. In the following parts of this paper, an alternative, learner-centered training model is proposed for efficient ADR regulations training courses with the integrated use of interactive teaching methods.
Problem description
The primary purpose of teaching at any level of education is to bring about a fundamental change in the learner [42]. Due to the high risk of DG, learning before doing is a must in the context of ensuring safety. The implementation of ADR and the knowledge transfer concerning DG are complex.
The existing model of DG training courses in Estonia today is standard for all learners, without differentiating them into the categories of drivers and DGSA. Moreover, ADR regulations training courses are mainly based on teacher-centered course design, i.e., learning activity is performed during classroom lectures supported by slideshow presentations. ADR regulations training courses are mostly in-class and theoretical proceedings, even in cases where a practical component would be considered necessary, as with firefighting and first aid issues. In most cases, in-class training is accompanied by the use of books issued by the training companies, slide presentations and internal tests [18].
Today this methodological approach is outdated, as the concept of a learner and the learner's needs are changing rapidly. Moreover, the existing learning form does not support efficient risk management within a transportation chain that is becoming more complex due to the number of parties involved as well as additional risks concerning new DG and their danger characteristics. The methodological approach of professional training should be student-centered and focused on developing learner autonomy and independence by putting responsibility for the learning path in the hands of learners [12]. This approach ensures that, after completing the training course, a trainee can handle problems in practice independently, which is essential in the scope of DGT. The present paper aims to analyse and identify teaching methods suitable for integration into existing ADR professional training courses in Estonia, with the aim of increasing the proportion of practice and thereby minimising operational risks related to human factors in further studies.
Data collection and analysis
A research design is the set of methods and procedures used in collecting and analysing measures of the variables specified in the research problem [10]. The research problem defines the research design of this study, according to which the methodological approach of ADR regulations training courses in Estonia is outdated as the concept of a learner is changing rapidly. In the scope of this paper, primary data on learners' attitudes regarding the current format of courses is collected from all main parties who operate with DG on a daily basis, i.e., consignors/consignees, freight forwarders and carrier companies. Respondents were divided into clusters according to the type of ADR regulations training course aimed at them. Clustering was performed as follows: 1. CLUSTER 1 (truck drivers; ADR driver training course), 2. CLUSTER 2 (consignors/consignees, freight forwarders, carrier companies, other participants; ADR DGSA training course).
Truck drivers were separated from the carrier role to identify their preferences individually. The primary objective is to understand attitudes and preferences toward specific teaching methods by cluster. The essence of the specific methods in focus was explained to the respondents. A structured questionnaire with close-ended ordinal-scale questions was prepared as the main data collection form, in which respondents were asked to decide where they fit along a scale continuum regarding the use of a particular teaching method within ADR training classes.
By implementing the methodology of qualitative comparison analysis (QCA), combinations of suitable teaching methods are identified that are effective both in the scope of operational risk management and from the perspective of learners' needs and expectations. QCA is a means of analysing the causal contribution of different conditions (e.g., aspects of an intervention and the broader context) to an outcome of interest [28]. QCA starts with the documentation of the different configurations of conditions associated with each case of an observed outcome [29,34]. These are then subjected to a minimisation procedure that identifies the simplest set of conditions that can account for all the observed outcomes, as well as their absence. Results are typically represented in statements expressed in ordinary language or as Boolean algebra. According to formula (1), expressed in Boolean notation, a combination of condition A AND (*) condition B, OR (+) a combination of condition C AND (*) condition D, will lead to (→) an outcome E [Ibid.]: A*B + C*D → E. (1)
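As an illustration of the minimisation step described above, the sketch below derives, from a toy truth table, the simplest AND-combinations of conditions that are consistently associated with the outcome. The condition names (A-D), the example rows, and the crisp-set scoring are hypothetical and only mimic the logic of QCA; they are not the study's data, and a full QCA would additionally handle logical remainders and coverage.

```python
from itertools import combinations

CONDITIONS = "ABCD"
# Hypothetical crisp-set truth table: each row maps conditions A-D (1/0)
# to the observed outcome W (1 = effective methodological approach).
rows = [
    ({"A": 1, "B": 1, "C": 0, "D": 0}, 1),
    ({"A": 0, "B": 0, "C": 1, "D": 1}, 1),
    ({"A": 1, "B": 0, "C": 1, "D": 0}, 0),
    ({"A": 0, "B": 1, "C": 0, "D": 1}, 0),
]

def consistent_terms(rows, size):
    """AND-combinations of `size` present conditions that occur only in rows
    with a positive outcome (perfect consistency) and cover at least one row."""
    terms = []
    for combo in combinations(CONDITIONS, size):
        outcomes = [out for conds, out in rows if all(conds[c] for c in combo)]
        if outcomes and all(outcomes):
            terms.append("*".join(combo))
    return terms

# Prefer the simplest (shortest) sufficient terms, echoing Boolean minimisation.
solution = []
for size in range(1, len(CONDITIONS) + 1):
    solution = consistent_terms(rows, size)
    if solution:
        break
print(" + ".join(solution), "-> W")  # prints: A*B + C*D -> W
```

For this toy table no single condition is sufficient on its own, so the search stops at the two-condition terms A*B and C*D, which is exactly the kind of solution statement formula (1) expresses.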
The paper presents a qualitative development research strategy based on studies of ADR regulations training courses in Estonia as well as on the analysis of teaching methods applied in the professional training of adults, with the implementation of ICT possibilities to contribute to effective human factor risk management. Based on the results of the QCA analysis and in-depth interviews with DG training companies' representatives, preliminary models of training courses are developed for further validation during a focus group with selected experts in DG training. Focus group research involves an organised discussion with a selected group of individuals to gain information about their views and experiences on a topic [19]. Within this research stage, the initially developed training models for drivers and DGSA are in focus. The participants of a focus group influence each other through their answers to the ideas and contributions during the discussion, assessing the advanced training model with regard to human risk management.
Research design
In the process of developing research, a study can be broken down into three to four distinct stages: firstly, establishing the research type; secondly, naming the research strategy; and finally, determining the research design by defining specific methods and research procedures [10]. The research design refers to the overall strategy chosen to integrate the different components of the study in a coherent and logical way, thereby ensuring that the research problem is effectively addressed [Ibid.].
The research problem defines the research design of this study, according to which the existing course model in Estonia is teacher-centered and the role of interactive teaching methods within ADR regulations training courses is underestimated by trainees. In the scope of this paper, the research object is the existing model of ADR regulations training courses in Estonia, which is methodologically the same both for drivers and for DGSA. The research design for this study is built upon the principle of qualitative development research, as seen in Figure 3. The first step in the research design of this study involves identifying previous research on related topics and reviewing published empirical articles to diversify possible methods. At this stage, best practice is identified (the training models of the UK) and the results of previous studies on the example of Estonia [18] are brought together.
The second step presents a combined questionnaire survey on learners' attitudes and preferences concerning the methodological format of courses. The collected data is analysed by QCA. On this basis, the methodological approach to training is developed separately for ADR training courses for drivers and for DGSA.
Individual in-depth interviews with ADR training provider companies in the third stage of the research serve mainly as a data-collecting phase. According to information from the Estonian Road Administration, there are altogether five trainer companies that have a license to train drivers and one that prepares DGSA [25]. Based on the number of trainees per trainer in 2016, four trainer companies that provide ADR training for drivers were chosen for interviews. Regarding DGSA training, an interview with the single provider company was carried out (a share of 100%). The results of the interviews are structured using comparative analysis and commented on by contrasting them with best practice on the example of the UK training course models. A focus group with DG training provider companies and a representative of the Estonian Road Administration gives an objective assessment of the advanced ADR regulations training model that considers human factor risk-managing elements of all parties.
Learners' methodological approach
The data on learners' attitudes and preferences concerning the methodological format of courses was collected during the period from February 3 to May 3, 2017. The online survey was prepared using Google Forms both in Estonian and in Russian. The questionnaire was distributed via email invitations (60 companies that work with DG on a daily basis) and social media channels addressed directly to specialty-focused groups (e.g., Estonian truck drivers, with an estimated number of 1800 ADR-licensed drivers). Altogether 189 replies were gathered (CLUSTER 1 - 151 respondents, CLUSTER 2 - 38 respondents). In theory, the sample must represent the population as well as possible. The current sub-samples are not statistically representative enough to draw accurate conclusions concerning the population. The sub-samples were formed using a non-probability sampling technique, in which samples are gathered in a process that does not give all individuals in the population equal chances of being selected [4]. In the scope of this study, the samples also qualify as purposive samples, where subjects are chosen to be part of the sample with a specific purpose in mind, namely that some subjects are more fit for the research than other individuals [Ibid.]. This is sufficient to draw objective conclusions concerning the methodological approach of ADR regulations training courses, but insufficient to give an accurate picture of the attitudes and preferences of all DG transportation chain participants in detail.
Within the structured questionnaire, the interactive teaching methods were first explained thoroughly and then proposed to be evaluated in contrast to the prevailing methodological approach of today - classroom lecturing with the support of a slideshow. These methods were selected for the study mainly based on the practice of other countries (i.e., France and the Netherlands). Table 1 and Table 2 present respondents' attitudes and preferences by cluster concerning the different methods that learners have experienced or are willing to undergo when taking ADR regulations training courses. Results are given as the number of respondents and the percentage share of the total cluster.
Source: Authors
By implementing the QCA methodology, the most suitable combinations of teaching methods were studied. As learners' operational risks within the DG transportation chain differ, as do their expectations toward training courses, two separate truth tables were formed. According to the methodological approach, categorical variables (conditions) were defined as follows: e-learning at a distance (A), peer-learning (B), practical tasks (C), solving case studies in groups (D), etc. As a result, combinations of conditions A-G were derived that would lead to the outcome. The effective methodological approaches (outcome W) for ADR regulations training courses for drivers (W1 for CLUSTER 1) and DGSAs (W2 for CLUSTER 2) in Estonia are expressed in Boolean notation below in the form of formulas (2) and (3).
The results underline that the methodological approach differs by learner category. The empirical results indicate that traditional lecturing with the support of slide presentations is still an adequate and suitable teaching method for driver training. For DGSA training, learner-centered interactive methods are expected to be implemented within classroom lessons, and individual theoretical learning is clearly outdated. Hence, the preferred interactive methods differ greatly on a national level. The well-implemented blended learning methodological approach on the example of Italy (TIP) is not suitable for Estonia's case according to the results of this study. This leads to the standpoint that trainees clearly underestimate the possible use of blended learning methodology at this point within ADR regulations training courses.
Advanced methodological approach
This chapter gives an overview of the results of the data collected during in-depth interviews with four ADR training provider companies for drivers and with the one company responsible for training DGSA in Estonia. As in-depth interviews are useful when the focus is on getting detailed information about a person's thoughts and behaviors, or when the aim is to explore new issues in depth [6], this method was chosen for collecting data within the third stage of the research. Table 3 summarises the essential findings of the interviews that are relevant input for improving the training models with the integrated use of interactive teaching methods and the implementation of blended learning. Results are presented in summarised tabular form, with the training provider companies' names hidden (named Trainer A, B, C, D for driver training companies and Trainer E for the DGSA training company), as the intention of the comparative analysis is not to compare companies or their services, but to identify opinions and views regarding the integration of ICT opportunities and interactive teaching methods into the existing ADR regulations training course system in Estonia.
The results of the individual interviews confirm that ADR regulations training courses in Estonia are primarily teacher-centered, since the only learner-centered method in regular use is discussion, according to the main findings presented in Table 3. However, several points indicate that training providers are interested in implementing new approaches to carrying out training courses, including with the support of ICT possibilities. At the moment, none of the interviewed trainers in Estonia take advantage of ICT opportunities within the ADR training course for drivers. On the other hand, implementing partial e-learning is considered a further development of the existing course model. Topics such as first aid and basic knowledge of the use of protective equipment can be presented in the form of e-learning in the near future.
Considering the results of the QCA of this study and the results of the in-depth interviews, a preliminary training model for ADR regulations training courses for drivers and DGSA with the implementation of interactive and e-teaching methods was developed. This was presented as an interim result of the research during a focus group meeting with ADR training provider companies and a representative of the Estonian Road Administration, in order to collect opinions on the relevance of the methodological approach in terms of applicability in training and the possible effect on managing human-related risks. Considering the remarks made by the focus group participants, the advanced training model for ADR regulations training courses for drivers and DGSA was developed as presented in Figure 4.
When developing the models for ADR-related training courses in Estonia, the following principles and additional remarks made by the focus group participants were taken into account: 1. Teaching methods make a difference with regard to human-related risk management. 2. The transition to a blended learning course model has to be slow and step-by-step, to take into account both trainers' possibilities and learners' readiness for a renewed approach to learning. 3. The DGSA trainee is a more independent learner than the trainee undergoing the ADR training course for drivers. Therefore, methods that support independent learning (e-learning opportunities) are included in the training model for DGSA (see Figure 4). 4. Due to personal learning habits and preferences, learners need different learning options. 5. Learners' ICT skills have to be considered. 6. During self-assessment as well as final assessment, the use of materials (Internet) should be allowed. Assessment has to be more integrated into the learning process, and learners will also take responsibility for it [35]. 7. Implementation of the advanced methodological approach to ADR-related training courses in Estonia should begin with DGSA training. The training course model finally developed within this research is considered ready to be implemented in practice for piloting. Herein, the opinions of all parties have been considered with regard to applying blended learning techniques in ADR regulations training courses. The developed blended learning training course model is considered a good starting point for piloting and for establishing specific conditions and metrics on its effect with regard to managing human-related risks when transporting DG by road. Simulating complex incidents and accidents with DG on roads may have a positive effect on managing risks, as drivers/DGSA may never face similar situations in practice, unlike the awareness of danger that is acquired through simulation. Similar simulations are in use for training for fire and medical emergency situations in the German chemical industry, for example. Firefighters can train their behavior on complex transport accidents with dangerous goods on motorways, railways, and country roads. Most firefighters are not called to such accidents very often in their daily work. Within virtual training spaces, it is possible to train staff behavior and coping with complex operations [31].
Conclusions
There are many prescriptions that need to be followed by the different parties within the DG transportation chain to ensure safe transport and handling operations as well as to minimise operational risks related to human factors. A change in the existing teaching practice regarding ADR training courses is necessary for many reasons. Due to the continuously increasing number of DG transports and the possible harm to the health of people and to the environment in general, it is essential that all parties involved are trained accordingly.
Educated and competent personnel are the critical factor that defines the competitiveness and efficiency of a system. For a competitive and efficient DG transportation chain, this translates into a minimised level of risks; hence it is essential that the personnel involved are capable of managing these risks properly when arranging or performing DGT. Due to the possible risks with high consequences and the fact that trainees are adults, the training of employees in the DG transportation chain has to be detailed and practical, giving the learner the opportunity to acquire the knowledge using different methods. The integration of ICT and the implementation of blended learning methodology within existing ADR regulations training courses were studied within this research.
Based on the collected and analysed data, as well as on the results in the form of the developed training course models, the conclusions are of a didactical and regulative nature. The didactical findings are directly related to the principles on which the improved training models are developed. The regulative conclusions refer to the overall ADR regulations training course system in Estonia. These are as follows: 1. The trainers' qualification requirements are questionable - review and, if necessary, change the conditions. 2. The trainers' knowledge of the methods used is insufficient. 3. The control system for trainees has to be improved.
The conclusions presented above on regulative issues of the ADR regulations training course system raise further questions that need attention at the national level. Further research related to this issue will focus on testing the improved ADR regulations training course models in practice. | 2018-07-04T13:02:13.715Z | 2018-06-20T00:00:00.000 | {
"year": 2018,
"sha1": "5a458afb31130ea2716dd68e9eedc3d8ab795a16",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3991/ijep.v8i4.8150",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "79d7d39fe9881d56635bc16c43d591404df8af23",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Education"
],
"extfieldsofstudy": [
"Computer Science",
"Business"
]
} |
58887745 | pes2o/s2orc | v3-fos-license | Arrangement and planning of developer’s activities in the course of construction megaprojects implementation
The paper deals with the specific features of megaprojects. Problems related to real estate reproduction in the course of construction megaprojects implementation were analyzed. Considering the specifics of construction megaprojects, recommendations for their performance assessment were developed. Issues relating to development arrangement and planning at all management levels (strategic, tactical and operational) were considered. Recommendations provided in this paper can be used for the assessment of megaproject performance and formation of KPIs for the participants of the construction process.
Introduction
Topical issues of management in construction and in the course of construction project implementation are considered in modern scientific and business literature [1-6]. Considerable attention is paid to the specific features of implementing construction projects in different segments of real estate markets (hotel, retail, office facilities, etc.) and to the introduction of various innovations in the investment and construction area. At the same time, construction megaprojects, which are actively implemented in different countries and affect various economic segments, need to be considered and studied separately [7,8].
These projects have a number of peculiarities that allow to consider them as individual and important objects of scientific research.
When considering megaprojects as individual objects of research, special mention should go to the following specific features [9-11]: • High investment value (over 1 bln. USD).
• Considerable expenditures for infrastructure development.
• Engagement of many participants in project implementation (co-investors) and use of different funding models.
• Impact on socio-economic indicators of the city, region or country as a whole.
Megaprojects represent integrated target programs [16] that can include subprograms and projects amalgamating interests of several (or many) co-investors and, what is a critical aspect, state interests at different management levels (from federal up to regional and municipal).
The following programs can be classified as megaprojects: programs for the creation of special economic zones, territorial clusters and priority development territories [17]; projects for the comprehensive development of territories and urban development; the implementation of complex infrastructure projects such as the construction of toll roads, power facilities, etc.; as well as other large-scale projects involving public funding and private capital. Special focus should be placed on global projects, which are implemented with the participation of several countries [18].
When studying development megaprojects, particular attention should be paid to state participation in the implementation of such programs. Mechanisms and forms of state participation in such projects can vary significantly, ranging from the creation of a favorable mode of project management up to state participation in project financing.
It should be noted that construction activity is the basic component of megaprojects implementation, an important condition ensuring target programs competitiveness in general and performance of local projects of individual investors.
Conditions for the effective implementation of target programs are created via the process of real estate reproduction in its various forms. Moreover, the competitiveness and performance of a megaproject depend on the performance and quality of the reproduction processes within the framework of construction activities at all stages of the megaproject life cycle.
The major difficulties in the course of real estate reproduction within megaprojects are as follows: • Difficulties related to concept development and substantiation as well as investment assessment of megaprojects under conditions of risk and uncertainty.
• The need to follow the common goal and idea of megaproject implementation at all management levels.
• The need to consider interests of multiple participants of the investment process.
• Technical complexities related to megaprojects implementation (high capital costs, long project implementation periods, often unique engineering solutions, the need for comprehensive development of the territory, etc.).
• Implementation of projects under conditions of increased investment risks.
• Problems related to effective management of facilities at the operation stage under conditions of large-scale comprehensive development.
At the same time, the issue of arranging and planning construction activities, including the use of development tools as an effective form of project management, is a pressing one. The purpose of this paper is to develop a common approach to the assessment of megaproject performance, as well as recommendations for the arrangement and planning of the developer's activities at all levels (strategic, tactical and operational).
Methodology
Within the framework of this paper, the following methods were used in the course of development of a common approach to the assessment of construction megaproject performance, as well as recommendations for the arrangement and planning of developer's activities: a qualitative scientific analysis, comparison and generalization methods within the framework of a literature review; an economic analysis, economic and mathematical methods, optimization models and an expert evaluation method in the course of formation of the integrating indicator of construction megaprojects performance; a strategic analysis and management tools in the course of arrangement and planning of developer's activities.
The following assumptions should be taken into account when developing the approach to the assessment of megaproject performance:
1. Complexity in determining the factor space for the performance analysis at early stages of megaproject implementation and in carrying out a correct quantitative assessment of its impact on the final performance indicator.
2. The need to ensure interrelations at all levels of megaproject planning (strategic, tactical and operational).
3. Feasibility of megaproject decomposition, followed by separation of a construction megaproject and, perhaps, individual construction projects within the global target program. This will make it possible to link megaproject objectives to tasks at the level of construction, which in turn will provide for more efficient arrangement and planning of construction activities.
4. Program development should be considered the most efficient form of construction arrangement when planning the implementation of construction megaprojects.
The complexity in determining the factor space is due to the fact that the total performance of a megaproject depends on many factors that cannot be considered at an early stage of a project (for example, future costs for construction and infrastructure development, if they are assessed at the conceptual stage).
In addition, when determining performance indicators one should take into account requirements of all participants of the investment process: both private capital and the state at different levels of management.
Significant deviations of actual indicators from the planned ones in the course of megaproject implementation, in addition to uncertainty factors and incorrect risk assessment, are also associated with an inefficient system of planning, when global objectives of a megaproject are set in the early stages and due attention to operational and tactical tasks is not paid, i.e. when there is no clear understanding of how these objectives will be achieved.
The construction component within a megaproject plays a significant role, since it is responsible for the creation of the final product.Therefore, within a megaproject one should separately consider the construction megaproject, which includes individual construction projects (Figure 1).
In modern construction, there are different approaches to the implementation of construction projects: a contracting method, a project method and a development approach. The development approach, which presupposes an integrated center of investment process management at all stages of the life cycle, is the most suitable one, also in the course of megaproject implementation, because it allows megaproject objectives to be linked to tasks at all management levels.
Taking into account the prerequisites specified above, a common approach to megaproject performance assessment was formulated. It is the basic foundation for the arrangement and planning of the developer's activities.
Results
Within the framework of a megaproject, special mention should go to the following two main components of performance: megaproject performance from the perspective of the state (from now on state interests are also understood to be interests on the regional and local management levels), as well as performance for private capital.
Megaproject performance at the state level (G) is a function of multiple variables, as is performance for private capital (I). At the same time, both the state and private investors strive to maximize this performance (1). It should be noted that objectives and tasks at the state and private levels may clash with each other, so the performance maximization task is transformed into an optimization task: finding the combination of factors that ensures the optimal variant of megaproject implementation, which in turn can satisfy all participants.
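The expression referred to as (1) is not preserved in this extraction. A plausible reconstruction, consistent with the surrounding definitions (G depending on state-level factors and on I, and I depending on private-capital factors b_1…b_m), is the following sketch rather than the authors' exact formula; the factor symbols a_i are assumed names for the state-level factors:

```latex
G = f(a_1, a_2, \ldots, a_n, I) \rightarrow \max, \qquad
I = g(b_1, b_2, \ldots, b_m) \rightarrow \max . \tag{1}
```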
The list of the most essential factors to be considered in the optimization model can be developed both on the basis of the financial model of the megaproject and on the basis of expert estimates of the factors that cannot be subjected to quantitative assessment at the conceptual stage of the project.
The factors that determine performance for private capital can be assessed quantitatively. These are the conventional indicators of investment analysis: NPV, IRR, PP, DPP, and PI.
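To make the listed indicators concrete, the sketch below computes NPV, PI, simple and discounted payback periods, and IRR for a hypothetical cash-flow profile; the discount rate and cash flows are invented for illustration, and IRR is found by simple bisection rather than a library routine.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) initial investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows, rate=0.0):
    """First period in which cumulative (optionally discounted) cash flow >= 0."""
    cumulative = 0.0
    for t, cf in enumerate(cashflows):
        cumulative += cf / (1 + rate) ** t
        if cumulative >= 0:
            return t
    return None  # investment never recovered

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return via bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical megaproject cash flows, millions of USD, annual steps.
cf = [-1200, 150, 250, 350, 450, 550, 650]
r = 0.10
pv_inflows = sum(c / (1 + r) ** t for t, c in enumerate(cf) if t > 0)
print("NPV :", round(npv(r, cf), 1))
print("PI  :", round(pv_inflows / abs(cf[0]), 2))
print("PP  :", payback_period(cf))
print("DPP :", payback_period(cf, rate=r))
print("IRR :", round(irr(cf), 3))
```

For this profile the project would be acceptable to private capital (positive NPV, PI above 1, IRR above the 10% discount rate), which is the kind of screening the private-capital side of the optimization relies on.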
Indicators that reflect achievement of objectives and tasks of the state within the framework of a megaproject depend on the specifics of the project and can include: • Decline in unemployment, job creation.
• Increase in GRP and/or GDP.
• Increase in the level of industrial production in the region. • Improvement of people's lives (for example, by means of project implementation in the housing and utility sector, road infrastructure development, etc.).
• Attraction of foreign investments (for example, within the framework of projects for the establishment of special economic zones and priority development areas).
Since it is impossible to determine many of the abovementioned performance indicators in terms of quantity on the basis of the financial model, it is proposed to use expert estimates.
If we use a ten-point scale for the assessment and decline in unemployment is considered as a factor, the expert will give 1 point if he/she thinks that project implementation will have no effect on this indicator. If the expert thinks that the rate of unemployment will drop to the minimum target level specified in the megaproject, he/she will give 10 points.
Determination of the factor space for the performance assessment can be based on the Delphi method [31], and the assessment of the significance of each factor can be based on the analytic hierarchy process.
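As a sketch of how the expert-derived weights could be obtained with the analytic hierarchy process, the code below converts a hypothetical pairwise-comparison matrix into normalized weights using the geometric-mean (row) method and reports a consistency ratio. The matrix values and factor names are illustrative only and are not taken from the paper.

```python
import numpy as np

# Hypothetical expert pairwise comparisons (Saaty 1-9 scale) for three
# state-level factors: unemployment decline, GRP growth, foreign investment.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Geometric-mean method: weight_i is proportional to (prod_j a_ij)^(1/n).
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

# Consistency check: lambda_max estimated from A @ w, CI = (lambda_max - n)/(n - 1),
# CR = CI / RI, with RI = 0.58 the standard random index for n = 3.
n = A.shape[0]
lambda_max = float(np.mean((A @ weights) / weights))
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```

A consistency ratio below roughly 0.1 is usually taken to mean the expert's judgments are acceptably coherent; otherwise the comparisons would be revisited before the weights are used in the performance model.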
So, we can expand the abovementioned expressions (1) using a ten-point scale and additive economic and mathematical models; a plausible reconstruction of the resulting expressions is sketched after the resource list below. The symbols are defined as follows: E_min - the lowest estimate of the comprehensive indicator of megaproject performance that allows the project to be approved for implementation (values of the comprehensive indicator E vary between 0 and 10); k_1, k_2 - weighted coefficients that reflect the significance of G and I respectively when determining the comprehensive indicator of performance (to be determined by the expert); d_1…d_n - weighted coefficients for the factors of the comprehensive performance indicator G (to be determined by the expert). Generally, the comprehensive indicator of construction megaproject development performance (Dev) can be determined in the same additive manner. The set of resources includes the following: • Human resources.
• Raw materials potential of the land and property complex for the purposes of development (location factor, environmental conditions, site characteristics).
• Public resources (creation of favorable conditions for megaproject implementation, direct and indirect control of investment and entrepreneurial activities, tax concessions and remissions, etc.).
• Other resources (depend on the specifics of megaproject implementation).
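As a minimal sketch of how the additive ten-point model described above could be evaluated, the following Python snippet combines expert scores and weighting coefficients into the comprehensive indicator E and compares it with an acceptance threshold. All factor names, scores, and weights are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical illustration of the additive ten-point performance model described above.
# All factor names, expert scores (1-10) and weights are assumptions for the example.

def weighted_score(scores, weights):
    """Additive convolution of expert scores with their weighting coefficients."""
    return sum(scores[f] * weights[f] for f in scores) / sum(weights.values())

# State-level factors (a1...an) with weighting coefficients (d1...dn), expert estimates.
state_scores = {"unemployment_decline": 7, "grp_growth": 6, "quality_of_life": 8}
state_weights = {"unemployment_decline": 0.4, "grp_growth": 0.3, "quality_of_life": 0.3}

# Private-capital factors (b1...bm) with weighting coefficients (d1'...dm').
private_scores = {"npv": 8, "irr": 7, "payback_period": 5}
private_weights = {"npv": 0.5, "irr": 0.3, "payback_period": 0.2}

G = weighted_score(state_scores, state_weights)      # performance for the state
I = weighted_score(private_scores, private_weights)  # performance for private capital

k1, k2 = 0.6, 0.4        # significance of G and I (expert judgment), k1 + k2 = 1
E = k1 * G + k2 * I      # comprehensive indicator, ranges from 0 to 10
E_min = 6.0              # lowest estimate that allows the project to be approved

print(f"G = {G:.2f}, I = {I:.2f}, E = {E:.2f}, approve = {E >= E_min}")
```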
The need to analyze the resource base arises from the scale of a megaproject: its scale may cause a disproportion between supply and demand for individual types of resources and thus degrade megaproject performance, both through additional costs and an extended implementation period and through quality deterioration of the finished objects. Based on the above, the decision on the feasibility of megaproject implementation, considering the target indicators of project participants and the resource base available for construction, can be presented as an algorithm of sequential actions (Fig. 2). A positive investment decision at the strategic level is the basis for KPI formation for the development company at the level of the construction megaproject (tactical level) and of individual construction projects within the framework of the global target program (operational level).
Fig. 1. Megaproject implementation levels.
a1…an – multiple factors determining performance at the state level; b1…bm – multiple factors determining performance for private capital.
d1'…dm' – weighting coefficients for the factors of the comprehensive indicator I.
Within the framework of the developer's activity planning in the course of construction megaproject implementation, it is important to determine key performance indicators (KPIs) for the development company for the entire megaproject and for individual projects within the framework of the global target program.
C – costs of construction project implementation; T – megaproject implementation period; Q – qualitative characteristics of a megaproject (compliance with the standards, customer's requirements, etc.); n – assessment of the quality and availability of resources for the implementation of a construction megaproject.
Fig. Megaproject implementation planning at three management levels | 2018-12-15T03:32:22.592Z | 2017-05-23T00:00:00.000 | {
"year": 2017,
"sha1": "309e3944875451590a87ee76c3d2ced8de33c4bf",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/20/matecconf_spbw2017_08013.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "309e3944875451590a87ee76c3d2ced8de33c4bf",
"s2fieldsofstudy": [
"Engineering",
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
218860780 | pes2o/s2orc | v3-fos-license | Suspected Aripiprazole-induced neutropenia in a geriatric patient: a case report
Background Aripiprazole, a third-generation antipsychotic medication, has been used to treat a range of psychiatric disorders. According to the U.S. Food and Drug Administration’s prescribing information, the most common adverse reactions in adult patients in clinical trials (≥10%) were nausea, vomiting, constipation, headache, dizziness, akathisia, anxiety, and insomnia. While hematological adverse effects may occur with aripiprazole, there is very limited information in the published literature on such adverse outcomes. Case presentation A 68-year-old Caucasian male with treatment resistant depression was hospitalized for suicidal ideation. The patient developed neutropenia after aripiprazole was introduced as an augmentation agent. The neutropenia was reversible with discontinuation of the medication. Conclusions To our knowledge, we describe the first case report of suspected neutropenia-induced by aripiprazole use in a geriatric patient. While hematological adverse reactions are rare, we recommend adding CBC to the standard adverse systemic reaction monitoring of antipsychotic medications, particularly among the elderly.
Background
Aripiprazole is a third-generation [also known as atypical] antipsychotic that acts as a partial agonist at dopamine receptor subtype 2 (D2), a partial agonist at serotonin 1A (5-HT1A) receptors and an antagonist at serotonin 2A (5-HT2A) receptors. Aripiprazole has been approved by the U.S. Food and Drug Administration (FDA) for treatment of schizophrenia, bipolar mania, Tourette's disorder, irritability associated with autism spectrum disorder, and as an adjunct to antidepressants in the treatment of major depressive disorder [1][2][3]. While neuropsychiatric (e.g., headache, akathisia, insomnia, somnolence), cardiometabolic (e.g., hyperglycemia, dyslipidemia, weight gain) and gastrointestinal (e.g., nausea, vomiting, salivary hypersecretion, etc.) adverse reactions are more common, hematological abnormalities associated with aripiprazole use are rare. Existing literature is limited, with documented neutropenia associated with aripiprazole use in children in three case reports [4][5][6]; and some case reports in adults [6][7][8][9][10][11][12]. The FDA Adverse Event Reporting System (FAERS) Public Dashboard lists 10 suspected cases of aripiprazole-associated neutropenia in geriatric patients; but to our knowledge, there is no published literature describing aripiprazole-induced neutropenia in a geriatric patient. Informed consent was obtained from the patient both for participating in this study and for publication of this case report. The Kern Medical Institutional Review Board approved the study protocol.
Case presentation
A 68-year-old Caucasian male was brought to the emergency department by local police due to suicidal ideation with plan and intent. The patient has a diagnosis of major depressive disorder with multiple involuntary psychiatric hospitalizations. The patient's family history is unknown. He had been prescribed citalopram (20 mg daily) and bupropion (extended release 300 mg daily) upon discharge from a psychiatric hospitalization 1 year prior, but he had not been taking these medications prior to this admission due to self-reported lack of efficacy. The patient has multiple medical comorbidities including uncontrolled type 2 diabetes mellitus, hypertension, benign prostatic hyperplasia and chronic obstructive pulmonary disease. His medical conditions were being treated with his home medications, which include insulin glargine, insulin lispro, metformin (1000 mg daily), and lisinopril (20 mg daily).
On admission, his labs were unremarkable except for a mild normocytic anemia (hemoglobin: 12.6 × 10 3 , mean corpuscular volume: 88.6 μm 3 ) and thrombocytopenia (108 × 10 3 ) both of which he has had chronically for at least 2 years based on our records. His white blood cell count (WBC) was 4.5 × 10 3 , and his absolute neutrophil count (ANC) was 2.6 × 10 3 on admission. On hospital day 1, he was started on his home medications and citalopram (20 mg daily) due to a positive response from a previous hospitalization. Bupropion was not used this hospitalization. Additionally, a Foley catheter was inserted due to urinary retention. The patient continued the same medication regimen and continued to have severe suicidal ideation for hospitalization days two through eight. On hospital day nine, the patient continued to endorse severe depressive symptoms and suicidal ideations and plan, necessitating more aggressive medication management to address these symptoms, which the patient agreed to. Levothyroxine 125 micrograms daily was added to his regimen. On day ten, further augmentation was started with aripiprazole 5 mg daily.
The patient responded well to the augmented regimen based on daily assessments showing improved mood and decreased suicidality. However, on hospital day 13, hypotension prompted a lab work up due to concern for sepsis as the patient had an indwelling Foley catheter since admission. These labs (day 3 of aripiprazole use) were unremarkable except for WBC count of 3.3 × 10 3 and ANC of 1.3 × 10 3 (Fig. 1). Due to concerns of neutropenia, complete blood count (CBC) with differential was repeated the next day which showed no appreciable change. At this point, an internal medicine consult was sought and citalopram, aripiprazole, levothyroxine and lisinopril were held per consult recommendation (5 days of aripiprazole use total). Laboratory work up for other causes of blood dyscrasias included negative screenings for chronic infections [human immunodeficiency virus (HIV) and hepatitis], normal serum levels of albumin, vitamin B12 and folate. However, a positive antinuclear antibody (ANA) with nucleolar pattern and titer of 1:40 was found.
His WBC and ANC continued to demonstrate a downward trend for the following 2 days, to as low as 3.0 × 10 3 for WBC and most concerning of all an ANC value of 0.7 × 10 3 on day 17. Despite the abnormal CBC's, the patient was asymptomatic for signs of infection throughout his hospitalization. On day 19, citalopram and lisinopril were restarted due to an uptrend in ANC. Levothyroxine was restarted once his ANC returned to baseline (2.8 × 10 3 ) on day 21. By day 21, the patient was discharged as he was no longer endorsing suicidal thoughts, his neutropenia had resolved, and he was in remission from depression. No additional pro re nata (PRN) medications were administered this hospitalization.
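To make the ANC trend easier to follow, the short Python sketch below applies commonly used neutropenia grading thresholds (mild 1.0-1.5, moderate 0.5-1.0, severe <0.5, in 10^3 cells/µL) to the counts reported above; the thresholds are standard clinical conventions rather than values taken from this case report.

```python
# Grade neutropenia severity from absolute neutrophil counts (ANC, x10^3 cells/uL).
# Thresholds follow common clinical convention (an assumption, not from the case report).

def neutropenia_grade(anc):
    if anc >= 1.5:
        return "none"
    if anc >= 1.0:
        return "mild"
    if anc >= 0.5:
        return "moderate"
    return "severe"

# ANC values reported in the case (hospital day -> ANC)
anc_by_day = {1: 2.6, 13: 1.3, 17: 0.7, 21: 2.8}

for day, anc in anc_by_day.items():
    print(f"day {day:2d}: ANC {anc:.1f} x10^3/uL -> {neutropenia_grade(anc)}")
```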
Discussion
This is the first descriptive case report of suspected aripiprazole-induced neutropenia in a geriatric patient. While a few cases of aripiprazole-induced neutropenia have been documented in the literature for younger age groups [4,5,[7][8][9][10], the true incidence of this adverse reaction is not known as regular monitoring of CBC with aripiprazole use is not the current standard of care [13][14][15]. There is potential for concern, as neutropenia can be a life-threatening condition, especially in specific populations such as an elderly patient who may already be suffering from comorbid conditions and/or immunosuppression. Lab work to monitor for metabolic changes (fasting blood glucose and lipids) when starting atypical antipsychotics is the standard of care [15]. We recommend adding CBC to routine monitoring, as it is an inexpensive yet widely accessible test.
The diagnostic approach to neutropenia in an elderly patient is most efficiently investigated with a bone marrow biopsy, however, this may not always be clinically indicated if an etiology is able to be explained by patient history and/or basic laboratory panels [16]. A history of hereditary blood dyscrasias, and chronic infections should be inquired. Basic laboratory studies such as vitamin B12, folate, albumin, ANA, HIV screening, hepatitis panel should be obtained upon initial evaluation. If neutropenia is still unable to be explained, the first specific action should be discontinuation of all drugs not necessary for maintenance of vital functions [16]. If still unable to be explained, a bone marrow biopsy should be obtained [16]. In this case, the patient had no history of blood dyscrasias, chronic infections and basic laboratory panels were unrevealing. The patient did have a positive ANA with nucleolar pattern; therefore, it is possible that there was an autoimmune component to the patient's neutropenia. However, the patient had no clinical correlation to the symptoms of the autoimmune diseases typically associated with a nuclear pattern (systemic lupus erythematosus, rheumatoid arthritis, Sjögren's syndrome, scleroderma and systemic sclerosis, Raynaud phenomenon) [17][18][19]. Further, the interpretation of ANA patterns and titers is largely controversial when there is no clinical correlation, as healthy individuals (including the elderly) have been shown to have a positive ANA with various patterns and at various titers [17][18][19].
Instead, there are stronger reasons to associate the patient's neutropenia with aripiprazole exposure as a temporal relationship was demonstrated. Application of the Naranjo adverse drug reaction probability scale results in score of + 4, categorizing the likelihood of causality as possible [20]. A consideration in determining causality was lacking from this study, re-challenging the suspected agent. However, in the few similar cases where patients were re-challenged with aripiprazole, they invariably developed neutropenia again [4][5][6][7][8][9][10][11][12]21]. Additionally, given the risks associated with neutropenia (particularly in the context of patients who are hospitalized having higher risk of nosocomial infections), risks outweighed the benefits of re-challenging aripiprazole in this patient. Furthermore, the patient was clinically improving from his depression without aripiprazole at the time that rechallenge would have been warranted.
Although this patient was lost to follow up so no posthospitalization lab work was able to be obtained, review of the literature of similar studies showed that both children and adults had their blood levels (WBC and ANC) uptrend within 3-4 days after discontinuing aripiprazole with baseline levels being achieved by 1 week [4][5][6][7][8][9][10][11][12]21]. This trend is consistent with the patient described in this study as his blood levels (WBC and ANC) began to uptrend after the fourth day of discontinuing aripiprazole, with baseline levels achieved after 7 days. In the reported cases that included follow up laboratory studies after hospital discharge; at 2 weeks [5,7], 4 weeks [5,8], and 6 months [5]; WBC and ANC counts remained within normal limits with each post-hospitalization CBC. Given these previous findings, in patients who do not restart aripiprazole after an adverse neutropenic reaction occurs, follow up CBC's does not appear to be necessary or revealing. Additionally, each case reviewed demonstrated a temporal relationship of neutropenia with exposure to aripiprazole [4][5][6][7][8][9][10][11][12].
It is highly unlikely that the patient developed the neutropenia from lisinopril or citalopram, as there is also no established association with neutropenia in either of these drugs [22,23]. Although not required, a temporal relationship between an adverse reaction and exposure to the suspected agent is highly suspicious, and in this case a temporal relationship only exists with aripiprazole. Additionally, the patient did not have a prior adverse reaction to lisinopril or citalopram during previous hospitalizations and restarting the medications did not result in a return of neutropenia. Levothyroxine's adverse effects are typically limited to symptoms of hyperthyroidism, and it is not known to be associated with neutropenia [24]. It is highly unlikely metformin or insulin induced the patient's neutropenia as he has taken these medications for decades due to his chronic diabetes mellitus. Additionally, while the patient's anemia and thrombocytopenia mildly fluctuated throughout this hospitalization, there was no significant change or trend away from baseline, suggesting that this was an isolated neutropenic dyscrasia.
The exact mechanism for aripiprazole-associated neutropenia is unknown, and there are not enough reported cases to establish which populations may be at risk. However, clozapine induced agranulocytosis has been extensively studied and may partially explain the observed association in this case. Aripiprazole is like clozapine and olanzapine in that their metabolism involves the production of nitrenium ions [25]. In clozapine studies, these nitrenium ions were shown to covalently bind to neutrophil proteins and act as a hapten, initiating an immunemediated destruction of affected neutrophils [26]. Additionally, the HLA-B38 phenotype appeared to be more often affected suggesting genetic elements are involved as well. Olanzapine has a similar molecular structure to clozapine (Fig. 2) and has a similar mechanism of action as they are both tricyclic atypical antipsychotics. However, neutropenia is less prevalent with olanzapine use compared to clozapine. This is likely due to olanzapine's longer half-life resulting in less activation of the cytochrome P450 system and therefore less nitrenium ion formation [27]. Aripiprazole, although it is an atypical antipsychotic, it is not of the tricyclic class. Aripiprazole belongs to the phenylpiperazine class, and it has a markedly longer halflife than both clozapine and olanzapine [28]. This is likely a contributing reason for aripiprazole's significantly improved safety regarding hematologic adverse reactions, when compared to other atypical antipsychotics [27].
In conclusion, we report occurrence of neutropenia in an elderly male who was prescribed aripiprazole to augment an antidepressant medication that was reversible with discontinuation of the medication. While hematological adverse reactions are rare, we recommend adding CBC to the standard adverse systemic reaction monitoring of antipsychotic medications, particularly among the elderly. | 2020-05-24T13:59:11.041Z | 2020-05-24T00:00:00.000 | {
"year": 2020,
"sha1": "dd26a09db7add86f7a7952b9b81ab3bd3e545bef",
"oa_license": "CCBY",
"oa_url": "https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-020-01514-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd26a09db7add86f7a7952b9b81ab3bd3e545bef",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6002755 | pes2o/s2orc | v3-fos-license | Surgical sympathectomy for Buerger’s disease
Buerger's disease is characterized by recurring progressive inflammation and occlusions in small and medium arteries and veins of the limbs. Its cause is unknown, but it is most common in young men with a history of tobacco use. It is responsible for ischemic ulcers and extreme pain in the hands and feet. In many cases, notably in patients with the most severe presentations, there is no possibility of improving the condition with surgery (limb revascularisation), and therefore alternative therapies (e.g. sympathectomy) are used. This review assessed the effectiveness of surgical sympathectomy compared with any other therapy in patients with Buerger's disease. Only one randomised controlled study (162 participants), which compared sympathectomy with a prostacyclin analogue (iloprost), was incorporated into the review. This comparison showed that iloprost is more effective than sympathectomy for complete healing of ulcers at four weeks (risk ratio 0.65; 95% confidence interval 0.45 to 0.95; P = 0.02; very low quality evidence) and at twenty-four weeks (risk ratio 0.62; 95% confidence interval 0.48 to 0.82; P < 0.01; very low quality evidence) after the start of treatment, and for relief of rest pain at four weeks (risk ratio 1.90; 95% confidence interval 1.17 to 3.10; P = 0.01; very low quality evidence), but not at twenty-four weeks (risk ratio 1.68; 95% confidence interval 1.00 to 2.84; P = 0.10; very low quality evidence) after the start of treatment. We concluded, with very low quality of evidence, that intravenous iloprost is more effective than lumbar sympathectomy in the healing of ischemic ulcers and relief of pain at rest in patients with Buerger's disease. Therefore, to date, a preference for intravenous iloprost over lumbar sympathectomy (or vice versa) is not supported by robust evidence for routine use.
Introduction
Buerger's disease (thromboangiitis obliterans) is a non-atherosclerotic, occlusive, thrombotic, segmental inflammatory pathology that most commonly affects the small- and medium-sized arteries, veins and nerves in the upper and lower extremities. 1 Von Winiwarter 2 first described a patient with the disease in 1879, but it was Leo Buerger, 3 in 1908, who published a detailed description of the pathological findings on 11 amputated limbs and named the disease.
The aetiology is unknown, but involves tobacco exposure, hereditary susceptibility, and infectious, immune and coagulation responses. 4 Features distinguishing Buerger's disease from atherosclerosis include the distribution of pathology (with involvement of both the upper and lower extremities), associated superficial venous thrombosis, a paucity of atherosclerotic risk factors and normal proximal large arteries. 5
Why is it important to do this review?
Buerger's disease is a debilitating condition which can affect productive, young people. In patients with critical limb ischaemia and poor chances of surgical revascularisation, as seen in many patients diagnosed with Buerger's disease, alternative treatments, as surgical sympathectomy, are often performed. Thus, a systematic review about effectiveness of surgical sympathectomy in patients with Buerger's disease is opportune and extremely relevant.
Objective
To assess the effectiveness of surgical sympathectomy compared with any other therapy in patients with Buerger's disease.
Materials and methods
Study design. Systematic review of randomised controlled studies. Details of the protocol for this systematic review were registered on PROSPERO and can be accessed at http://www.crd.york.ac.uk/PROSPERO_REBRANDING/display_record.asp?ID=CRD42016037911. Selection criteria. We included studies designed as randomised controlled trials involving patients clinically diagnosed with Buerger's disease (e.g. Shionoya's criteria 6 ) and submitted to surgical sympathectomy (without previous revascularisation of the affected limb). Two review authors independently assessed all studies identified by the search strategy for inclusion. Disagreements were resolved by discussion.
Data extraction. For all eligible studies, two review authors (DGC and DHM) extracted data using the Cochrane Vascular's data extraction table. We entered the data into Review Manager 5.3. Primary outcomes collected were ulcer healing, pain, rate of amputation and death. Secondary outcomes collected were surgical sympathectomy complications (e.g. bleeding, infection, etc.) and side effects.
Risk of bias (quality) assessment. Two review authors independently assessed the included studies for risk of bias using Cochrane's 'Risk of bias' tool as described in the Cochrane Handbook for Systematic Reviews of Interventions (available in http:// handbook.cochrane.org/). The information about the risk of bias of the included studies was presented in the form of a table.
Statistical analysis. The following measures of treatment effect were used: (a) dichotomous (categorical) data were presented as summary risk ratios with 95% confidence intervals; (b) continuous data were presented as the mean difference with 95% confidence intervals where there was consistency in the outcome measure, or as the standardised mean difference to combine trials that measured the same outcome but used different methods; and (c) time-to-event data were presented as hazard ratios with 95% confidence intervals. We used Review Manager 5.3 to perform the statistical analysis (Mantel-Haenszel method).
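As a small illustration of the dichotomous-outcome measure described above, the following Python sketch computes a summary risk ratio and its 95% confidence interval from a 2×2 table using the standard log-risk-ratio approximation; the event counts are made up for the example and are not taken from the included trial.

```python
import math

# Risk ratio with a 95% CI from a 2x2 table (illustrative counts, not trial data).
# a/b: events/non-events in the intervention group; c/d: events/non-events in the control group.
def risk_ratio_ci(a, b, c, d, z=1.96):
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

rr, lo, hi = risk_ratio_ci(a=30, b=50, c=45, d=35)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```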
Heterogeneity. Heterogeneity among the eligible studies was quantified using the Chi-square test and the I² statistic, specifically using the formula I² = ((Q − df)/Q) × 100%, where Q is the Chi-square statistic and df the degrees of freedom. The I² values were interpreted as follows: 0-25% = low heterogeneity, 25-75% = moderate heterogeneity, more than 75% = substantial heterogeneity. 6 Where substantial heterogeneity was detected according to these criteria, we had planned to perform a further investigation based on the prespecified subgroup analyses.
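A minimal Python sketch of the I² calculation given above, using an assumed Q value and degrees of freedom purely for illustration:

```python
# I^2 heterogeneity statistic: proportion of variability due to heterogeneity
# rather than chance. Q and df below are assumed example values.
def i_squared(q, df):
    return max(0.0, (q - df) / q) * 100.0  # truncated at 0%, expressed as a percentage

q, df = 12.5, 4  # example: Chi-square statistic and degrees of freedom
i2 = i_squared(q, df)
band = "low" if i2 <= 25 else "moderate" if i2 <= 75 else "substantial"
print(f"I^2 = {i2:.1f}% ({band} heterogeneity)")
```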
Analysis of subgroups. We planned to perform subgroup analyses according to the following features: (a) tobacco exposure (cigarette, cannabis or any other form of smoking either measured in a laboratory or declared) after the intervention; (b) severity of the ischaemia, according to the Fontaine or Rutherford classification and (c) ischaemic territory (upper or lower limb).
Missing data. We contacted contact authors of included trials about methodological queries but none of them answered the solicitation. Where possible, we had planned to analyse all outcome measures on an 'intention-to-treat' basis by including data from all participants assessed.
Summary of findings. We presented the main findings of the review results for the quality of evidence, the magnitude of effect of the interventions examined and the sum of available data on the primary outcomes in 'Summary of findings' tables, according to Higgins and Green 7 and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working group. 8 Since we assessed different intervention comparisons, a 'Summary of findings' table was developed for each comparison included in the 'Results' section. The GRADEprofiler software was used to assist in the preparation of the 'Summary of findings' tables.
Results of the search
A flow diagram of the search results is shown in Figure 1.
Included studies
Only one randomised controlled study was included in this review. Bozkurt et al. 9 reported on a study of 162 participants with Buerger's disease (Fontaine stage III and IV ischaemia), who underwent either lumbar sympathectomy or treatment with a prostacyclin analogue (iloprost) for four weeks. They compared the complete healing rate, analgesic requirement, size of the ulcer, 50% reduction of the ulcer size and the SVS/ISCS (Society for Vascular Surgery and the North American Chapter of the International Society for Cardiovascular Surgery) grading scale, with a follow-up of 24 weeks. More details about the characteristics of the included study are given in Table 1.
Secondary outcomes
Side effects: Bozkurt et al. 9 reported that more participants in the iloprost group experienced side effects, including headache (45.3%), flushing (43.06%), nausea (28.38%) and abdominal discomfort (12.12%). According to the authors, in one participant these symptoms were severe enough to stop the treatment. Minor wound infection occurred in five patients following the surgical procedure.
Other outcomes
Rate of amputation, amputation-free survival, walking distance or pain-free walking, and ankle brachial index were not assessed by Bozkurt et al. 9
Discussion
The purpose of this systematic review was to collect the largest possible number of studies on the use of sympathectomy for the treatment of patients with Buerger's disease, in order to establish the best available evidence to date. Although it is one of the oldest treatments for patients with Buerger's disease and critical limb ischaemia, lumbar sympathectomy was investigated by only one study capable of generating good quality evidence, that is, a randomised controlled clinical trial, which compared lumbar surgical sympathectomy with intravenous iloprost (a prostacyclin analogue). Accordingly, the generated evidence is limited to one study and one comparison. The outcomes investigated were those relevant to the treatment of patients with critical limb ischaemia, namely improvement in pain at rest and ulcer healing. However, what draws attention is the lack of information on amputation rate and amputation-free survival, data that are relevant for patients with critical ischaemia.
Table footnotes (GRADE downgrading reasons): losses were not justified, downgraded by one level; adoption of 'as-treated' (per protocol) analyses, downgraded by one level; selective reporting (does not describe amputation rate), downgraded by one level; one single study (doubt about reproducibility of data), downgraded by one level.
Regarding the risk of bias, we observed that the only study included in the review was considered to be at high risk in most of the categories. Except for the absence of blinding of the patients involved (prevented by the very nature of a surgical versus medical treatment comparison), the other shortcomings, such as the lack of allocation concealment, the lack of blinding of outcome assessors, the use of per-protocol rather than intention-to-treat analysis (which weakens the effect of randomisation) and the absence of reasons for the remarkable number of losses (about 20% of the sympathectomy group), could have been avoided by the study authors. These methodological failures unquestionably jeopardise both the magnitude and the direction of the observed effect.
The main evidence found in this review refers to the greater effectiveness of iloprost, relative to lumbar sympathectomy, for the healing of ischaemic ulcers and pain at rest in patients with Buerger's disease. The greater effectiveness of iloprost in patients with Buerger's disease and critical limb ischaemia, compared with treatment with aspirin, was also verified by a recent Cochrane systematic review with moderate quality evidence. However, because of the high risk of bias of the only study selected in this review and following the GRADE parameters, 8 the observed evidence was rated as very low quality. We summarised the quality of the evidence in Table 3.
As for implications for future studies, we emphasise the need for further good quality studies incorporating the same comparison (sympathectomy versus intravenous iloprost), other types of drugs (e.g. cilostazol, pentoxifylline, clopidogrel, etc.), and other therapeutic approaches (use of stem cells, omental transplantation, foot venous arch arterialisation, acupuncture, etc.) compared against surgical sympathectomy.
As for implications for practice, in view of the fragile evidence found here regarding the treatment of Buerger's disease in patients with critical limb ischaemia, the preference for the use of intravenous iloprost over lumbar sympathectomy in this patient profile is not fully supported by evidence.
Conclusions
Very low quality evidence suggests that intravenous iloprost is more effective than lumbar sympathectomy for the healing of ischaemic ulcers and relief of pain at rest in patients with Buerger's disease. Thus, at present, a preference for intravenous iloprost over lumbar sympathectomy (or vice versa) is not supported by robust evidence for routine use.
Declarations
Competing Interests: None declared. | 2018-04-03T01:23:20.891Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "efa5881004b4d6c396f4012a9d21ef3f52a90804",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2054270417717666",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "efa5881004b4d6c396f4012a9d21ef3f52a90804",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59307275 | pes2o/s2orc | v3-fos-license | A visual illusion that influences perception and action through the dorsal pathway
There are two main anatomically and physiologically defined visual pathways connecting the primary visual cortex with higher visual areas: the ventral and the dorsal pathway. The influential two-visual-systems hypothesis postulates that visual attributes are analyzed differently for different functions: in the dorsal pathway visual information is analyzed to guide actions, whereas in the ventral pathway visual information is analyzed for perceptual judgments. We here show that a person who cannot identify objects due to an extensive bilateral ventral brain lesion is able to judge the velocity at which an object moves. Moreover, both his velocity judgements and his interceptive actions are as susceptible to a motion illusion as those of people without brain lesions. These findings speak in favor of the idea that dorsal structures process information about attributes such as velocity, irrespective of whether such information is used for perceptual judgments or to guide actions.
Two main visual pathways have been identified in the human brain: one going from the primary visual cortex to the inferior temporal cortex via V4 (ventral pathway) and one from the primary visual cortex to the posterior parietal cortex via the middle temporal visual area (MT, dorsal pathway) 1 . The separation was originally ascribed to a distinction between the processing of attributes related to object identity and object location, respectively 2 . We will refer to this distinction as the attributes hypothesis. The very influential two-visual-systems hypothesis 3 later elaborated on this by linking the ventral and the dorsal pathways to independent processing of information for different purposes: for action and perception.
According to the two-visual-systems hypothesis, all visual attributes are processed twice: in the ventral pathway for recognizing objects (perception) and in the dorsal pathway for guiding one's hand towards them (action) 4 . This claim is based on the notion that in order to perform an action such as grasping a cup, people primarily need information about the cup's precise position with respect to themselves. This information changes whenever they move. In contrast, in order to identify the cup as being their own cup, they need a detailed analysis of its characteristics such as its color, size, and shape. They will mainly rely on information that is independent of their position with respect to the cup, but might take information from the environment into account such as that the cup is lying next to the book they were reading. Such differences between guiding actions and recognizing objects calls for different analyses of the available information, which according to the two-visual-systems hypothesis occurs separately for perception and action tasks in the two-visual pathways. According to the original attributes hypothesis, each attribute is processed in only one of the pathways, irrespective of whether it is being used for perception or action.
Despite the fundamental difference between the attributes and the two-visual-systems hypothesis in terms of whether or not the same attribute is processed twice, distinguishing between the two hypotheses is difficult, precisely because recognizing objects often requires information about different attributes than does manipulating them. The initial evidence favouring the two-visualsystems hypothesis was based on two types of clinical cases: individuals with visual form agnosia such as DF, who could not report the dimensions or orientations of objects but could successfully interact with them 5,6 , and individuals with optic ataxia, who could judge objects' shapes but could not successfully interact with them 6,7 . This evidence has been questioned on the grounds of psychophysical 8-10 , neuropsychological 11,12 , and neurophysiological 13,14 data. More importantly, clinical evidence does not show that information about the same attribute is processed separately for perception and for action, it only shows that sometimes certain attributes cannot be used for one or the other 15 .
The second line of evidence for visual information being processed differently for perception than for action is based on studies on the effects of visual illusions on actions. According to the two-visual-systems hypothesis, perceptual illusions originate in the ventral pathway, where all information that could help make perceptual judgments is taken into account. Thus, illusions should not affect actions. Many studies have compared the influence of size illusions on the maximal grip opening when reaching to grasp objects with their influence on perceptual judgments of the objects' sizes 8,16,17 . Other actions that have been compared with perceptual judgments of size or distance include saccades [18][19][20] , manual tracking 21 , lifting 17,22 , pointing 23,24 , bimanual grasping 25,26 , and stepping 27,28 . Some authors interpret the results as support for the two-visualsystems hypothesis, whereas others do not 10,29-31 . One reason why the same findings can be interpreted in different ways is that illusions often only influence a specific attribute, and there is no consensus about the attributes that are used to guide some actions 15 .
Until now, studies have mainly examined illusions of attributes such as size that, according to what we call the attributes hypothesis, are processed in the ventral pathway. A prototypical example of an attribute that is processed in the dorsal pathway is motion 32 . One way to create motion illusions is by embedding motion within a target. Embedded motion has been shown to affect a static target's apparent position [33][34][35][36] , and a moving target's apparent motion direction 37,38 and speed 39,40 . We previously reported that in contrast to what the two-visual-systems hypothesis would predict, embedded motion led to errors in interception that were equivalent to the illusory changes in the perceived target motion 40 . This finding supports the idea that the dorsal pathway is essential for all aspects of motion processing, regardless of whether one only needs to perceive the object or needs to interact with it.
Finding that an individual with damage to the ventral pathway (such as DF) can judge a target's velocity would provide evidence for the idea that the dorsal pathway is involved in all aspects of motion processing. Finding that such an individual's perceptual judgments and actions are both susceptible to a motion illusion would confirm that the same pathway is responsible for both kinds of tasks. To explore this, such an individual with a ventral lesion (MS) and an agematched control (BK) participated in a slightly modified version of an experiment that we recently conducted with ten young participants 40 . We show that MS's goal-directed movements are affected by a visual illusion that changes the target's apparent velocity. Furthermore, his judgments about how fast the target moves are as susceptible to the illusion as are those of control participants. This shows that the effect of the illusion on both tasks must originate in the dorsal pathway. Our results therefore provide evidence against separate processing of information for perception and action within two segregated pathways 3 . They speak in favor of the dorsal pathway being specialised for processing attributes that are likely to be used for the on-line control of movement 2 .
Results
Participants and task. MS is a 65-year-old male who suffered bilateral damage to temporo-occipital regions in 1970 41 . As a result, he has a severe agnosia for objects 42 but can perceive motion 43 and has corrected to normal Snellen acuity in both eyes. The age-matched control (BK) is a 63-year-old female with normal vision and without known history of brain damage. Due to his object agnosia and co-morbid impairments, explaining the tasks to MS was more difficult than usual. Moreover, he often needed to be reminded to keep his eyes on the fixation point.
Our experiment consisted of three tasks (performed in separate blocks): two in which we evaluated the ability to judge the target's velocity and position relative to the fixation point 100 ms before the target reached this point (Fig. 1a, see Methods for further details), and one in which we evaluated the ability to tap on the target when it reached the fixation point (Fig. 1b). The target was a moving Gabor composed of a vertical sine wave grating of which the contrast was modulated by a two-dimensional Gaussian. The Gabor patches moved horizontally towards the fixation point at a constant velocity of either 40 or 50 cm/s (black arrow in Fig. 1b). The grating also moved horizontally within the Gaussian that defined the patches. It moved at 10 cm/s, either in the same direction as the Gaussian (blue arrow in Fig. 1b) or in the opposite direction (red arrow in Fig. 1b).
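A minimal sketch of how such a stimulus can be generated frame by frame: the luminance profile is a sine grating multiplied by a Gaussian envelope, with the envelope position and the carrier phase updated independently so that the grating drifts relative to the patch. The values σ = 2 cm, 0.29 cycles/cm and 120 Hz come from the Methods; everything else in the snippet is an illustrative assumption.

```python
import numpy as np

# One horizontal luminance slice of a drifting Gabor with embedded motion.
# sigma and spatial frequency follow the Methods; other values are assumptions.
SIGMA = 2.0          # cm, Gaussian envelope
SPATIAL_FREQ = 0.29  # cycles/cm
FRAME_RATE = 120.0   # Hz

def gabor_profile(x, patch_center, carrier_phase, contrast=1.0):
    envelope = np.exp(-((x - patch_center) ** 2) / (2 * SIGMA ** 2))
    carrier = np.sin(2 * np.pi * SPATIAL_FREQ * x + carrier_phase)
    return contrast * envelope * carrier

x = np.linspace(-30, 30, 1201)            # cm across the screen
patch_center, carrier_phase = -28.0, 0.0
patch_speed, embedded_speed = 40.0, 10.0  # cm/s; embedded motion may also be -10

for frame in range(int(0.7 * FRAME_RATE)):  # 700 ms presentation at 40 cm/s
    luminance = gabor_profile(x, patch_center, carrier_phase)  # drawn each frame
    patch_center += patch_speed / FRAME_RATE
    # Carrier drifts at (patch_speed + embedded_speed) in screen coordinates,
    # i.e. at embedded_speed relative to the moving envelope.
    carrier_phase -= 2 * np.pi * SPATIAL_FREQ * (patch_speed + embedded_speed) / FRAME_RATE
```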
Perceptual judgments about moving targets. Although MS was unable to report any characteristics of the target, judging in which of two intervals the target moved faster was remarkably effortless. The direction of the embedded motion had a clear effect on MS's perceived velocity (separation between blue and red curves in Fig. 2a). When embedded motion was in the same direction as that of the target, the target was perceived to move faster than it really was (blue curve). When embedded motion was in the opposite direction than that of the patch, the target was perceived to move slower than it really was (red curve). The mean influence of the 20 cm/s difference in embedded motion was 14.6 cm/s. MS's reports in judging the velocity were twice as variable as BK's, as is visible in the difference in steepness of the thick and dashed curves in Fig. 2a (standard deviation in velocity of 24 and 12 cm/s, respectively). However, MS's reports were not much more variable than those of some of the young controls (range 11-20 cm/s).
The position task was almost impossible for MS to complete. He reported finding it very difficult to identify which of two subsequent intervals contained the target that disappeared closer to the fixation point, so testing was stopped after 314 of the 560 planned trials. His difficulties are reflected in the very shallow slopes of his psychometric functions in Fig. 2b. BK could complete such a task without problems, as the dashed psychometric curves show. MS's average standard deviation was 5.9 cm, whereas BK's was 0.6 cm, which is even better than the best young control (range 1.0-1.6).
Interception of moving targets. Despite his difficulties in reporting patches' relative positions, MS could perform the interception task. Embedded motion influenced his tapping errors (vertical position of the squares in Fig. 2c) in much the same manner as it did for the controls. We compared the errors that participants made in trying to hit the targets with what one would expect based on an extrapolation of 100 ms using the patches' judged velocities (see Methods). The errors in perception led to equivalent errors in interception (symbols close to unity line). MS's tapping performance was mainly more variable than that of BK and the young controls (standard deviation of 4.5 cm, compared with 2 cm for BK and a range from 1.1-2.5 cm for the young controls).
Discussion
Two aspects of our results are in clear conflict with predictions derived from the two-visual-systems hypothesis. Firstly, the fact that MS could report object motion so well despite a severely damaged ventral pathway shows that an intact ventral pathway is not needed to process motion information for perception. Secondly, we found that our motion illusion affected perception and action to the same extent in MS as in controls. Since the dorsal pathway includes MT 2 , which is known to process visual information about motion, our results support the idea that the dorsal pathway processes the information about motion for both perception and action. One might argue that the illusion originates at an early stage of visual processing, before the two pathways separate 44 . However, even if the illusion were to originate before the separation, its influence on perception cannot be mediated by the ventral pathway in MS because his ventral pathway is severely damaged. Finding an influence on both perception and action is also consistent with the illusion being the result of an interaction between local and global motion signals in areas further down the dorsal pathway than MT 45 .
We are not the first to suggest that the dorsal stream is involved in perception tasks. Interactions between the ventral and the dorsal pathways have been acknowledged by many authors 9,12,46-50 , including Goodale and Milner 4 , who stated that the ultimate role of the ventral stream is also to serve action rather than perception per se. Rizzolatti and colleagues proposed a subdivision of the dorsal stream into a dorso-dorsal and a ventro-dorsal stream, the latter including the inferior parietal lobule. According to their description, the ventro-dorsal stream is involved both in perception and in action, while the dorso-dorsal stream is specifically involved in motor tasks. The role of MT in the ventro-dorsal stream that follows from this subdivision is in line with our results on motion perception and tapping errors. MS's tapping performance is consistent with his perceptual performance. MS was about twice as variable as controls in both his speed judgments and his tapping performance. Thus, MS's perceptual speed judgments are not more strongly affected by the damage to his ventral pathway than are his actions, contrary to what one would expect according to the two-visual-systems hypothesis. Moreover, his tapping performance is also much more variable than that of controls, which is inconsistent with the ventral pathway not being involved in action. The relatively steep slopes of the lines in Fig. 2c for both MS and BK may be the result of us having underestimated the predicted effect. The predicted magnitude of the illusion's effect on interception depends on the time for which the motion has to be extrapolated (the visuomotor delay). We are likely to have underestimated the visuomotor delay for MS and BK because we did not consider the fact that the visuomotor delay is known to increase slightly with age 52 . However, this cannot explain the whole pattern, as for MS the predicted error for embedded motion in the opposite direction than the patch's motion is mainly small because of his small velocity judgment errors in this condition.
Judging the positions relative to the fixation point was particularly difficult for MS. This was not the case for BK (who even outperformed young controls), which allows us to rule out the possibility of age being responsible for MS's difficulties in this task. Although MS's judgments are not 'random', they are ten times less precise than those of the controls. He may have been unable to judge the position of the target relative to the fixation point because relative positions might be processed in the ventral pathway. He might thus have been forced to rely exclusively on judgments of the endpoint positions relative to himself, which is presumably processed in the dorsal pathway. Although the illusion that we used is known to influence judgments of both velocity and position, the influence on the perceived velocity is much larger, so we could compare the interception data with the velocity judgments, ignoring any influence on the perceived position. This was also true for MS, whose judgments were more variable but who did not seem to be influenced to a different extent by the illusion in any of the tasks.
Lisi and Cavanagh 37,38 reported that embedded motion in the direction orthogonal to that of the motion of the Gabor patch as a whole, which makes the patch appear to move in a different direction than it really is moving, influences reaching for the remembered position of the patch with the hand 38 but not saccades to the patch 37 . In their studies the influence on action always appeared to be much smaller than that on perception, but we attribute this to the way they compare the tasks rather than to a true difference in performance 40 . To make a correct comparison, one should consider that predictions are only needed for the last part of the target's motion, for which on-line control cannot correct. In our quantitative predictions for the movement errors, we therefore anticipate that the errors in interception will correspond with the part of the influence of any perceptual misjudgement due to the illusion during the last 100 ms before the end of the movement.
The present study shows evidence that clearly contradicts earlier conclusions based on DF 3,4 . Following the two-visual-systems hypothesis one would have expected to find a dissociation between perception and action in MS's responses to the moving target. However, his errors in interception are completely consistent with his errors in perception. These findings support the alternative interpretation provided earlier 53 by showing that perception and action are based on the same dorsal processing of motion information. It is the visual attribute that determines whether information is processed in the dorsal or ventral pathway, rather than whether such information is used for perception or for the on-line control of action.
Methods
Participants and experimental design. MS is a 65-year-old, left-handed male with bilateral ventromedial damage to his temporo-occipital regions (Fig. 3) due to an idiopathic herpes encephalitis in 1970 41 . As a result, he experiences achromatopsia and left homonymous hemianopia, but his Snellen acuity is normal for both eyes. His other prominent visual disorder is a severe agnosia for objects and faces 42 , but he can read and perceive motion 43 . We compared his performance with that of BK, a 63-year-old, right-handed female with normal vision and without known history of brain damage, as well as with the performance of young participants in a previous study. Those participants, whose data are shown here as additional controls, were one author and nine naïve young subjects (age range 25-34). They were all right handed, with normal or corrected-to-normal vision and without evident motor abnormalities 40 . All participants gave written informed consent. The study was approved by the ethical committee of the Faculty of Behavioral and Movement Sciences at the Vrije Universiteit Amsterdam. The experiments were carried out in accordance with the approved guidelines.
The experiment consisted of two perceptual tasks in which we measured the perceived velocity and the perceived position of a moving target, and an action task in which participants had to intercept similar targets by tapping on them. As both tasks involved motion in the peripheral visual field, a possible ventral contribution to central motion perception 54 should not play a role. The methods were the same as in the earlier study in which we measured the young controls 40 , except for a few minor modifications that we made to make it easier for MS to participate. We doubled the diameter and increased the contrast of the fixation dot to facilitate fixation. We also used these adjusted values for BK. For MS we also left-right mirrored the stimuli so that the targets moved in his intact hemifield and were easier to intercept with his dominant left hand in the interception task.
MS and BK stood in front of a large screen (Techplex 150, acrylic rear projection screen, width: 1.25 m, height: 1.00 m; tilted backwards by 30°to make tapping more comfortable) onto which stimuli were back-projected (InFocus DepthQ Stereoscopic Projector; resolution 800 by 600 pixels; screen refresh rate: 120 Hz). They had to fixate a 3 cm diameter white dot, 16 cm to the left of the screen center for MS and to its right for BK. The targets were Gabor patches (σ = 2 cm, spatial frequency 0.29 cycles/cm) that moved at a constant velocity (40 or 50 cm/s) from right to left for MS and left to right for BK. The gratings within the patches moved at 10 cm/s relative to the Gaussian envelope that defined the patches as a whole. They either moved in the same direction as the patch or in the opposite direction than the patch.
In separate blocks, sequential two-alternative forced-choice discrimination tasks were used to obtain perceptual judgments of position and velocity relative to fixation (Fig. 1a). In the position task, participants had to judge which of two sequentially presented moving patches disappeared closer to the fixation point. Trials started with the fixation point being presented for a random period between 0.5 and 0.7 s, followed by the first of the two moving patches. After a period between 0.5 and 0.7 s, the second moving patch was presented. After these presentations, participants had to report whether the first or the second target ended closer to the fixation point. The researcher recorded the answer by pressing '1' or '2' on a computer keyboard. Once the response was entered, the next trial started. In the velocity task, the sequence of events was the same, but participants had to report which of the two moving patches was faster. In each pair of trials, one patch was the standard, with embedded motion, and the other was a comparison without such motion. The order of the two patches was chosen randomly for each trial.
There were four standard patches. Standard patches were moving with a velocity of either 40 or 50 cm/s, with embedded motion of 10 or −10 cm/s. The presentation time was 700 ms for patches moving at 40 cm/s and 540 ms for patches moving at 50 cm/s. In order to be able to directly compare the effect that the illusion had on perception and on action, we designed the perceptual tasks so that they would reveal participants' judgments 100 ms before the target reached the fixation position. We chose that moment because 100 ms is the approximate value of the visuomotor delay, the time that it takes to correct an ongoing movement based on new visual information 55,56 . We have reason to believe that how one judges the position and velocity of an object at that moment determines how one will tap on it 57 . Thus, all standard patches disappeared 100 ms before reaching the fixation position. Comparison patches never had embedded motion. In the position task, comparison patches moved at the same velocity as the standard patch but disappeared either at the same position as the standard patch, or 1, 2, or 3 cm further to the left or to the right than the standard patch. In the velocity task, the comparison patches could move at the same velocity as the standard or 10, 20, or 30 cm/s faster or more slowly, always disappearing 100 ms before they would reach the fixation point.
In the interception task we attached an infrared marker to the index finger of the dominant hand and recorded its position with an infrared camera (Optotrak 3020, Northern Digital) 40,57 . Participants started each tapping trial by placing the index finger of their dominant hand on a 3 cm diameter red disk (starting point), 20 cm below the fixation dot. After between 500 and 800 ms a target started moving (from right to left for MS; from left to right for BK). The participants had to tap on the target when it reached the fixation position (Fig. 1b). The targets were the same as the standard patches used in the perceptual tasks, supplemented with targets that had no embedded motion. The targets were visible along the whole trajectory. It took 800 ms and 640 ms for the 40 and 50 cm/s targets to reach the fixation position, respectively. Feedback was provided after each tap. If a target was hit it stopped moving. If the tap was also within the fixation dot, participants got rewarding auditory feedback. If a target was missed, it deflected away from the finger, remaining visible for 500 ms after the tap. MS performed four blocks of 120 trials, with 20 trials for each combination of velocity and embedded motion in each block in random order. BK performed one block of 120 trials. We used more blocks for MS than for BK and the young controls 40 because MS's performance was more variable and we observed that many of his trials would have to be rejected, because he frequently either did not move (probably due to his difficulties to follow the fixation instructions given his object agnosia), or made a sliding movement across the screen instead of tapping which made it unclear which position he was aiming for.
Data analysis. All analyses were performed using R statistical software 58 . To analyse the perceptual tasks we used the 'quickpsy' package 59 . Cumulative Gaussian distributions were fit to the proportion of trials in which comparison patches were judged to disappear further than the standard patch, or to move faster. Since MS's relative position judgments were unreliable (Fig. 2b), and misjudgement of position error previously had a negligible influence on the predicted error 40 , we only used the effect on velocity perception (multiplied by a visuomotor delay of 0.1 s) to predict the effect on tapping. Thus, we made a prediction of MS's and BK's errors in interception following the equation (modified from 40 ): Error_predicted = delay × error_velocity. We defined the tapping error as the horizontal distance between the tapped position and the position of the center of the patch at the moment of the tap. We removed any biases that were not related to the embedded motion by subtracting the participant's mean error when tapping on targets without embedded motion (for the same target velocity) from these values. Trials were excluded from the analyses if no tap was detected, if the tapping error was further than three times the standard deviation from the mean for that kind of patch, or if the tap was further than 1.5 cm from the fixation position. This left us with 291 of MS's 480 trials. The data of the young controls 40 were re-analysed using the same equation. In all cases, we calculated 95% confidence intervals for each point of subjective equality (PSE) using parametric bootstrapping with 1000 samples 60 . Once we had the predictions and the actual errors for each standard patch, we averaged across the two velocities (40 and 50 cm/s).
Fig. 3. 3T anatomical MRI scans of MS's brain in 2018. Images show the sagittal (a), horizontal (b), and coronal (c) planes of the brain. The left hemisphere has damage to the temporal and occipital lobes, but the parietal and frontal lobes are largely preserved. The right hemisphere damage is much more substantial. The dorsal part of both hemispheres is intact. An extensive and detailed description of MS's lesion can be found elsewhere 41 .
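As a small numerical sketch of the prediction rule above: the predicted tapping error is simply the velocity misjudgment multiplied by the assumed 0.1 s visuomotor delay. The misjudgment values below are illustrative, chosen only to be of the same order as the PSE shifts reported in the Results, not exact values from the data.

```python
# Predicted interception error from a misjudged target velocity (sketch).
VISUOMOTOR_DELAY = 0.1  # s, value assumed in the paper's prediction

def predicted_error(velocity_misjudgment_cm_s):
    """Error (cm) expected if the last VISUOMOTOR_DELAY seconds of target
    motion are extrapolated with a biased velocity estimate."""
    return VISUOMOTOR_DELAY * velocity_misjudgment_cm_s

# Illustrative misjudgments (cm/s) for embedded motion with / against the patch.
for label, dv in [("same direction", +7.0), ("opposite direction", -7.0)]:
    print(f"{label}: predicted tapping error = {predicted_error(dv):+.2f} cm")
```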
Data availability
Data supporting these findings is available 61 through the Open Science Framework following this link: https://osf.io/kj8fa/. | 2019-01-29T15:11:24.264Z | 2019-01-28T00:00:00.000 | {
"year": 2019,
"sha1": "5d1c9706bc31d487a4a11a2202f35d422d68661d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s42003-019-0293-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5d1c9706bc31d487a4a11a2202f35d422d68661d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
239629435 | pes2o/s2orc | v3-fos-license | Risk factors related to complications of the fingers and hand after arthroscopic rotator cuff repair – carpal tunnel syndrome, flexor tenosynovitis, and complex regional pain syndrome
Hypothesis/Background Complications involving the fingers and hand after arthroscopic rotator cuff repair (ARCR) include complex regional pain syndrome, carpal tunnel syndrome (CTS), and flexor tenosynovitis (TS). The aims of this study were to diagnose the complications after ARCR and investigate the risk factors that could predispose individuals to these finger and hand complications. Methods Fifty patients (50 shoulders) who underwent ARCR participated in this study. The patients’ ages ranged from 36 to 84 years (mean, 63 years). Before ARCR, we determined the disease history of the fingers and hand (CTS or TS) and subjectively assessed their symptoms using a questionnaire that included a scale ranging from 1 (no symptoms or no disability) to 5 (the worst symptoms or severest disability). ARCR was performed in all patients using suture anchors. The mean observation period after surgery was 15.5 months (range, 12-48 months). We diagnosed complications involving the fingers and hand after ARCR and investigated the preoperative, intraoperative, and postoperative risk factors that could predispose patients to these complications using univariable and multivariable analyses. Results After ARCR, 20 patients (20 hands) (40%) had complications of the fingers and hand. Among them, the diagnosis was CTS in 2 hands, TS in 15 hands, and both CTS and TS in 3 hands. None of the hands exhibited complex regional pain syndrome. These complications occurred at an average of 1.8 months (range, 0.1-4 months) after ARCR. In the 47 patients who did not have symptoms just before the operation, both univariable and multivariable analyses between the complication group (n = 17) and the no-complications group (n = 30) showed a significant difference in the presence of a past history of CTS or TS (complication frequency: past history: 88%, no past history: 25%) (P < .05) and the preoperative subjective assessment for edema of the fingers and hand (complication frequency: edema ≥ 2 points: 89%, edema < 2 points: 24%) (P < .05). There were no relationships between the other candidate intraoperative and postoperative factors and complications. Conclusion In all 20 hands with complications of the fingers and hand after ARCR, the diagnosis was CTS or TS. Complications of the fingers and hand after ARCR easily occurred in patients with a past history of CTS or TS and in patients with edema as per a subjective assessment. We speculate that the ARCR triggered the occurrence of CTS and TS postoperatively in patients who had subclinical CTS or TS before surgery.
per the Japanese Orthopedic Association score 30 have been reported. After open rotator cuff repair, the prevalence of CRPS has been reported to range from 13% to 20%. 8,10,11,31 Regarding the risk factors related to CRPS after open rotator cuff repair, female sex, 10 a long duration between the onset of pain and surgery, 31 and postoperative shoulder ROM limitations in active flexion and external rotation 31 have been reported, although these studies were written in Japanese rather than English. 10,31 In our retrospective 4 and prospective 5 studies about the complications of the fingers and hand that occur after ARCR, we observed cubital tunnel syndrome, carpal tunnel syndrome (CTS), and flexor tenosynovitis (TS), but no patients exhibited CRPS. However, no studies have investigated the factors related to complications of the fingers and hand after ARCR, including cubital tunnel syndrome, CTS, and TS. The aims of this study were to diagnose complications in the fingers and hand after ARCR and to investigate the risk factors that could predispose patients to these complications.
Materials and methods
Institutional review board approval was acquired before initiation of this study, and informed consent was obtained from all patients. From 2015 to 2019, 51 patients underwent arthroscopic repair for rotator cuff tears. We excluded one patient who had a concomitant distal radius fracture; the remaining 50 patients (50 shoulders) were prospectively examined. All of the patients were followed up for more than one year after surgery. The baseline demographic characteristics of all patients before ARCR are shown in Table I.
We examined the disease history of the fingers and hand (CTS or TS) and subjectively assessed their symptoms using a questionnaire before ARCR. Regarding the disease history of the fingers and hand (CTS or TS) on the operated side, we checked whether surgery or injection had been performed. In a previous study, 32 the symptoms of the fingers and hand were subjectively assessed using a questionnaire regarding pain, difficulty in flexion, difficulty in extension, triggering, stiffness, edema, numbness, performance, and difficulty in performing activities of daily living; the scale ranged from 1 (no symptoms or no disability) to 5 (the worst symptoms or the severest disability) for each assessment. One of the authors (M.H.), who is an orthopedic shoulder surgeon, examined all patients directly (face to face) to assess the symptoms of the fingers and hand before ARCR. For patients with symptoms such as pain or numbness, whether the symptoms were related to a diagnosis of CTS or TS was confirmed. As per the diagnostic criteria for CTS and TS used in previous studies, 21,22 the diagnoses of CTS and TS were confirmed (Table II). To subjectively assess the symptoms, the questionnaire was administered by a nurse or physical therapist who was blinded to the patient groups and did not exchange information with the doctors who examined the patients face-to-face to assess their symptoms. The preoperative baseline demographic characteristics of all patients before ARCR are shown in Table I.
After general anesthesia was induced, the patient was placed in the beach-chair position. ARCR was performed using suture anchors in all patients. After surgery, the arms were immobilized using an UltraSling (UltraSling, DONJOY, Ontario, Canada). On postoperative day 1, all patients in this consecutive series began an organized exercise program including active ROM exercises of the fingers and elbow under the supervision of a physical therapist. The patients began performing passive ROM shoulder exercises 1 to 3 weeks (mean, 1.5 weeks) after surgery. The mean observation period after surgery was 15.5 ± 4.9 months (range, 12-48 months). The intraoperative and postoperative findings of all patients are shown in Table III.
We examined the fingers and hand on the operated side for complications, including numbness, pain, edema, and movement limitations, for one year after ARCR, diagnosed them, and attempted to proactively treat them. The mechanisms underlying the development of the complications in the fingers and hand were classified into the following three categories: (1) current diseases that were present just before the operation (presence of a past history preoperatively, presence of preoperative symptoms), (2) subclinical diseases that were present before surgery (presence of a past history preoperatively, absence of preoperative symptoms), and (3) acute diseases that had newly developed after surgery (absence of a past history preoperatively, absence of preoperative symptoms). CTS and TS were diagnosed based on the aforementioned criteria. A diagnosis of CRPS was made as per the Budapest clinical diagnostic criteria. 6,7 In the first investigation, we investigated the following risk factors that could predispose patients who did not have a current disease just before the operation to complications involving the fingers and hand after ARCR: (1) preoperative factors including age, sex, job, reason for the onset of shoulder pain, duration between symptom onset and surgery (symptom duration), past history of CTS or TS, subjective assessment for the symptoms in the fingers and hand, subjective assessment for upper extremity function (Hand 20), Japanese Orthopedic Association score, ROM of the shoulder, and grip strength; (2) intraoperative factors including the use of an arm holder, tear size, subscapularis tendon tear, operation method, biceps tendon release or tenodesis, number of anchors, operation time, postoperative analgesia, and postoperative continuous intravenous infusion; and (3) postoperative factors including the frequency of use of suppositories or injections within one week after the operation, start time of shoulder ROM exercises, retears, and the Japanese Orthopedic Association score one year after ARCR; ROM of the shoulder at 3, 6, and 12 months; and stiffness at 12 months (stiffness was diagnosed in the operative shoulder when passive forward flexion of less than 120° was achieved). In the second investigation, for subjects who
did not have a current disease just before the operation, we also investigated the relationships between a preoperative disease history of the fingers and hand and preoperative subjective assessment for symptoms in the fingers or hand. In the third investigation, for patients with acute disease not including those who had a current or subclinical disease just before the operation, we investigated the associations of the same factors with complications of the fingers and hand after ARCR as in the first investigation.
In the statistical analysis of the first and third investigations, we compared the patients with complications of the fingers and hand (complication group) to those without complications (no complications group). For univariable analysis, Fisher's exact test and the Mann-Whitney U test were used, and differences at P < .05 were considered to be significant. For the factors that were significantly different as per Fisher's exact test, multivariable logistic regression models were created. In addition, for the factors that were significantly different as per the Mann-Whitney U test, cutoff values for complications were calculated using receiver operating characteristic curve analysis. To compare the cutoff values and the factors we examined, univariable analysis was performed using Fisher's exact test, and multivariable logistic regression analysis was performed.
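As an illustration of these statistical steps, here is a minimal R sketch using a hypothetical data frame and toy numbers; the variable names are assumptions, and this is not the authors' EZR analysis. It runs the univariable tests, a simple Youden-index cutoff search standing in for the receiver operating characteristic analysis, and the multivariable logistic regression with odds ratios.

# Hypothetical data: one row per patient
set.seed(1)
dat <- data.frame(
  complication = rbinom(47, 1, 0.35),          # finger/hand complication after ARCR
  past_history = rbinom(47, 1, 0.17),          # past history of CTS or TS
  edema_score  = sample(1:3, 47, replace = TRUE, prob = c(0.7, 0.2, 0.1)),
  sex          = factor(sample(c("F", "M"), 47, replace = TRUE))
)

# Univariable analyses
fisher.test(table(dat$past_history, dat$complication))   # categorical factor
wilcox.test(edema_score ~ complication, data = dat)      # Mann-Whitney U test

# Cutoff for the ordinal edema score (Youden index over candidate thresholds)
cutoffs <- sort(unique(dat$edema_score))
youden <- sapply(cutoffs, function(cut) {
  sens <- mean(dat$edema_score[dat$complication == 1] >= cut)
  spec <- mean(dat$edema_score[dat$complication == 0] <  cut)
  sens + spec - 1
})
best_cut <- cutoffs[which.max(youden)]

# Multivariable logistic regression on the candidate risk factors
fit <- glm(complication ~ past_history + I(edema_score >= best_cut),
           family = binomial, data = dat)
exp(cbind(OR = coef(fit), confint.default(fit)))          # odds ratios with Wald 95% CIs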
In the statistical analysis of the second investigation, we compared the preoperative subjective assessment of the symptoms in the fingers or hand between the patients with a preoperative disease history of the fingers and hand (past history group) and those without the history (no past history group). Fisher's exact test and the Mann-Whitney U test were used, and differences at P < .05 were considered to be significant. Calculations were performed using EZR (Saitama Medical Center, Jichi Medical University, Saitama, Japan). 9
Results
Regarding the preoperative disease history in the fingers and hand before ARCR, TS was present in 9 patients (18%), including 1 with a history of surgery involving the middle finger and 8 with a history of injections; the injections were made in the thumb in 3 patients, in the middle finger in 3, in the ring finger in 1, in the little finger in 1, and in an unclear region in 1 (one patient had a past history of TS in both the middle and little fingers). CTS was present in 4 patients (8%), including 1 with a history of surgery and 3 with a history of injections. Two patients had a past history of both TS and CTS. Preoperatively, the subjective assessment for the symptoms of the fingers and hand determined using the questionnaire (1: no symptoms or no disability to 5: the worst symptoms or the most severe disability) was 1.3 ± 0.7 points for pain, 1.1 ± 0.4 for difficulty in flexion, 1.1 ± 1.1 for difficulty in extension, 1.1 ± 0.5 for triggering, 1.4 ± 0.8 for stiffness, 1.2 ± 0.5 for edema, 1.3 ± 0.7 for numbness, 1.3 ± 0.7 for performance, and 1.3 ± 0.7 for difficulty performing activities of daily living. By directly examining (face to face) the patients, the orthopedic shoulder surgeon found that 3 patients had current symptoms just before the operation, including TS of the middle finger in 1 hand, CTS in 1 hand, and both TS of the middle finger and CTS in 1 hand. The two patients with TS in the middle finger had a past history of injection, and the one patient with CTS alone had a past history of injection.
After ARCR, 20 patients (20 hands) (40%) had complications of the fingers and hand, such as numbness, pain, edema, and movement limitations on the operated side, which occurred an average of 1.8 ± 0.9 months (range, 0.1-4 months) after ARCR. Among the 20 patients, the diagnosis was CTS in 2 hands, TS in 15 hands, and both CTS and TS in 3 hands. None of the 18 hands with TS exhibited triggering of the fingers.
None of the hands exhibited CRPS. Regarding the mechanisms of the development of CTS and TS in the 20 patients diagnosed with either condition, a current disease had been present just before the operation in 3 hands (CTS in 1, TS in 1, both CTS and TS in 1), a subclinical disease had been present before surgery in 7 hands (CTS in 1, TS in 4, both CTS and TS in 2), and an acute disease had developed after surgery in 10 hands (TS in 10) (Fig. 1). All 20 hands diagnosed with CTS or TS were treated with corticosteroid injections, and the symptoms in all 20 hands resolved completely.
In the first investigation, for the 47 patients without a current disease just before the operation (CTS in 1, TS in 1, and both CTS and TS in 1), we compared those with complications of the fingers and hand (complication group: 17 hands) to those without complications (no complications group: 30 hands) to investigate the preoperative, intraoperative, and postoperative risk factors that could predispose them to complications in the fingers and hand after ARCR (Table IV). Univariable analysis showed that female sex (complication frequency: female, 67%; male, 25%), the presence of a past history of CTS or TS (complication frequency: past history: 88%, no past history: 25%), and edema according to the preoperative subjective assessment (complication group: 1.5 points, no complications group: 1.0 points) were significant factors predisposing patients to complications (all P < .05) (Table IV). There were no relationships between the other candidate intraoperative and postoperative factors and complications. Multivariable logistic regression models for the risk factors that were significantly different showed that although female sex was not a significant factor, the odds ratio (95% confidence interval, P value) of the presence of a past history of CTS or TS was 22.9 (1.71, 307.0, P < .01) (Table V). The cutoff value for edema as per the preoperative subjective assessment for complications was 2.0 points (specificity: 0.96, sensitivity: 0.47, area under the curve: 0.71, 95% confidence interval: 0.59-0.84). The frequency of complications in the patients with a preoperative subjective assessment score of 2 or more points for edema was significantly higher than that in the patients with a score of less than 2 points (complication frequency: edema ≥ 2 points: 89%, edema < 2 points: 24%) (P < .05) (Table IV). The multivariable logistic regression models showed that the odds ratio (95% confidence interval, P value) of a preoperative subjective edema score of 2 or more points was 30.2 (1.28, 711.0, P < .05) (Table V).
In the second investigation, for the 47 patients without a current disease just before the operation (CTS in 1, TS in 1, and both CTS and TS in 1), we compared the preoperative subjective assessment for the symptoms in the fingers or hand between the patients with a preoperative disease history of the fingers and hand (past history group: 8 hands) and those without this past history (no past history group: 39 hands). The preoperative subjective scores regarding difficulty in flexion, triggering, stiffness, numbness, and performance of the patients in the past history group were significantly higher than those of the patients in the no past history group (all P < .05) (Table VI). In addition, although the difference was not significant, the score for difficulty in extension in the past history group tended to be higher than that in the no past history group (P = .0741) (Table VI). Although the difference was not significant, the frequency of having a past history tended to be higher in the patients with a preoperative subjective score of edema of 2 or more points than in those with a score of less than 2 points (past history frequency: edema ≥ 2 points: 33%, edema < 2 points: 13%) (P = .16) (Table VI).
In the third investigation, for the 40 patients who did not have a current disease or subclinical disease just before the operation, we compared the patients with complications of the fingers and hand (complication group: 10 hands) with those without complications (no complications group: 30 hands) to investigate the preoperative, intraoperative, and postoperative risk factors that could predispose them to complications in the fingers and hand after ARCR (Table VII). All 10 patients in this complication group had an acute disease that was diagnosed as TS. Univariable analysis showed a significant difference in the preoperative subjective edema scores (complication group: 1.5 points, no complications group: 1.0 points) (Table VII). Although the difference was not significant, univariable analysis showed that the frequency of complications in female patients tended to be higher than that in male patients (complication frequency: female, 50%; male, 18%) (P = .08) and that the number of postoperative injections performed within 1 week after ARCR in the complication group tended to be higher than that in the no complication group (postoperative injection frequency within 1 week: complication group: 2.1, no complication group: 0.7) (P = .06) (Table VII). There were no relationships between the other candidate preoperative, intraoperative, and postoperative factors and complications. The cutoff value for the preoperative subjective edema score was 2.0 points (specificity: 0.96, sensitivity: 0.50, area under the curve: 0.73, 95% confidence interval: 0.56-0.89). The frequency of complications in the patients with a preoperative subjective assessment score of 2 or more points for edema was significantly higher than that in the patients with a score of less than 2 points (complication frequency: edema ≥ 2 points: 83%, edema < 2 points: 15%) (P < .05) (Table VII). The multivariable logistic regression models showed that the odds ratio (95% confidence interval, P value) of a preoperative subjective edema score of 2 or more points was 29.0 (2.77, 303.0, P < .05).
Discussion
In our study, 20 patients (40%) had complications of the fingers and hand after ARCR. Among them, the diagnosis was CTS in 2 hands, TS in 15 hands, and both CTS and TS in 3 hands; none of the hands exhibited CRPS. In our retrospective 4 and prospective 5 studies, we also reported that the complications of the fingers and hand that occurred after ARCR were mainly CTS or TS and that none of the hands exhibited CRPS after ARCR. Our current and previous studies suggest that CTS or TS can easily occur after ARCR. Therefore, when complications in the fingers and hand occur after ARCR, it is important for us to primarily suspect CTS or TS.
CTS and TS are among the most common diseases treated by orthopedic doctors, and it has been reported that these two diseases frequently co-occur and are related. 21 The main symptoms of CTS are pinch impairment and numbness of the fingers and hand, 21 and the main symptom of TS is pain with triggering of the fingers and hand. 16 In addition, CTS and TS have been reported to occur after trauma or surgical intervention, resulting in the presentation of CRPS-like symptoms, including numbness, pain, edema, and movement limitations in the fingers and hand 12,15,29; therefore, it is important to distinguish CRPS from these conditions. Three main criteria have been used for a CRPS diagnosis, including the International Association for the Study of Pain criteria, 27 the criteria of the Ministry of Health, Labor, and Welfare study team for CRPS in Japan, 28 and the Budapest criteria. 6,7 The incidence of CRPS is largely influenced by the criteria used, as there is a key difference between the Budapest criteria and the other two; namely, the presence or absence of one item: "no other diagnosis better explains the signs and symptoms." The Budapest criteria include this item, whereas the International Association for the Study of Pain criteria and the criteria of the Ministry of Health, Labor, and Welfare study team for CRPS in Japan do not. In our study, we chose the Budapest criteria for CRPS; because of this additional item, we were able to diagnose CTS or TS before diagnosing CRPS, resulting in no cases of CRPS.
In our study, the mechanisms underlying symptom development of the 20 patients with complications of the fingers and hand after ARCR included a current disease in 3 hands, a subclinical disease in 7 hands, and an acute disease in 10 hands. For the 47 patients who did not have a current disease just before the operation, regarding the risk factors that could predispose patients to complications in the fingers and hand after ARCR, univariable analysis showed a significant difference between sexes, although the difference between sexes was not significant in multivariable analysis. In previous studies on CTS and TS, the frequency of onset of CTS in women was 3 times higher than that in men, 21 and the frequency of onset of TS in women was 6 times higher than that in men. 2,19,20,34 In this study, complications of the fingers and hand after ARCR occurred more readily in women. We speculate that our results are consistent with those of these previous studies on CTS and TS.
Regarding risk factors that could predispose patients to complications, both univariable and multivariable analyses showed significant effects from the presence of a past history of CTS or TS and a preoperative subjective edema score of 2 or more points. In terms of the relationships between a preoperative disease history of the fingers and hand and the preoperative subjective assessment for symptoms in the fingers or hand, the frequency of a past history tended to be higher in patients with a preoperative subjective edema score of 2 or more points than in those with lower scores, although the difference was not significant. In addition, most patients with a preoperative past history of disease had higher preoperative subjective scores for difficulty in flexion, triggering, stiffness, numbness, and performance. These results suggest that most patients with a past history of CTS or TS have preoperative symptoms in the fingers or hand and that, in those patients, complications of the fingers and hand can easily occur after ARCR. Thus, we speculate that the cases of subclinical CTS or TS that were present before surgery were aggravated by ARCR, resulting in CTS and TS occurring after ARCR.
TS was divided into 4 grades 3 (Table II). In previous studies, the most common symptom of TS has been reported to be pain with triggering of the fingers and hand, 16 whereas fixed flexion contracture TS does not present with triggering, so its diagnosis is often difficult. 29 Fixed flexion contracture TS is diagnosed when an orthopedic clinician notes tenderness of the flexor tendon distal or proximal to the A1 pulley during active or passive flexion of the fingers and when active movement limitation is observed without passive movement limitation. 29 In our study, all 10 patients with an acute disease were diagnosed with TS. None of the 18 hands with TS exhibited triggering of the fingers. These results suggest that the TS that occurs after ARCR is fixed flexion contracture TS rather than traditional TS.
In our study, in the 40 patients who did not have a current disease or subclinical disease just before the operation, we investigated the risk factors that were related to TS (in 10 hands) as an acute disease after ARCR. Both univariable and multivariable analyses showed a significant difference in the preoperative score for edema of the fingers and hand. These results suggest that patients with preoperative edema in the fingers and hand are predisposed to developing TS as an acute disease after ARCR. Among the 10 hands with TS as an acute disease, 5 (50%) had slight edema in the fingers and hand as per the subjective questionnaire. However, the face-to-face examination by an orthopedic shoulder surgeon before ARCR showed that none of the patients had edema. These results suggest that the patients with TS as an acute disease after ARCR might have had subclinical TS before ARCR that was not detected by the doctor in the preoperative examination. We speculate that subclinical TS present before surgery was aggravated by ARCR in these patients, resulting in TS presenting as an acute disease after ARCR.
In terms of the factors that were related to TS as an acute disease after ARCR in the 10 hands, the frequency of complications in the female patients tended to be higher than that in the male patients, although the difference was not significant. As we mentioned before, the frequency of TS in women has been reported to be 6 times higher than that in men. 2,19,20,34 Our results appear to be consistent with the results of these previous studies. In addition, it has been reported that TS easily occurs after distal radius fractures (DRFs). 16,25,35 While there were no significant differences in the frequency of TS after DRF between the groups treated with conservative and operative treatments, 35 a comparison of the conservative (n = 40) and operative (n = 38) treatment groups in a retrospective study showed that TS occurred after DRF in 5 hands in the operative treatment group. 25 In that study, it was unclear why the frequency of TS after DRF in the operative treatment group was high, but it was speculated that, owing to poor rehabilitation or wrist pain after the DRF operation, finger stiffness occurred, resulting in a mismatch between the flexor tendons and the surrounding retinaculum of the A1 pulley and leading to symptoms of TS. In terms of the factors that were related to TS as an acute disease after ARCR in the 10 hands, the number of postoperative injections performed within 1 week after ARCR tended to be higher in the group with finger and hand complications than in the group without these complications. We speculate that it was difficult for the patients to move their fingers owing to shoulder pain after ARCR and that the symptoms of TS ultimately developed by a mechanism similar to that of TS after DRF operations.
Table VI. Relationships between a preoperative disease history of the fingers and hand and the preoperative subjective assessment for symptoms in the fingers or hand in all subjects except the 3 patients with a current disease just before the operation. Symptoms of the fingers and hand were subjectively assessed on a scale ranging from 1 (no symptoms or no disability) to 5 (the worst symptoms or the most severe disability).

Table VII. Risk factors related to complications of the fingers and hand after ARCR in the subjects without a current or subclinical disease just before the operation. Symptoms of the fingers and hand were subjectively assessed on the same scale, ranging from 1 (no symptoms or no disability) to 5 (the worst symptoms or the severest disability).
Regarding treatment outcomes, in our study, 20 hands diagnosed with CTS or TS were treated with corticosteroid injections, and the symptoms in all hands resolved completely. These results suggest that prompt administration of corticosteroid injections could be effective in treating these complications. To prevent worsening of the symptoms of CTS or TS, we began treating the complications with corticosteroid injections immediately after the patients experienced these symptoms. Therefore, we could not determine how long these complications would have persisted after ARCR. In terms of the clinical outcomes in patients with complications of the fingers and hand after ARCR who were diagnosed with CRPS, Kobayashi et al 10 reported that the coexistence of CRPS does not affect shoulder function after surgery. However, we believe that it could be of great importance to evaluate the outcomes of hand lesions caused by complications of the fingers and hand after ARCR, including CRPS, CTS, or TS; this is because patients with complications after ARCR have difficulty performing activities of daily living if the symptoms of CTS and TS persist, even if the shoulder symptoms disappear after ARCR.
We hypothesized that intraoperative factors such as the use of an arm holder would be associated with complications in the fingers and hand after ARCR because, for example, the covering of the forearm with an elastic bandage during the use of an arm holder would lead to compression of the forearm, consequently resulting in the onset of complications. However, in our study, there were no relationships between the intraoperative factors and the complications. We speculate that this is due to the very small sample size.
In our study, 20 patients (20 hands) (40%) had complications of the fingers and hand after ARCR, with a frequency higher than that reported in previous studies on CTS, TS, and CRPS, which ranged from 0% to 35% after ARCR. 1,4,5,14,18,23 This relatively high frequency might be because we included slight symptoms as complications. By including these symptoms, we were able to diagnose and treat them to prevent them from worsening. This study had several limitations. The first is that our study had a very small sample size. Additional investigations with large study populations and comparisons between 2 groups are needed. Second, the diagnoses may have been biased because the author who diagnosed these complications is an orthopedic shoulder surgeon (the investigator). However, we do not consider this limitation to have caused a large bias because the investigator did not diagnose these patients by observing results but by following established diagnostic criteria for CTS, TS, and CRPS.
Conclusions
After ARCR, 20 patients (40%) had complications of the fingers and hand. The diagnosis was CTS in 2 hands, TS in 15 hands, and both CTS and TS in 3 hands. Complications easily occurred in patients with a past history of CTS or TS and those with edema as per the subjective assessment. We speculate that the cases of subclinical CTS or TS that were present before surgery were aggravated by ARCR, resulting in CTS and TS occurring after the procedure. | 2021-09-09T20:38:14.404Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "d4371927aeccb21442d8237128907327fbc23927",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jseint.2021.07.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "225bfb32637c03cebacaec34de4a3ed33cf06a6c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |